The async_fetch module provides the underlying async HTTP infrastructure for parallel data fetching. It uses niquests (a modern fork of requests) with HTTP/2 multiplexing for optimal performance.

Most users don’t need to interact with this module directly. The Session class handles async fetching automatically through methods like laps_async() and get_fastest_laps_tels_async().

Core Functions
fetch_json_async
Parameters:
- year: Season year (e.g., 2025)
- gp: Grand Prix name (e.g., “Monaco”)
- session: Session name (e.g., “Race”, “Qualifying”)
- path: Path to JSON file (e.g., “laps.json”, “drivers.json”)
- max_retries: Maximum retry attempts. If None, uses global config
- timeout: Request timeout in seconds. If None, uses global config
- use_cache: If True, read from cache before network fetch
- write_cache: If True, persist successful responses to cache
- validate_payload: If True, run payload validation before returning data
Returns:
- Parsed JSON data as a dictionary (never None; raises on error)
Raises:
- NetworkError: If the network request fails after all retries
- DataNotFoundError: If the data doesn’t exist (404)
- InvalidDataError: If JSON parsing or validation fails
fetch_multiple_async
Parameters:
- requests: List of (year, gp, session, path) tuples to fetch
- use_cache: If True, read from cache before network fetch
- write_cache: If True, persist successful responses to cache
- validate_payload: If True, run payload validation before returning data
- max_retries: Maximum retry attempts per request. If None, uses global config
- timeout: Request timeout in seconds per request. If None, uses global config
- max_concurrent_requests: Maximum concurrent requests. If None, uses global config (default: 20)
Returns:
- List of parsed JSON dictionaries, or None for failed requests, in the same order as requests
Notes:
- Does not raise exceptions. Failed requests return None and are logged as warnings
- DataNotFoundError (404) is silently converted to None
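The None-on-failure, order-preserving behavior can be sketched with asyncio.gather. The per-request fetch here is a stub that fails for one path to show the substitution; the real function fetches from the CDN:

```python
import asyncio

async def fetch_one(req):
    """Stub fetch: fails for one request to demonstrate None-substitution."""
    year, gp, session, path = req
    if path == "missing.json":
        raise FileNotFoundError(path)  # stands in for DataNotFoundError
    return {"path": path}

async def fetch_multiple(requests):
    results = await asyncio.gather(
        *(fetch_one(r) for r in requests), return_exceptions=True
    )
    # Failed requests become None, in the same order as the input list.
    return [None if isinstance(r, Exception) else r for r in results]

reqs = [
    (2025, "Monaco", "Race", "laps.json"),
    (2025, "Monaco", "Race", "missing.json"),
    (2025, "Monaco", "Race", "drivers.json"),
]
out = asyncio.run(fetch_multiple(reqs))
print(out)  # [{'path': 'laps.json'}, None, {'path': 'drivers.json'}]
```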
fetch_with_rate_limit
Parameters:
- coro_func: Async function to execute
- *args: Positional arguments for coro_func
- semaphore: Optional semaphore for rate limiting. If None, one is created based on the max_concurrent_requests config
- **kwargs: Keyword arguments for coro_func
Returns:
- Result from coro_func execution
Raises:
- Any exception raised by coro_func
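A minimal sketch of this wrapper pattern, assuming the documented signature (the default limit of 20 mirrors the stated global config; the work coroutine is illustrative):

```python
import asyncio

async def fetch_with_rate_limit(coro_func, *args, semaphore=None, **kwargs):
    """Run coro_func(*args, **kwargs) while holding a concurrency slot."""
    if semaphore is None:
        semaphore = asyncio.Semaphore(20)  # stand-in for the global config value
    async with semaphore:
        return await coro_func(*args, **kwargs)

async def demo():
    sem = asyncio.Semaphore(2)             # custom limit shared across calls
    async def work(n, scale=1):
        await asyncio.sleep(0)
        return n * scale
    return await asyncio.gather(
        *(fetch_with_rate_limit(work, n, semaphore=sem, scale=10) for n in range(5))
    )

print(asyncio.run(demo()))  # [0, 10, 20, 30, 40]
```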
This is a utility function for custom concurrency control. Most users should use fetch_multiple_async(), which handles rate limiting automatically.

Resource Management
cleanup_resources
close_session
close_executor
Performance Characteristics
The async fetch system is optimized for:
- HTTP/2 multiplexing: Single connection for multiple requests
- Connection pooling: Reuses connections across requests
- Parallel JSON parsing: Offloads JSON parsing to thread pool
- Rate limiting: Prevents overwhelming CDN with concurrent requests
- Automatic retries: Exponential backoff with jitter
Typical timings:
- Single lap fetch: ~50-100ms
- 20 driver laps in parallel: ~200-300ms (vs 1-2s sequential)
- Full session telemetry (20 drivers × 50 laps): ~10-15s (vs 50-100s sequential)
Configuration
Async fetch behavior is controlled by global configuration (e.g., max_retries, timeout, and max_concurrent_requests).

Error Handling
The async fetch system uses a hierarchy of exceptions: NetworkError, DataNotFoundError, and InvalidDataError.

Advanced Usage
Custom concurrency limits
Disable caching for fresh data
Disable validation for performance
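The three recipes above can be sketched against the documented signatures. The fetch function here is a stub that records the flags it was called with instead of hitting the network:

```python
import asyncio

calls = []

async def fetch_json_async(year, gp, session, path, *, use_cache=True,
                           write_cache=True, validate_payload=True):
    """Stub that records keyword flags instead of fetching."""
    calls.append({"use_cache": use_cache, "validate_payload": validate_payload})
    return {"path": path}

async def demo():
    # Disable caching for fresh data
    await fetch_json_async(2025, "Monaco", "Race", "laps.json", use_cache=False)
    # Disable validation for performance
    await fetch_json_async(2025, "Monaco", "Race", "laps.json", validate_payload=False)
    # Custom concurrency limit, per the fetch_multiple_async signature:
    # await fetch_multiple_async(requests, max_concurrent_requests=5)

asyncio.run(demo())
```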
Implementation Details
HTTP/2 Multiplexing
The async fetch system uses niquests with HTTP/2 support, allowing multiple requests to share a single TCP connection. This dramatically reduces latency for parallel requests.
JSON Parsing Strategy
JSON parsing is offloaded to a thread pool executor (or an optional process pool for non-telemetry data) to avoid blocking the async event loop. The system uses orjson for fast parsing and can parse multiple responses in parallel. Telemetry payloads use thread-based parsing to avoid cross-process IPC overhead.
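The offloading pattern can be sketched with the standard-library json module standing in for orjson (which the real system uses) and run_in_executor for the thread pool:

```python
import asyncio
import json
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

async def parse_json_async(raw: bytes) -> dict:
    """Parse off the event loop so large payloads don't block other tasks."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_pool, json.loads, raw)

async def demo():
    payloads = [b'{"lap": 1}', b'{"lap": 2}', b'{"lap": 3}']
    # Parse multiple responses in parallel on the thread pool.
    return await asyncio.gather(*(parse_json_async(p) for p in payloads))

print(asyncio.run(demo()))  # [{'lap': 1}, {'lap': 2}, {'lap': 3}]
```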
Rate Limiting
A semaphore-based rate limiter ensures no more than max_concurrent_requests requests are in flight simultaneously. This prevents overwhelming the CDN and triggering rate limits. The default is 20 concurrent requests, configurable via global config.
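The guarantee can be checked directly: with a semaphore of 3 and 10 tasks launched at once, a peak-concurrency counter never exceeds 3 (a sketch; the real limiter wraps CDN requests):

```python
import asyncio

async def demo(limit=3, tasks=10):
    sem = asyncio.Semaphore(limit)
    in_flight = 0
    peak = 0

    async def request(i):
        nonlocal in_flight, peak
        async with sem:               # at most `limit` holders at a time
            in_flight += 1
            peak = max(peak, in_flight)
            await asyncio.sleep(0.01) # stands in for the network round trip
            in_flight -= 1
        return i

    await asyncio.gather(*(request(i) for i in range(tasks)))
    return peak

print(asyncio.run(demo()))  # 3
```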
Retry Logic
Failed requests are retried with exponential backoff and jitter. The backoff delay is min(backoff_factor^attempt + random(0, jitter_max), max_delay) seconds, where the defaults are backoff_factor=2.0, jitter_max=1.0, and max_delay=60.0. The system also includes circuit-breaker logic to prevent cascading failures and special handling for connection-pool exhaustion.
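The stated formula translates directly, using the defaults from the text (random jitter makes exact delays nondeterministic, so the example seeds it for a reproducible printout):

```python
import random

def backoff_delay(attempt, backoff_factor=2.0, jitter_max=1.0, max_delay=60.0):
    """min(backoff_factor**attempt + random(0, jitter_max), max_delay) seconds."""
    return min(backoff_factor ** attempt + random.uniform(0.0, jitter_max), max_delay)

random.seed(0)
for attempt in range(8):
    print(f"attempt {attempt}: {backoff_delay(attempt):.2f}s")
# Delays grow roughly 1-2s, 2-3s, 4-5s, ... and hit the 60s cap from attempt 6 on.
```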