[Bug] Aiohttp transport creates ClientTimeout with None values, causing indefinite hangs #15659
## Bug Description

The aiohttp transport in litellm does not properly propagate timeout parameters, resulting in `ClientTimeout` being created with all `None` values. This allows requests to hang indefinitely during SSL write operations.

## Impact
🚨 **Critical Production Issue:** requests hang indefinitely even when an explicit `timeout=60` is configured.

## Root Cause
In `litellm/llms/custom_httpx/aiohttp_transport.py:261`, the issue is that `request.extensions.get("timeout", {})` returns `{}` instead of the timeout configuration, so every field of the resulting `ClientTimeout` defaults to `None`.

## Evidence from Production
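A paraphrased sketch of the failing pattern; the exact code at line 261 may differ, and the function name here is illustrative:

```python
import aiohttp
import httpx

def _build_timeout(request: httpx.Request) -> aiohttp.ClientTimeout:
    # httpx carries timeouts in request.extensions["timeout"]; when that
    # key is absent, the {} fallback kicks in.
    timeout = request.extensions.get("timeout", {})

    # Every .get() on the empty dict returns None, so aiohttp receives a
    # ClientTimeout with no limits at all and the request can hang forever.
    return aiohttp.ClientTimeout(
        sock_connect=timeout.get("connect"),
        sock_read=timeout.get("read"),
    )
```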
Stack traces from production show a 717-second (~12-minute) hang during an SSL write operation.
Full details are in Dataflow job `2025-10-16_14_44_34-5867955337894223011`.

## Reproduction
This PR includes:

- `reproduce_timeout_bug.py`: demonstrates the bug with diagnostic logging
- `demonstrate_fix.py`: shows the workaround
- `TIMEOUT_BUG_REPRODUCTION.md`: complete documentation

To reproduce:
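For illustration, a call of the shape that triggers the hang; the model name is a placeholder, and `reproduce_timeout_bug.py` in this PR remains the authoritative reproduction:

```python
import asyncio

import litellm

async def main() -> None:
    # timeout=30 is passed here, but the aiohttp transport builds its
    # ClientTimeout from an empty extensions dict, so no limit applies.
    await litellm.acompletion(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hello"}],
        timeout=30,
    )

asyncio.run(main())
```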
You will see diagnostic output showing that `ClientTimeout` is built with all `None` values.
This proves that despite passing `timeout=30`, aiohttp receives no timeout.

## Current Workaround
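The original snippet did not survive extraction; a minimal sketch of the workaround, assuming the `disable_aiohttp_transport` flag that recent litellm versions expose (treat the exact names as an assumption):

```python
import litellm

# Assumption: recent litellm versions expose this module-level flag and
# also honor the DISABLE_AIOHTTP_TRANSPORT environment variable.
litellm.disable_aiohttp_transport = True
```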
This forces litellm to use httpx's native transport, which correctly propagates timeouts.
## Proposed Fix

The aiohttp transport needs to handle the different timeout formats it can receive:
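A sketch of the direction the fix could take, not the final patch. It assumes the transport may see either an httpx-style timeout dict or an `httpx.Timeout` object; the helper name and the 600-second fallback (matching litellm's documented default request timeout) are assumptions:

```python
from typing import Any

import aiohttp
import httpx

def build_client_timeout(request: httpx.Request) -> aiohttp.ClientTimeout:
    """Translate httpx timeout extensions into an aiohttp ClientTimeout,
    falling back to a sane default instead of 'no timeout at all'."""
    raw: Any = request.extensions.get("timeout")

    if isinstance(raw, httpx.Timeout):
        # Normalize an httpx.Timeout object into its dict form:
        # {"connect": ..., "read": ..., "write": ..., "pool": ...}.
        raw = raw.as_dict()

    if not raw:
        # Extension missing or empty: apply a default rather than letting
        # every field default to None (which means no limit at all).
        return aiohttp.ClientTimeout(total=600.0)

    # Note: httpx's "write" and "pool" values have no direct aiohttp
    # equivalents; connect/read are the ones that prevent the hang.
    return aiohttp.ClientTimeout(
        sock_connect=raw.get("connect"),
        sock_read=raw.get("read"),
    )
```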
## Related Issues

## Checklist

## Questions for Maintainers
This is a critical bug causing production failures. Happy to refine the fix based on your guidance.