Failing to detect lost connections (asyncio only) #95
Could you break this down to the simplest possible reproduction? Which URL, and what kind of request? Does it reproduce on a range of URLs, or mainly on one specific site? Can you reproduce it independently of Starlette, and demonstrate it standalone? Thanks so much!
Could be that it's caused by #5 - will need to replicate, and then do some digging.
I'm seeing this bug too. Here's a recent traceback (from my Heroku logs):
Here's my code that's making the API call: https://github.com/simonw/datasette-auth-github/blob/b4bbb0ecbc66a67fe6b57dc9c47870fe8fa6c0fc/datasette_auth_github/github_auth.py#L188-L199

```python
# Exchange that code for a token
github_response = (
    await self.github_api_client.post(
        "https://github.com/login/oauth/access_token",
        data={
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "code": qs["code"],
        },
    )
).text
parsed = dict(parse_qsl(github_response))
```

I'm going to try instantiating a new client for each request instead.
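For reference, a minimal sketch of that per-request-client workaround, using the modern httpx context-manager API; the function name and parameters are hypothetical, not the actual datasette-auth-github code:

```python
from urllib.parse import parse_qsl

import httpx

async def exchange_code_for_token(client_id, client_secret, code):
    # A short-lived client per exchange: every call gets a fresh connection
    # rather than a possibly stale keep-alive connection from a shared pool.
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://github.com/login/oauth/access_token",
            data={
                "client_id": client_id,
                "client_secret": client_secret,
                "code": code,
            },
        )
    return dict(parse_qsl(response.text))
```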
I changed my code to stop re-using the same client between requests.
Gotcha - thanks for the report. Seems pretty likely it'd be occurring when attempting to use a connection that's disconnected, e.g. due to keep-alive timing out on the server side.
Alrighty. I could recreate this issue by creating a client, issuing a request, then waiting 5+ mins, and making another request. Version 0.6.7 properly deals with the case where an expired connection is obtained from the connection pool. In my testing this resolves the issue. 👍
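A rough reproduction along those lines, assuming the current httpx client API (`AsyncClient.aclose()` may differ in older releases); the URL is a placeholder:

```python
import asyncio

import httpx

async def reproduce():
    client = httpx.AsyncClient()
    await client.get("https://example.org/")  # connection returns to the pool
    await asyncio.sleep(5 * 60)               # outlive the server's keep-alive timeout
    await client.get("https://example.org/")  # pre-0.6.7, this could reuse the dead connection
    await client.aclose()

asyncio.run(reproduce())
```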
I seem to be having the same sort of issue, but I am running 0.7.1, using:
I've moved to creating a new AsyncClient() inside of the route for now; am I missing something, or is this the same issue as above?
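A sketch of that per-route workaround in Starlette; the route, upstream URL, and response shape here are all hypothetical:

```python
import httpx
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

async def fetch(request):
    # A brand-new client per request: nothing is pooled, so nothing can go stale.
    async with httpx.AsyncClient() as client:
        upstream = await client.get("https://example.org/api")  # placeholder upstream URL
    return JSONResponse({"upstream_status": upstream.status_code})

app = Starlette(routes=[Route("/fetch", fetch)])
```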
I'm also getting this issue with 0.7.6 with infrequent calls to an elasticsearch server:
Right - I don't think that the issue here is that we're losing the connection; I think it's that we're not detecting that we've lost the connection. When we acquire a connection from the pool, we check whether it has been closed (httpx/httpx/dispatch/connection_pool.py, line 195 at 12d00b2).
However, it's been noted that the "is closed" detection on asyncio may be flaky: see #346 (comment).
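To illustrate the acquisition pattern being described (this is not httpx's actual implementation, just a hypothetical sketch of the check that goes wrong when the is-closed signal misreports on asyncio):

```python
class ConnectionPoolSketch:
    """Illustration only: a hypothetical pool, not httpx's real code."""

    def __init__(self):
        self.keepalive_connections = {}  # origin -> list of idle connections

    async def acquire(self, origin):
        # Prefer an idle keep-alive connection, but only hand it out if it
        # still reports itself open; on asyncio that report can be wrong.
        idle = self.keepalive_connections.get(origin, [])
        while idle:
            connection = idle.pop()
            if not connection.is_closed:
                return connection  # looks alive, so reuse it
            # otherwise drop the dead connection and keep looking
        return await self.open_new_connection(origin)

    async def open_new_connection(self, origin):
        raise NotImplementedError  # stands in for the real dialing logic
```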
There are potential mitigations here, such as having a keep-alive time on connections and assuming that they're expired once the keep-alive has passed, but it'd be good to have a clear handle on whether we've got to the root cause first.
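A sketch of that keep-alive-expiry mitigation; the class, field names, and the 5-second window are assumptions for illustration only:

```python
import time

class ExpiringConnection:
    # Assumed keep-alive window; real servers vary.
    KEEPALIVE_SECONDS = 5.0

    def __init__(self, raw_connection):
        self.raw = raw_connection
        self.idle_since = time.monotonic()

    def mark_idle(self):
        # Stamp the connection each time it is returned to the pool.
        self.idle_since = time.monotonic()

    def is_expired(self):
        # Treat the connection as dead once it has sat idle longer than the
        # keep-alive window, regardless of what the socket claims.
        return time.monotonic() - self.idle_since > self.KEEPALIVE_SECONDS
```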
How can I fix this problem? Maybe by retrying the request if I catch ConnectionResetError?
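A sketch of that retry idea; get_with_retry is a hypothetical helper, and it only papers over the stale-connection problem rather than fixing the underlying detection:

```python
async def get_with_retry(client, url, attempts=2):
    # Retry on ConnectionResetError, on the theory that the failed attempt
    # merely consumed a stale pooled connection and the retry gets a fresh one.
    for attempt in range(attempts):
        try:
            return await client.get(url)
        except ConnectionResetError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
```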
@daxartio I'm looking at resolving this issue. |
1 of 5 requests I initiated would raise ConnectionResetError('Connection lost') when calling AsyncClient from inside Starlette and Uvicorn. In comparison, urllib, requests, and Tornado's AsyncClient work as expected.
Stacktrace: