Losing context across elasticsearch requests #72
I've now confirmed that if I send two concurrent requests, the first succeeds and the second loses context. The elasticsearch client code calls process.nextTick, and after that point the context is lost.
Elasticsearch is using the http client and reusing sockets (as I believe it should for a database), so my issue may be due to this: #71
Internally, most async callbacks are wrapped with […]. Generally, as you said, these sorts of issues trace back to some form of connection pooling or user-mode queuing.
I'm using the latest elasticsearch client (v11.0.1) and am running into issues with losing context. With a small number of concurrent requests, it works fine. However, as the load increases and I issue many elasticsearch requests, the context starts getting lost.
I can easily reproduce this in a unit test by dispatching about 25 requests simultaneously and comparing a value in context before and after. The first 10 work, the next 15 don't.
I've been digging into the elasticsearch client code, trying to find why it loses context, but haven't been very successful yet. I assume it has something to do with connection pooling. Is there some way I can wrap my code and basically re-attach the known context when the elasticsearch request returns?
I've tried using my patched Q as the defer handler but am now using the callback in my test to eliminate that as a possibility.