WebsocketClient creates more connections than needed #4904
I enabled debug logging; what is interesting is that there are 50 of these logs. If I change to only 1 connect() call, I do not see these logs.
When I increase the max connections per destination to, say, 100 with `http.setMaxConnectionsPerDestination(100);`, I see the established connection count is 100, even though I only have 50 websocket client connect() calls.
@michaelkwan I'm not sure, but I wouldn't be surprised if that is the case. Can you please verify that? If the problem reproduces with fewer connections (say 10), please run it and attach the DEBUG logs.
Attached is the test run with 10 concurrent connect() calls. I see 19 established connections, and 9 of them do not get cleaned up immediately after the run exits. There is no error or exception to indicate any problems. I ran with my own Go echo server; this does not seem to happen with such a low count. However, the fact that there is no error/exception makes people think otherwise (why would new connections be created when you have perfectly good established connections?)

`2020-05-26 09:33:47.030:DBUG:oejc.AbstractConnectionPool:pool-1-thread-5: newConnection 1/1024 connections 1/-1 pending`
I did additional tests running docker containers on my machine:

2. Tested against the docker version of the Kaazing gateway (which simply echoes): reproducible (100 concurrent threads, 119 established connections) (https://hub.docker.com/r/kaazing/gateway)
3. Tested against my own Go websocket server: 100 concurrent threads, 100 established connections; re-tested with 200 threads, 277 established connections (https://github.com/gorilla/websocket)

None of them have exceptions, stack traces, or errors indicated in their debug logs.
Also, this happens with both ws and wss endpoints.
A WebSocket connection is an HTTP connection that was upgraded to the WebSocket protocol.
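For context, the upgrade handshake that turns an HTTP connection into a WebSocket connection looks like this on the wire (a minimal example, using the sample key/accept values from RFC 6455):

```http
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This is why Jetty's WebSocketClient goes through HttpClient and its connection pool in the first place: each connect() begins life as an ordinary pooled HTTP request.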
@michaelkwan thanks for the logs, it's indeed a bug. Since opening connections is slow, we open 2 but still have 8 queued requests waiting for their connection. This is handled differently in
If you want to confirm this, use:

```java
HttpClientTransportOverHTTP transport = new HttpClientTransportOverHTTP();
transport.setConnectionPoolFactory(destination -> new MultiplexConnectionPool(destination, 64, destination, 1));
HttpClient httpClient = new HttpClient(transport);
...
```

With the snippet above I expect that no more connections than expected are opened. Let us know if that's correct.
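To make the accounting problem concrete, here is a self-contained sketch (hypothetical names and simplified policy, not Jetty's actual code) of the difference between deciding how many connections to open while ignoring connects already in flight, versus subtracting them first:

```java
// Hypothetical sketch of connection-pool accounting; names and policy
// are illustrative, not Jetty's actual implementation.
public class PoolAccounting {

    // Buggy policy: open one connection per queued request, ignoring
    // connections that are already being opened (pending).
    static int connectionsToOpenBuggy(int queued, int pending, int maxConnections, int current) {
        int capacity = maxConnections - current; // 'pending' is ignored here
        return Math.max(0, Math.min(queued, capacity));
    }

    // Fixed policy: queued requests that will be served by an in-flight
    // connect must not trigger additional connects.
    static int connectionsToOpenFixed(int queued, int pending, int maxConnections, int current) {
        int needed = queued - pending;
        int capacity = maxConnections - current - pending;
        return Math.max(0, Math.min(needed, capacity));
    }

    public static void main(String[] args) {
        // 10 requests queued, 10 connects already in flight, none established yet.
        System.out.println(connectionsToOpenBuggy(10, 10, 1024, 0)); // prints 10: extra connects
        System.out.println(connectionsToOpenFixed(10, 10, 1024, 0)); // prints 0
    }
}
```

Under the buggy policy, every queued request that arrives while connects are still in flight spawns yet another connect, which matches the symptom of "more established connections than connect() calls".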
Fixed connection pool's `acquire()` methods to correctly take into account the number of queued requests. Also fixed a collateral bug in `BufferingResponseListener` - wrong calculation of the max content length. Restored `ConnectionPoolTest` that was disabled in #2540, cleaned it up, and let it run for hours without failures. Signed-off-by: Simone Bordet <[email protected]>
@michaelkwan can you try PR #4911? I believe it'll fix the issue.
@sbordet I gave it a try but it didn't work. A lower thread count seems promising; however, when I upped the concurrent connects to 500 I saw 528 established connections. See attached logs. Also, when I stop/shutdown the http client, those extra connections remain in TIME_WAIT state even after the process has completed.
Is there a reason upgraded connections are even in the Connection Pool? |
@joakime upgraded connections are not in the pool.
More fixes to the connection pool logic. Now the connection creation is conditional, triggered by explicit send() or failures. The connection creation is not triggered _after_ a send(), where we aggressively send more queued requests - or in release(), where we send queued request after a previous one was completed. Signed-off-by: Simone Bordet <[email protected]>
More fixes to the connection pool logic. Now the connection close/removal aggressively sends more requests triggering the connection creation. Signed-off-by: Simone Bordet <[email protected]>
Improved comments. Signed-off-by: Simone Bordet <[email protected]>
Updates after review: added javadocs. Signed-off-by: Simone Bordet <[email protected]>
Updates after review. Signed-off-by: Simone Bordet <[email protected]>
Updates after review. Signed-off-by: Simone Bordet <[email protected]>
Updates after review. Signed-off-by: Simone Bordet <[email protected]>
Updates after review. Signed-off-by: Simone Bordet <[email protected]>
@michaelkwan so the issue was more complex than expected. We have modified the code to try to minimize the number of connections created in case of concurrent requests. However, to be absolutely accurate (i.e. spawn 500 threads and expect no more than 500 connections) we would need to grab a coarse lock, and we decided against that for performance reasons. To constrain the number of connections precisely you can still use
We have done a best effort to be more conservative and create fewer connections than before, but a few more may still be created. You should see improvements for your use case. We would like to get feedback if you can test the latest code.
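The atomicity trade-off described above can be sketched in isolation (hypothetical code, not Jetty's): a check-then-act on an atomic counter can overshoot a cap under contention, while a compare-and-set loop makes the check and the increment a single atomic step and never overshoots:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not Jetty code: why an exact connection cap needs
// atomicity across the whole check-and-create step.
public class CapDemo {
    public static final int MAX = 50;
    public static final AtomicInteger naive = new AtomicInteger();
    public static final AtomicInteger capped = new AtomicInteger();

    // Check-then-act: get() and incrementAndGet() are individually atomic,
    // but not together, so two threads can both pass the check and
    // overshoot MAX under contention.
    public static void tryOpenNaive() {
        if (naive.get() < MAX) {
            naive.incrementAndGet();
        }
    }

    // CAS loop: the increment only succeeds against the exact value that
    // was checked, so the counter can never exceed MAX.
    public static void tryOpenCapped() {
        while (true) {
            int current = capped.get();
            if (current >= MAX) {
                return;
            }
            if (capped.compareAndSet(current, current + 1)) {
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[500];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> { tryOpenNaive(); tryOpenCapped(); });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // capped is exactly MAX here; naive may occasionally exceed it.
        System.out.println("capped=" + capped.get() + " naive=" + naive.get());
    }
}
```

A single counter can be made exact without a lock, as above; the reason the maintainers mention a coarse lock is that the pool's decision spans several pieces of state (queued requests, pending connects, open connections), so making the whole decision atomic would serialize much more than one counter.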
After merge fixes. Signed-off-by: Simone Bordet <[email protected]>
@sbordet question: with the latest fix, do you still suggest using MultiplexConnectionPool instead of DuplexConnectionPool for the WebSocket client?
@michaelkwan no, stick with the default, i.e. the DuplexConnectionPool.
Fixed MultiplexConnectionPool.acquire() to use the new boolean parameter to decide whether or not create a new connection. This fixes ConnectionPoolTest instability. Signed-off-by: Simone Bordet <[email protected]>
Fixed MaxConcurrentStreamsTest - it was always broken. The problem was that the call to super.onSettings(...) was done _after_ sending the request, so the connection pool was still configured with the default maxMultiplex=1024. Also fixed AbstractConnectionPool to avoid a second call to activate() if we are not trying to create a new connection. Signed-off-by: Simone Bordet <[email protected]>
Jetty version
9.4.28
Java version
1.8.0_242 (AdoptOpenJDK)
OS type/version
macOS
Description
I noticed the websocket client creates more connections than needed when multiple connect() calls are made concurrently.
When I run the following, I see the established connection count go to 64 (expected 50):

```shell
watch -n 0.5 "netstat -an -p tcp | awk '{ print \$4, \$5, \$6 }' | grep ESTABLISHED | grep 174.129.224 | wc -l"
```
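The concurrent-connect pattern in this report can be sketched with a self-contained harness (the actual WebSocketClient.connect() call is replaced by a placeholder counter, so this compiles without Jetty):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the reporter's test pattern: N threads released at once, each
// performing one connect. The connect itself is a placeholder so the
// harness is self-contained.
public class ConcurrentConnectHarness {

    static int runConcurrentConnects(int n) {
        CountDownLatch start = new CountDownLatch(1);
        AtomicInteger attempted = new AtomicInteger();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(() -> {
                try {
                    start.await(); // all threads fire at the same moment
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                attempted.incrementAndGet(); // placeholder for connect(socket, uri)
            });
            threads[i].start();
        }
        start.countDown(); // release all threads at once
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return attempted.get();
    }

    public static void main(String[] args) {
        System.out.println("connect() calls attempted: " + runConcurrentConnects(50));
    }
}
```

With the real connect() in place of the placeholder, 50 attempts here should eventually correspond to 50 ESTABLISHED sockets in the netstat count above, which is what makes the observed 64 suspicious.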