Vaadin 23.4.0. UI's response performance degrades after some time #19429
Hi, thanks for creating the issue.

@mcollovati, Marco, thanks for the answer.
It looks like the Atmosphere code involved is:

```java
private boolean hasMessage(String clientId, String messageId) {
    ConcurrentLinkedQueue<CacheMessage> clientQueue = messages.get(clientId);
    return clientQueue != null
            && clientQueue.parallelStream().anyMatch(m -> Objects.equals(m.getId(), messageId));
}
```

The call happens during the execution of a
That said, I can't say why the stream is stuck. It looks like the
Thanks for the investigation.

LongPollingCacheFilter is always executed, but it performs actions only if the transport is long polling.
This issue seems similar to Atmosphere/atmosphere#2262

Yes, I saw that topic.
You could maybe try to copy/paste the UUIDBroadcasterCache and rewrite the hasMessage method to perform the anyMatch on a copy of the list.
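A minimal sketch of that suggestion, with simplified stand-ins for the Atmosphere types (`SnapshotCache` and the nested `CacheMessage` here are illustrative, not the real `UUIDBroadcasterCache`): take an immutable snapshot of the client queue first, then run a plain sequential `anyMatch` over the copy, so the stream never iterates a queue that other threads are mutating.

```java
import java.util.List;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified stand-in for the broadcaster cache internals discussed above;
// the real UUIDBroadcasterCache has far more state. Only hasMessage() matters here.
class SnapshotCache {

    // Minimal stand-in for Atmosphere's CacheMessage.
    static class CacheMessage {
        private final String id;
        CacheMessage(String id) { this.id = id; }
        String getId() { return id; }
    }

    private final ConcurrentHashMap<String, ConcurrentLinkedQueue<CacheMessage>> messages =
            new ConcurrentHashMap<>();

    void addMessage(String clientId, CacheMessage message) {
        messages.computeIfAbsent(clientId, k -> new ConcurrentLinkedQueue<>()).add(message);
    }

    boolean hasMessage(String clientId, String messageId) {
        ConcurrentLinkedQueue<CacheMessage> clientQueue = messages.get(clientId);
        if (clientQueue == null) {
            return false;
        }
        // Snapshot first, then search sequentially: the copy is immutable,
        // so no other thread can modify it while anyMatch iterates.
        List<CacheMessage> snapshot = List.copyOf(clientQueue);
        return snapshot.stream().anyMatch(m -> Objects.equals(m.getId(), messageId));
    }
}
```

Whether this actually cures the hang or just moves the contention elsewhere is exactly the open question in this thread; the copy only removes the parallel iteration over a live queue.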
Well... Is it sufficient to use
I don't have an answer for this, sorry. It could help, but it could also just move the problem to a different layer.

I just thought this approach could switch parallel updates to serial.
IIRC you have to set it with a servlet init parameter, otherwise Flow will force UUIDBroadcasterCache.
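One possible way to attach such an init parameter in a Spring Boot app is to register the Vaadin servlet yourself. This is a sketch, not something confirmed in this thread: the key is Atmosphere's `org.atmosphere.cpr.broadcasterCacheClass` (the value of `ApplicationConfig.BROADCASTER_CACHE`), and `com.example.MyBroadcasterCache` is a placeholder for your own cache class.

```java
import com.vaadin.flow.spring.SpringServlet;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BroadcasterCacheConfig {

    // Sketch only: register the Vaadin servlet manually so an Atmosphere
    // init parameter can be attached to it. Whether Flow still overrides
    // the cache in your version should be verified against Flow's sources.
    @Bean
    public ServletRegistrationBean<SpringServlet> vaadinServlet(ApplicationContext context) {
        ServletRegistrationBean<SpringServlet> registration =
                new ServletRegistrationBean<>(new SpringServlet(context, true), "/*");
        registration.addInitParameter(
                "org.atmosphere.cpr.broadcasterCacheClass",
                "com.example.MyBroadcasterCache"); // placeholder class name
        registration.setLoadOnStartup(1);
        return registration;
    }
}
```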
You are right, the following code in
But I cannot find the proper way to force it to use my
My class
Could you please show me the right way?
I think you need to set the parameter on the Vaadin servlet. Anyway, I would also investigate why the push connection is downgraded to long polling.
Finally, I got it.

Now some investigations:
So, probably we have long polling after an emulated NGINX restart.

And more:
I think this is somehow expected. When pushing changes to the client, a
This is probably because, while Atmosphere is trying to push cached messages to the client, the application is constantly adding new messages, so the queue is never empty.
You can check it in the browser network tab: if the web socket channel is closed, you might see HTTP push requests happening continuously.
If you are confident that web socket will ALWAYS work for the application clients, you can set the fallback transport to web socket as well:

```java
@Bean
VaadinServiceInitListener configureWebsocketFallbackForPush() {
    return serviceEvent -> serviceEvent.getSource().addUIInitListener(uiEvent -> {
        uiEvent.getUI().getPushConfiguration().setFallbackTransport(Transport.WEBSOCKET);
    });
}
```
Thanks for the suggestion.

I would say in cases where the client may not be able to establish a web socket connection at all.
Description of the bug
Hi.
In the production environment, the UI's responsiveness degrades significantly after some time.
WildFly 23 + NGINX
Vaadin 23.4.0 + Spring Boot
Automatic PUSH via WEBSOCKET
The default session duration of 30 min is set on WildFly.
The application has rather fast (~400 ms) async UI updates (all of them called within `ui.access`).
After the initial start, everything works just fine.
But after some time (hours or even days), each UI request (opening a menu, pressing a button, etc.) starts to perform badly.
The Vaadin loading bar starts to blink and eventually gets stuck.
Refreshing the UI or opening the page in another browser doesn't help, so even NEW UIs behave this way.
Several thread dumps show blocking behavior on the same lock in some threads in the Atmosphere engine. See screenshots.
Probably there is a deadlock somewhere.
Maybe such behavior occurs after session expiration or after switching from websocket to long polling.
Actually, I don't know how to investigate this further.
In our stage and development environments everything works as expected:
timed-out sessions die, and new sessions work correctly.
Any response will be appreciated!
Expected behavior
.
Minimal reproducible example
Hard to reproduce, see screenshots
Versions