Logs UI GraphQL data source request takes about 40 seconds #58166
Pinging @elastic/logs-metrics-ui (Team:logs-metrics-ui)
Our plan to mitigate this is #53138 (the issue mentions the Metrics UI, but it applies to the Logs UI as well).
With #58553 having been merged and released in 7.6.1, can you confirm that the situation has improved?
@weltenwort Sorry, I've been out of the country on vacation.
@weltenwort I upgraded our production cluster tonight to 7.6.1, and I can verify that the data source request takes about 3-4 seconds now.
Thanks for confirming. 4 seconds is still not great, but not too bad considering the size of your cluster. There are other improvements in the works. I'll close this for now. Feel free to file a new issue or re-open this one if you think that's in error.
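One way to compare the request time before and after an upgrade is to time it directly with curl. This is a minimal sketch, not the reporter's method: `KIBANA_URL`, the credentials, `payload.json`, and the `/api/infra/graphql` path are assumptions to adapt to your environment.

```shell
# Hypothetical sketch: time the Logs UI GraphQL source request.
# KIBANA_URL, the endpoint path, and payload.json are placeholders.
KIBANA_URL="http://localhost:5601"

elapsed=$(curl -s -o /dev/null -w '%{time_total}' \
  -X POST "$KIBANA_URL/api/infra/graphql" \
  -H 'Content-Type: application/json' \
  -H 'kbn-xsrf: true' \
  --data @payload.json)

# Classify against an arbitrary 5-second threshold.
echo "$elapsed 5" | awk '{ print ($1 > $2) ? "slow: " $1 "s" : "ok: " $1 "s" }'
```

`%{time_total}` is curl's wall-clock time for the whole transfer, so the same command run against 7.6.0 and 7.6.1 gives directly comparable numbers.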
Kibana version: 7.6.0
Elasticsearch version: 7.6.0
Server OS version: Red Hat 7.7
Browser version: Chrome 78.0.3904.70
Browser OS version: Windows 7
Original install method: RPM from Yum repo
When accessing the Logs UI page in version 7.6.0, the initial data source GraphQL request takes about 40 seconds to complete.
This is the payload of the request:
Our cluster contains about 4.8TB of data (some of that is replicas). Here is a screenshot of our cluster specs from the monitoring page:
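Store-size figures like the one above can be pulled from Elasticsearch's standard cat and stats APIs, which report size both with and without replicas. A minimal sketch; `ES_URL` and the byte count in the conversion example are placeholders, not values from this cluster:

```shell
# ES_URL is a placeholder for your cluster address.
ES_URL="http://localhost:9200"

# Per-index store size: primaries only vs. including replicas.
curl -s "$ES_URL/_cat/indices?v&h=index,pri.store.size,store.size&s=store.size:desc" | head

# Cluster-wide totals in bytes.
curl -s "$ES_URL/_cluster/stats?filter_path=indices.store.size_in_bytes,indices.docs.count"

# Convert a byte count to TB (binary units), e.g. roughly the ~4.8TB figure above.
echo 5277655813324 | awk '{ printf "%.1f TB\n", $1 / (1024 ^ 4) }'
```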
I've attached a video of the error as well, which also shows the difference between how long the Discover page takes to load versus the Logs UI. I had to zip it in order to attach it.
logs-ui-error.zip
The Logs UI has always been slower than Discover, but it wasn't nearly this slow in 7.5.1.
I really hope that the speed of the Logs UI will improve, as it's much easier to view logs in this traditional format, and log aggregation is our main use case for Elasticsearch.