Performance for large number of traces #247
Comments
@pavolloffay, thanks for creating this ticket. The current search implementation has some severe issues, returning full traces being one of several.
The issue of loading full traces for search results can be addressed with the current implementation of search, but we've primarily been focused on addressing the issues mentioned in jaeger#166 and related tickets. As part of that effort we intend to lighten the payload for search results.
Also related: #169 - Provide GraphQL query service.
Before moving to implement a new endpoint we should verify where the bottleneck actually is. What if the performance issue is in the trace graph? It should be easy to verify by sending a large number of "empty" traces to the UI.
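For illustration, here is a minimal sketch of how such a test batch could be generated. The field names follow the JSON shape returned by the Jaeger query API as an assumption (traceID, spans, processes); adjust them to whatever the UI actually consumes.

```ts
// Sketch: generate N near-empty traces to isolate whether the slowdown is in
// the amount of data or purely in rendering. Shape is an assumption based on
// the public Jaeger JSON format, not the UI's internal model.
interface Span {
  traceID: string;
  spanID: string;
  operationName: string;
  startTime: number; // microseconds since epoch
  duration: number;  // microseconds
  processID: string;
  references: unknown[];
  tags: unknown[];
  logs: unknown[];
}

interface Trace {
  traceID: string;
  spans: Span[];
  processes: Record<string, { serviceName: string; tags: unknown[] }>;
}

function makeEmptyTrace(i: number): Trace {
  const traceID = i.toString(16).padStart(16, '0');
  return {
    traceID,
    spans: [{
      traceID,
      spanID: traceID,
      operationName: 'noop',
      startTime: Date.now() * 1000,
      duration: 1,
      processID: 'p1',
      references: [],
      tags: [],
      logs: [],
    }],
    processes: { p1: { serviceName: 'synthetic-service', tags: [] } },
  };
}

// 1500 "empty" traces — enough to see whether rendering alone is the bottleneck.
const syntheticResults: Trace[] = Array.from({ length: 1500 }, (_, i) => makeEmptyTrace(i));
```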
Hi, has this issue been solved? If I understand it correctly, the performance issue after loading 1500+ traces is caused by the DOM generation on the Search Trace page. I have made a paging button to split up the DOM generation, and I have tested it with 10K traces, which rendered fine. Should I make a pull request, or should I create a new issue branch?
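The paging idea amounts to rendering only a slice of the result set at a time. A rough sketch, with illustrative names rather than the actual component API:

```ts
// Sketch: split a large result set into pages so the DOM for only one page
// is generated at a time. PAGE_SIZE and the helpers are illustrative.
const PAGE_SIZE = 100;

function getPage<T>(results: T[], page: number, pageSize: number = PAGE_SIZE): T[] {
  const start = page * pageSize;
  return results.slice(start, start + pageSize);
}

function pageCount<T>(results: T[], pageSize: number = PAGE_SIZE): number {
  return Math.ceil(results.length / pageSize);
}

// e.g. with 10,000 traces, only 100 result rows are rendered per page:
// render(getPage(allTraces, currentPage));
```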
The issue is that when your search results contain 1000+ traces, each result row only shows a limited set of information that could be represented by a handful of data elements, yet in order to display them the UI nonetheless needs to load complete traces and pre-process them (for example, finding the root span in order to display its name and latency).
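To make the cost concrete, here is a minimal sketch of the kind of per-trace pre-processing described above. The trace/span shape is assumed, not the UI's actual model, and it assumes each trace has at least one span.

```ts
// Sketch: derive the summary a result row needs from a full trace.
interface SpanLite {
  spanID: string;
  operationName: string;
  startTime: number;       // microseconds
  duration: number;        // microseconds
  references: unknown[];   // parent references (empty for the root span)
  processID: string;
}

interface TraceLite {
  traceID: string;
  spans: SpanLite[];
  processes: Record<string, { serviceName: string }>;
}

interface TraceSummary {
  traceID: string;
  rootServiceName: string;
  rootOperationName: string;
  durationMs: number;
  spanCount: number;
}

function summarize(trace: TraceLite): TraceSummary {
  // Root span: the span with no parent references; fall back to the earliest
  // span if the reference data is incomplete.
  const root =
    trace.spans.find(s => s.references.length === 0) ??
    trace.spans.reduce((a, b) => (a.startTime <= b.startTime ? a : b));

  // Trace duration: the time window covered by all spans, not just the root.
  const start = Math.min(...trace.spans.map(s => s.startTime));
  const end = Math.max(...trace.spans.map(s => s.startTime + s.duration));

  return {
    traceID: trace.traceID,
    rootServiceName: trace.processes[root.processID]?.serviceName ?? 'unknown',
    rootOperationName: root.operationName,
    durationMs: (end - start) / 1000,
    spanCount: trace.spans.length,
  };
}
```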
I have two ideas about this issue.
Requirement - what kind of business use case are you trying to solve?
At the moment, a query for a large number of results, e.g. limit=1500, fetches a lot of data into the UI, which makes it not responsive enough.
Q: Not sure if the issue is caused by the large amount of data or just by displaying the points in the graph at the top.
It's related to jaegertracing/jaeger#954, which asks for adding sorting logic on the server side. If the backend can fetch and sort more traces in memory than the UI can, that could serve as a workaround.
Also indirectly related to jaegertracing/jaeger#1051 and jaegertracing/jaeger#960.
Proposal - what do you suggest to solve the problem or improve the existing situation?
Instead of returning all data to the UI (e.g. traces with tags, logs, and operation names), the backend could return a summary containing only the necessary data. Then, when clicking on a trace, the UI would query the backend for the full trace.
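A rough sketch of how that flow could look from the UI side. The summary endpoint and its response shape are hypothetical and would have to be designed on the backend; only the per-trace endpoint reflects the existing query API.

```ts
// Sketch of the proposed two-step flow. /api/traces/summary is hypothetical;
// /api/traces/{traceID} is the existing query endpoint for a single trace.
interface TraceSummary {
  traceID: string;
  rootServiceName: string;
  rootOperationName: string;
  startTime: number;   // microseconds
  duration: number;    // microseconds
  spanCount: number;
}

// Step 1 (hypothetical lightweight endpoint): just enough to render result rows.
async function searchTraceSummaries(service: string, limit: number): Promise<TraceSummary[]> {
  const res = await fetch(`/api/traces/summary?service=${encodeURIComponent(service)}&limit=${limit}`);
  const body = await res.json();
  return body.data as TraceSummary[];
}

// Step 2 (existing endpoint): the full trace is only fetched when the user clicks a row.
async function fetchFullTrace(traceID: string): Promise<unknown> {
  const res = await fetch(`/api/traces/${traceID}`);
  const body = await res.json();
  return body.data[0];
}
```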