[Profiling] Query in parallel only if beneficial #103061
Conversation
With this commit we check index allocation before we do key-value lookups. To reduce latency, key-value lookups are done in parallel for multiple slices of data. However, on nodes with spinning disks, parallel accesses are harmful. Therefore, we check whether any index is allocated either to the warm or cold tier (which are usually on spinning disks) and disable parallel key-value lookups. This has improved latency on the warm tier by about 10% in our experiments.
Pinging @elastic/obs-knowledge-team (Team:obs-knowledge)
Hi @danielmitterdorfer, I've created a changelog YAML for you.
@elasticmachine merge upstream
ClusterState clusterState = clusterService.state();
List<Index> indices = resolver.resolve(clusterState, "profiling-stacktraces", responseBuilder.getStart(), responseBuilder.getEnd());
// Avoid parallelism if there is potential we are on spinning disks (frozen tier uses searchable snapshots)
int sliceCount = IndexAllocation.isAnyOnWarmOrColdTier(clusterState, indices) ? 1 : desiredSlices;
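For illustration only, here is a minimal, self-contained sketch of what the slice count controls (not the actual Elasticsearch implementation; partition and lookupSlice are hypothetical helpers): keys are split into sliceCount slices and each slice is looked up concurrently, so a slice count of 1, as forced for the warm or cold tier above, degrades to a single sequential pass. In the real code each slice would be resolved with a multi-get against the profiling indices; the sketch only models the fan-out.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SliceLookupSketch {

    // Partition the keys round-robin into sliceCount slices.
    static <T> List<List<T>> partition(List<T> keys, int sliceCount) {
        return IntStream.range(0, sliceCount)
            .mapToObj(slice -> IntStream.range(0, keys.size())
                .filter(i -> i % sliceCount == slice)
                .mapToObj(keys::get)
                .collect(Collectors.toList()))
            .collect(Collectors.toList());
    }

    // Hypothetical per-slice lookup; in the real code this would be one mget request.
    static List<String> lookupSlice(List<String> ids) {
        return ids.stream().map(id -> "doc-for-" + id).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> ids = List.of("a", "b", "c", "d", "e", "f");
        int sliceCount = 3; // would be 1 on the warm/cold tier, desiredSlices otherwise

        // Issue one asynchronous lookup per slice; a single slice means purely sequential access.
        List<CompletableFuture<List<String>>> futures = partition(ids, sliceCount).stream()
            .map(slice -> CompletableFuture.supplyAsync(() -> lookupSlice(slice)))
            .collect(Collectors.toList());

        // Collect the results of all slices once they complete.
        List<String> results = futures.stream()
            .flatMap(f -> f.join().stream())
            .collect(Collectors.toList());
        System.out.println(results);
    }
}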
Just a question: did you test with 2 slices on the warm tier as well? If yes, what was the approximate impact?
I just dug through my notes. I believe I did such a test but the superior alternative in all cases was just using a single slice (apparently I did not even bother to write down the results for anything except the default value of 16 and a single slice...).
👍
💔 Backport failed. The backport operation could not be completed due to the following error:
You can use sqren/backport to backport this manually.
I'm manually backporting this in #103144.
In order to take advantage of the inherent parallelism of modern SSDs, we slice keys and issue multiple mgets concurrently. In #103061 we introduced an additional heuristic to disable that behavior on the warm and cold tiers, which usually use spinning disks. We unintentionally also disabled the behavior on content nodes, i.e. on any clusters which do not use data tiers. With this commit we explicitly exclude content nodes from the heuristic so they can benefit from speedups due to concurrent mgets. Relates #103061
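As a rough, hypothetical sketch of the corrected heuristic (usesSpinningDiskTier is an assumed helper, not the method used in the actual fix): only nodes carrying the data_warm or data_cold role should force the single-slice path, while content-only nodes, i.e. clusters without data tiers, keep the concurrent mgets.

import java.util.Set;

public class TierHeuristicSketch {

    // Hypothetical helper: true only if the node carries a warm or cold data-tier role.
    // A content-only node (role set {"data_content"}) must not disable parallelism.
    static boolean usesSpinningDiskTier(Set<String> nodeRoles) {
        return nodeRoles.contains("data_warm") || nodeRoles.contains("data_cold");
    }

    public static void main(String[] args) {
        int desiredSlices = 16;
        Set<String> contentNode = Set.of("data_content");
        Set<String> warmNode = Set.of("data_warm");

        // Content node: keeps the parallel path (prints 16).
        System.out.println(usesSpinningDiskTier(contentNode) ? 1 : desiredSlices);
        // Warm-tier node: falls back to a single slice (prints 1).
        System.out.println(usesSpinningDiskTier(warmNode) ? 1 : desiredSlices);
    }
}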