Track histogram of transport handling times #80581

Merged: DaveCTurner merged 12 commits into elastic:master from DaveCTurner:2021-11-05-blocked-time-histogram on Nov 29, 2021.
Commits (12)
All commits by DaveCTurner:

e48ee01 Track histogram of transport handling times
b7b2ec8 Merge branch 'master' into 2021-11-05-blocked-time-histogram
0d29128 Use raw times for better granularity
a1ce073 Finer-grained buckets
b527e82 Bucket count & bounds are known, no need to send over the wire
c775e4e Separate inbound & outbound histograms
6a967b9 Fix docs
cd415c1 Merge branch 'master' into 2021-11-05-blocked-time-histogram
0e0ae90 Merge branch 'master' into 2021-11-05-blocked-time-histogram
8419a2d Less magic
3ef1eff Merge branch 'master' into 2021-11-05-blocked-time-histogram
90d2ef9 Merge branch 'master' of github.com:elastic/elasticsearch into 2021-1…
server/src/main/java/org/elasticsearch/common/network/HandlingTimeTracker.java (65 additions, 0 deletions)
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0 and the Server Side Public License, v 1; you may not use this file except
 * in compliance with, at your election, the Elastic License 2.0 or the Server
 * Side Public License, v 1.
 */

package org.elasticsearch.common.network;

import java.util.concurrent.atomic.LongAdder;

/**
 * Tracks how long message handling takes on a transport thread as a histogram with fixed buckets.
 */
public class HandlingTimeTracker {

    public static int[] getBucketUpperBounds() {
        int[] bounds = new int[17];
        for (int i = 0; i < bounds.length; i++) {
            bounds[i] = 1 << i;
        }
        return bounds;
    }

    private static int getBucket(long handlingTimeMillis) {
        if (handlingTimeMillis <= 0) {
            return 0;
        } else if (LAST_BUCKET_LOWER_BOUND <= handlingTimeMillis) {
            return BUCKET_COUNT - 1;
        } else {
            return Long.SIZE - Long.numberOfLeadingZeros(handlingTimeMillis);
        }
    }

    public static final int BUCKET_COUNT = getBucketUpperBounds().length + 1;

    private static final long LAST_BUCKET_LOWER_BOUND = getBucketUpperBounds()[BUCKET_COUNT - 2];

    private final LongAdder[] buckets;

    public HandlingTimeTracker() {
        buckets = new LongAdder[BUCKET_COUNT];
        for (int i = 0; i < BUCKET_COUNT; i++) {
            buckets[i] = new LongAdder();
        }
    }

    public void addHandlingTime(long handlingTimeMillis) {
        buckets[getBucket(handlingTimeMillis)].increment();
    }

    /**
     * @return An array of frequencies of handling times in buckets with upper bounds as returned by {@link #getBucketUpperBounds()}, plus
     *         an extra bucket for handling times longer than the longest upper bound.
     */
    public long[] getHistogram() {
        final long[] histogram = new long[BUCKET_COUNT];
        for (int i = 0; i < BUCKET_COUNT; i++) {
            histogram[i] = buckets[i].longValue();
        }
        return histogram;
    }

}
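To illustrate how the power-of-two bucketing behaves, here is a small standalone sketch that reproduces the bucket-selection logic from the file above. The class name `BucketDemo` and the sample values are illustrative only; the constants mirror those in `HandlingTimeTracker`.

```java
public class BucketDemo {
    static final int BOUND_COUNT = 17;                 // mirrors getBucketUpperBounds().length
    static final int BUCKET_COUNT = BOUND_COUNT + 1;   // one extra overflow bucket
    static final long LAST_BUCKET_LOWER_BOUND = 1L << (BOUND_COUNT - 1); // 65536, i.e. bounds[16]

    // Same logic as HandlingTimeTracker#getBucket: bucket index is
    // floor(log2(millis)) + 1, clamped to [0, BUCKET_COUNT - 1].
    static int getBucket(long handlingTimeMillis) {
        if (handlingTimeMillis <= 0) {
            return 0;
        } else if (LAST_BUCKET_LOWER_BOUND <= handlingTimeMillis) {
            return BUCKET_COUNT - 1;
        } else {
            return Long.SIZE - Long.numberOfLeadingZeros(handlingTimeMillis);
        }
    }

    public static void main(String[] args) {
        System.out.println(getBucket(0));      // 0: non-positive times land in the first bucket
        System.out.println(getBucket(1));      // 1
        System.out.println(getBucket(100));    // 7: since 64 <= 100 < 128
        System.out.println(getBucket(70000));  // 17: beyond the last upper bound of 65536
    }
}
```

Because the bucket is derived from the position of the highest set bit, each bucket covers a power-of-two range, which keeps the mapping branch-light and allocation-free on the hot path.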
Conversation
This may be spurious since it counts time spent waiting for the channel to become writeable (cf #77838). Should we track it separately from the inbound time tracking?
I don't think it will be possible to cleanly differentiate between waiting for the channel to become writable and time spent actually writing (I know I promised otherwise a while back, sorry about that). Certainly not easily. You could start counting when you hit a non-writable channel, but once it becomes writable again you might not be first in line to get your bytes flushed: another write may come before yours and turn the channel non-writable again, and that other write takes CPU for TLS and the like, making it hard to cleanly define how much time was spent waiting.
I really like the current number for the simple reason that it indicates overall latency on a transport thread (while the inbound handler check indicates per-message slowness). I don't see how we could cleanly identify that a channel has been non-writable for an extended period and pin that on the network.
Yeah, acknowledging that we can't easily compute exactly what we want, but I still worry that we're putting two different numbers into the one histogram. Should we have two histograms: one for inbound things (which is purely handling time) and one for outbound things (which potentially includes channel-blocked time)?
Yea that would be optimal actually. Can we do that here? :)
👍 done in c775e4e.
Another related observation is that we're not tracking outbound time for HTTP responses AFAICT. Should we? Can we?
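The inbound/outbound split agreed above can be sketched roughly as two independent trackers, one per direction. This is an illustration only: `MinimalTracker`, `SplitTrackers`, and the method names stand in for the real classes and are not taken from the PR.

```java
import java.util.concurrent.atomic.LongAdder;

// MinimalTracker is a cut-down stand-in for HandlingTimeTracker.
class MinimalTracker {
    private final LongAdder[] buckets = new LongAdder[18];

    MinimalTracker() {
        for (int i = 0; i < buckets.length; i++) {
            buckets[i] = new LongAdder();
        }
    }

    void addHandlingTime(long millis) {
        // Same bucketing as HandlingTimeTracker#getBucket.
        int b = millis <= 0 ? 0
            : millis >= (1L << 16) ? 17
            : Long.SIZE - Long.numberOfLeadingZeros(millis);
        buckets[b].increment();
    }

    long[] getHistogram() {
        long[] h = new long[buckets.length];
        for (int i = 0; i < h.length; i++) {
            h[i] = buckets[i].longValue();
        }
        return h;
    }
}

public class SplitTrackers {
    // One tracker per direction, so outbound channel-blocked time cannot
    // pollute the pure inbound handling-time distribution.
    final MinimalTracker inbound = new MinimalTracker();
    final MinimalTracker outbound = new MinimalTracker();

    void onInboundHandled(long millis) { inbound.addHandlingTime(millis); }
    void onOutboundSent(long millis) { outbound.addHandlingTime(millis); }

    public static void main(String[] args) {
        SplitTrackers stats = new SplitTrackers();
        stats.onInboundHandled(3);   // lands in inbound bucket 2
        stats.onOutboundSent(500);   // lands in outbound bucket 9
        System.out.println(stats.inbound.getHistogram()[2]);   // 1
        System.out.println(stats.outbound.getHistogram()[9]);  // 1
    }
}
```

Keeping the two histograms structurally identical means the stats endpoint can serialize both with the same fixed, well-known bucket bounds, which is what makes it unnecessary to send the bounds over the wire.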
We could. I guess it would be nice to have, but probably not all that valuable: the distribution on the outbound side for HTTP will be the same as that for sending transport messages. For REST outbound I'd almost rather have the distribution of serialization times, because those have historically been the problem, and that would give us more information when hunting for the cause of a bad distribution of outbound handling times.
IMO that'd be a worthwhile follow-up.