Fixed --store.grpc.series-sample-limit #2858
Conversation
Looks perfect to me modulo minor comments. Definitely good to merge, but it would be nice to add a TODO to maybe convert this to chunk-bytes limiting. It's very normal in the current TSDB state for chunks to be super small ):
Thanks!
@@ -722,12 +721,16 @@ func blockSeries(
	s.refs = append(s.refs, meta.Ref)
}
if len(s.chks) > 0 {
	if err := chunksLimiter.Reserve(uint64(len(s.chks))); err != nil {
We could potentially limit chunk bytes instead TBH, but this is good for a start
Or both. In Cortex we would need by count :)
Opened an issue: #2861
if reserved := atomic.AddUint64(&l.reserved, num); reserved > l.limit {
	// We need to protect from the counter being incremented twice due to concurrency
	// while calling Reserve().
	l.failedOnce.Do(l.failedCounter.Inc)
👍
Thanks @bwplotka for your feedback. I believe I've addressed the comments and fixed the doc.
Force-pushed from 745d1de to 61d8981
Perfect, super nice change! Extremely useful bugfix for users! 💪
Looks like my manual rebase through the GitHub UI (there were conflicts) was not successful… 😢 I tried ;p
Signed-off-by: Marco Pracucci <[email protected]>
Force-pushed from 8982200 to bc3fcd2
Thanks!
Hi @pracucci, I'm wondering whether we have any metrics about loaded/fetched samples per query, so we can determine whether a query was rejected by this series limit or for some other reason.
Changes
In this PR I've changed the `--store.grpc.series-sample-limit` implementation to apply the limit to the sum of samples fetched across all queried blocks, instead of applying the limit to each block individually. A few notes:
- Introduced a `ChunksLimiterFactory` so that a static limit is not passed to `NewBucketStore`, because I would like to use it in Cortex too, and in Cortex we have dynamic limits (limits can be reloaded live, without restarting the process).

Fixes #2845.
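The factory approach described above can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not the exact Thanos API: the store receives a factory rather than a fixed number, so the limit can be re-read from a live-reloadable config source each time a limiter is created.

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// ChunksLimiter and ChunksLimiterFactory are assumed shapes for this sketch.
type ChunksLimiter interface {
	Reserve(num uint64) error
}

// ChunksLimiterFactory creates one limiter per incoming request, allowing
// the limit to be resolved lazily (e.g. from reloadable config in Cortex).
type ChunksLimiterFactory func() ChunksLimiter

type staticLimiter struct {
	limit    uint64
	reserved uint64
}

func (l *staticLimiter) Reserve(num uint64) error {
	if l.limit > 0 && atomic.AddUint64(&l.reserved, num) > l.limit {
		return errors.New("chunks limit exceeded")
	}
	return nil
}

// NewStaticChunksLimiterFactory captures a fixed limit; a dynamic variant
// would read the limit from a config source on each call instead.
func NewStaticChunksLimiterFactory(limit uint64) ChunksLimiterFactory {
	return func() ChunksLimiter {
		return &staticLimiter{limit: limit}
	}
}

func main() {
	factory := NewStaticChunksLimiterFactory(5)
	l := factory() // one limiter per Series() request
	fmt.Println(l.Reserve(3)) // within the limit: nil
	fmt.Println(l.Reserve(3)) // 6 > 5: error
}
```

Because the factory is invoked per request, each request gets a fresh reservation counter while still sharing whatever limit source the factory closes over.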
Verification
Existing + new unit tests.