Limit queried chunks by bytes #3089
Conversation
Force-pushed from ec3275b to 0a9d125.
pkg/store/bucket.go (Outdated)

@@ -769,6 +774,10 @@ func blockSeries(
		return nil, nil, errors.Wrap(err, "preload chunks")
	}

	if err := chunksSizeLimiter.Reserve(uint64(chunkr.stats.seriesFetchedSizeSum)); err != nil {
As per your question:

> Another question: is it a proper way to limit based on the chunk reader stats? https://github.com/thanos-io/thanos/pull/3089/files#diff-a75f50a9f5bf5b21a862e4e7c6bd1576R777 Or is it better to pass the limiter to the preload function?

That's a good one. It really depends on one thing: do we want to return partial data if the limit is exceeded, or fail everything? (:
My take on that is that a user might be surprised to find out that the returned response was partial. It might make sense to create a config option for that and apply it to both kinds of limiting: by number of samples and by byte size.
Oh yes, we have warnings for that. Everything with a warning is assumed partial; the querier then decides whether it's an error or not via the partialResponseStrategy request option (:
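The limiter pattern under discussion can be sketched in a few lines. This is a hypothetical, stdlib-only illustration (the real Thanos limiter lives in pkg/store/limiter.go and wires in a Prometheus counter): `Reserve` accumulates the bytes fetched so far for one Series call and errors once the budget is exceeded, with 0 meaning "no limit".

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// byteLimiter is a hypothetical sketch of a size-based limiter. One
// instance is created per Series call; Reserve adds the bytes fetched
// so far and fails the call once the budget is exceeded.
type byteLimiter struct {
	limit    uint64 // 0 means no limit
	reserved uint64 // updated atomically across concurrent block fetches
}

func (l *byteLimiter) Reserve(n uint64) error {
	if l.limit == 0 {
		return nil
	}
	if total := atomic.AddUint64(&l.reserved, n); total > l.limit {
		return errors.New("exceeded chunks size limit")
	}
	return nil
}

func main() {
	l := &byteLimiter{limit: 100}
	fmt.Println(l.Reserve(60)) // within budget
	fmt.Println(l.Reserve(60)) // cumulative 120 > 100: error
}
```

Because `Reserve` is called after the chunks are already preloaded (it reads `seriesFetchedSizeSum` from the reader stats), it fails the call rather than preventing the fetch, which is exactly the "partial data vs. fail everything" trade-off raised above.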
Force-pushed from dce3725 to 666479a.
To the reviewer:
Force-pushed from 666479a to b0f792e.
Force-pushed from b0f792e to ae47944.
cmd/thanos/store.go (Outdated)

@@ -68,6 +68,10 @@ func registerStore(app *extkingpin.App) {
		"Maximum amount of samples returned via a single Series call. The Series call fails if this limit is exceeded. 0 means no limit. NOTE: For efficiency the limit is internally implemented as 'chunks limit' considering each chunk contains 120 samples (it's the max number of samples each chunk can contain), so the actual number of samples might be lower, even though the maximum could be hit.").
		Default("0").Uint()

	maxSampleSize := cmd.Flag("store.grpc.series-sample-size-limit",
		"Maximum size of samples returned via a single Series call. The Series call fails if this limit is exceeded. 0 means no limit.").
I would clarify what the unit is here. Bytes?
)

chunksSizeLimiter, err = chunksSizeLimiter.NewWithFailedCounterFrom(chunksLimiter)
Could you clarify why you need NewWithFailedCounterFrom()?
@pracucci thanks for reviewing this. The chunks limiter and the chunks size limiter should share a common failedOnce to avoid concurrent updates of the failedCounter.
More info on that: at the moment the ChunksLimit and ChunksSizeLimit are created, the store does not yet know which metrics will be used. The metrics are part of the BucketStore. That is why the store operates on limiter factories, and the bucket creates the new chunk size limiter with the sync.Once shared between limiters. Hope that helps.
I think NewWithFailedCounterFrom() is a bit overengineered. There's no problem updating a metric concurrently, but we may want to distinguish the reason why a query was dropped. An option may be adding a "reason" label to queriesDropped and passing s.metrics.queriesDropped.WithLabelValues("<reason>") to both factories.
Signed-off-by: Max Neverov <[email protected]>
Force-pushed from 0aec4b0 to abdb25f.
…ne; amend the new config description Signed-off-by: Max Neverov <[email protected]>
Force-pushed from abdb25f to c24c072.
Is this still relevant? If so, what is blocking it? Is there anything you can do to help move it forward? This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Hey @mneverov, shall we reopen this? Do you plan to work on it?
hi @kakkoyun,
Sorry for the extremely slow review. Overall the changes LGTM, but I left a couple of comments.
// NewBucketStoreWithOptions creates a new bucket backed store that implements the store API against
// an object store bucket. It is optimized to work against high latency backends.
func NewBucketStoreWithOptions(
hi @pracucci,
Fixes: #2861
Signed-off-by: Max Neverov [email protected]

Changes

Add the possibility to limit queried chunks by bytes via the store.grpc.series-sample-size-limit flag.