
querier: Adjust deduplication for counters when querying for PromQL rates. #2548

Merged: 4 commits into master on May 18, 2020

Conversation

bwplotka (Member) commented Apr 30, 2020

cc @SuperQ

Chained over: #2528

Fixes: #2401

Signed-off-by: Bartlomiej Plotka [email protected]

bwplotka (Member, Author) commented:

Works perfectly now ❤️

[image attached]

bwplotka (Member, Author) commented May 1, 2020

Let's not merge before #2528 merges.

Review thread on pkg/query/iter.go (outdated, resolved).
bwplotka (Member, Author) commented May 4, 2020

Let's merge once #2528 gets a green mark.

Diff excerpt (pkg/query/iter.go):

```go
func (it *counterDedupAdjustSeriesIterator) Next() bool {
	if it.SeriesIterator.Next() {
		_, v := it.SeriesIterator.At()
		if it.lastV > v {
```

GiedriusS (Member) commented May 4, 2020


Hmm, maybe a stupid question, but shouldn't we set it.adjust back to 0 if we jump back to a replica with a higher counter value at some point during the iteration? Without such a reset, wouldn't that lead to higher counter values than the true ones?

bwplotka (Member, Author) commented May 5, 2020


Hmm, you might be right. The adjustment has to be cumulative, but per replica, not globally; otherwise we are adjusting the true value... This might mean we need to either:

  1. Actually hook inside the dedup iterator and adjust per replica (fine), or
  2. Reset the adjustment as you propose (easier, but we lose the information that the replica was behind).

Also, if we do 1 it's a bit broken anyway, as we only know WHEN and WHAT to adjust if we are lucky enough that the replica changes (: Still, it's better than what we had before, and better than any other solution out there for this (e.g. the dedup-on-ingest that Cortex has).

cc @brancz @brian-brazil

Will think about it a bit more, thanks for spotting this!
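
For concreteness, here is a minimal sketch of the option-1 direction, under the assumption that the dedup iterator can notify a replica's iterator when it switches to it. All names here (counterAdjustIterator, adjustAtLastValue, sample) are illustrative and are not the code that ended up in this PR:

```go
package main

import "fmt"

// sample is a simplified (timestamp, value) pair.
type sample struct {
	t int64
	v float64
}

// counterAdjustIterator wraps one replica's samples. The dedup layer is assumed
// to call adjustAtLastValue whenever it switches TO this replica, passing the
// last value it emitted; if this replica's counter is behind, the deficit is
// carried forward for all further samples read from this replica.
type counterAdjustIterator struct {
	samples []sample
	i       int
	adjust  float64
}

func newCounterAdjustIterator(samples []sample) *counterAdjustIterator {
	return &counterAdjustIterator{samples: samples, i: -1}
}

func (it *counterAdjustIterator) Next() bool {
	it.i++
	return it.i < len(it.samples)
}

func (it *counterAdjustIterator) At() (int64, float64) {
	s := it.samples[it.i]
	return s.t, s.v + it.adjust
}

// adjustAtLastValue treats only a drop relative to the last emitted value as
// "this replica is behind"; a replica that is ahead gets no adjustment, which
// is what avoids the over-counting concern raised above.
func (it *counterAdjustIterator) adjustAtLastValue(lastValue float64) {
	if it.i < 0 || it.i >= len(it.samples) {
		return
	}
	if _, v := it.At(); lastValue > v {
		it.adjust += lastValue - v
	}
}

func main() {
	// Replica B lags behind replica A; the dedup layer emitted 20 from A and
	// then switches to B, whose next raw values are 19 and 27.
	b := newCounterAdjustIterator([]sample{{t: 3, v: 19}, {t: 4, v: 27}})
	b.Next()
	b.adjustAtLastValue(20)
	for ok := true; ok; ok = b.Next() {
		_, v := b.At()
		fmt.Println(v) // prints 20, then 28: no apparent counter reset for rate().
	}
}
```

The key point, relative to the concern above, is that the deficit is accumulated per replica and only when that replica's counter is actually behind the last emitted value, so switching back to the replica that is ahead does not inflate its samples.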

bwplotka (Member, Author) commented:

I fixed this problem, but changed the code a little bit; please review, @GiedriusS, and thanks for spotting this! I added test cases for this.
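
As an illustration of the kind of scenario such test cases need to cover (toy values only; these are not the PR's actual tests): with a single global, never-reset adjustment, switching back to the replica that is ahead would inflate its samples.

```go
package main

import "fmt"

func main() {
	// Merged stream after dedup switches A -> B -> A; replica B's counter lags by 1:
	//   from A: 10, 20   from B: 19, 27   back to A: 40
	raw := []float64{10, 20, 19, 27, 40}

	// Naive global adjustment: once bumped, it is applied to every later sample,
	// including replica A's, so A's final 40 comes out as 41 even though A was
	// never behind.
	adjust, last := 0.0, 0.0
	for i, v := range raw {
		adjusted := v + adjust
		if i > 0 && last > adjusted {
			adjust += last - adjusted
			adjusted = v + adjust
		}
		fmt.Println(adjusted) // 10, 20, 20, 28, 41 <- the last value is over-counted
		last = adjusted
	}
}
```

With the per-replica adjustment sketched earlier, replica A's iterator never accumulates a deficit, so its final sample stays 40; only replica B's samples are lifted from 19, 27 to 20, 28.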

bwplotka changed the base branch from issue2147-fix to master on May 11, 2020, 12:06
bwplotka (Member, Author) commented:

PTAL @GiedriusS @brancz

bwplotka (Member, Author) commented:

Let's merge on Monday. I plan to add benchmarks for this, plus consider rewriting to a heap-based approach, plus a few other fixes, but maybe let's do that in a later PR (:

bwplotka (Member, Author) commented:

Let's merge without squash (:

bwplotka merged commit d1ef032 into master on May 18, 2020
bwplotka (Member, Author) commented:

And of course I squashed ;p

Successfully merging this pull request may close these issues.

querier: Rate over deduplicated counter from many replicas can lead to double reset account.