
compactor: Some of the blocks are not compacted, missed. #3721

Closed
qube222pl opened this issue Jan 14, 2021 · 6 comments
qube222pl commented Jan 14, 2021

Thanos, Prometheus and Golang version used:
Thanos version 0.17.1, 0.17.2
Prometheus 2.20.1
Golang: not specified

Object Storage Provider: S3 MinIO

What happened:
Some of the blocks are not compacted; they are missed.

We are using Thanos, and now we are trying to extend functionality and add S3 storage (specifically MinIO).
We have about 60 days of data in Prometheus and we want to move it to S3, so we started thanos-sidecar with the proper configuration and all of the blocks were copied from Prometheus to S3.
The upload to S3 finished without any problems, as you can see on the graph.
[screenshot]

The problem is that when we start thanos-compact, for some reason some of the blocks are missed by compaction. It looks like this on screen:
[screenshot]

During compaction, when we watch the logs from the compactor, we see that the compactor takes 5 raw blocks for compaction and, when it finishes them, takes another 5, but one is missed and left untouched. This untouched block is 56h long and is exactly the same as all of the other blocks.
Because of this 56h block, only 5m downsampling is possible.
Normally we planned to keep only the 1h blocks and delete the 5m and raw blocks after a short time.
No errors were found in the thanos-compact, thanos-sidecar, or thanos-store logs, only warnings.

Maybe someone has some hints or clues, or has faced a similar problem and can give some advice on what could be wrong.
I repeated the whole procedure 3 times, deleting all data from S3, uploading, and compacting again, but the result is always the same: the same blocks are missed.
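For context (this is an outside observation, not from the thread): the described pattern, where groups of aligned blocks are compacted while one block is repeatedly left untouched, is what a range-based compaction planner produces when a block's time span crosses a planning-window boundary. Below is a simplified, illustrative sketch of that idea; the function, the grouping rule, and the 8h window are assumptions for illustration, not Thanos's actual planner code.

```python
# Illustrative sketch (NOT Thanos's actual planner): group blocks into
# time windows aligned to a fixed range. A block whose span crosses a
# window boundary fits in no group and is skipped, so it is never
# compacted -- one plausible explanation for the behavior reported above.

def plan_groups(blocks, range_ms):
    """Group (min_time, max_time) blocks into windows of width range_ms.

    A block belongs to window n only if it lies entirely inside
    [n*range_ms, (n+1)*range_ms); boundary-crossing blocks are skipped.
    """
    groups = {}
    skipped = []
    for min_t, max_t in blocks:
        window = min_t // range_ms
        if max_t <= (window + 1) * range_ms:
            groups.setdefault(window, []).append((min_t, max_t))
        else:
            skipped.append((min_t, max_t))  # crosses a window boundary
    return groups, skipped

HOUR = 3_600_000  # milliseconds

# Five aligned 2h blocks, plus one block spanning the 8h window boundary.
blocks = [(0, 2 * HOUR), (2 * HOUR, 4 * HOUR), (4 * HOUR, 6 * HOUR),
          (6 * HOUR, 8 * HOUR), (8 * HOUR, 10 * HOUR),
          (7 * HOUR, 9 * HOUR)]  # misaligned: crosses the 8h boundary

groups, skipped = plan_groups(blocks, range_ms=8 * HOUR)
print(sorted(groups))  # -> [0, 1]
print(skipped)         # -> [(25200000, 32400000)]  the untouched block
```

In this toy model, the misaligned block stays in `skipped` on every planning pass, mirroring the "one block is missed and left untouched" symptom.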

@mveroone

I have the exact same issue.
Most settings are default. (retention, block size, version, storage)

kakkoyun changed the title from "Some of the blocks are not compacted, missed." to "compactor: Some of the blocks are not compacted, missed." Feb 12, 2021
@kakkoyun (Member)

Hey @qube222pl, could you please share your compactor configuration?


qube222pl commented Feb 12, 2021

Hi @kakkoyun
Our current configuration:
--http-address=0.0.0.0:10903 --data-dir /srv/data/compact
--objstore.config-file=/etc/prometheus/thanos/bucket.yml
--retention.resolution-raw=0d --retention.resolution-5m=0d --retention.resolution-1h=0d
--consistency-delay 0s --delete-delay=2h
--wait --wait-interval=5m

All fresh blocks from Prometheus are compacted correctly and span 14 days, but the older ones span 12 days + 2 days.
We keep all data in Prometheus, so we can delete all data from S3 (MinIO) and start compaction from zero. We are using version 0.17.2; not tested on 0.18.0.
[screenshot]
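The 12d + 2d split above can be verified directly from each block's metadata: every TSDB block directory contains a `meta.json` whose `minTime`/`maxTime` fields are Unix milliseconds, so the block's span is just their difference. Here is a small, hedged sketch of that check; the block directory and ULID-style name are synthetic, so point `base_dir` at a local copy of your bucket's blocks instead.

```python
# Compute each block's time span from its meta.json (minTime/maxTime are
# Unix milliseconds in the TSDB block format). The demo block below is
# synthetic: a 56h block like the untouched one in the report.
import json
import os
import tempfile

def block_duration_hours(meta_path):
    """Return a block's span in hours, read from its meta.json."""
    with open(meta_path) as f:
        meta = json.load(f)
    return (meta["maxTime"] - meta["minTime"]) / 3_600_000

# Synthetic demo data; replace base_dir with your real block directory.
base_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(base_dir, "01ABCDEF"), exist_ok=True)
with open(os.path.join(base_dir, "01ABCDEF", "meta.json"), "w") as f:
    json.dump({"version": 1, "minTime": 0, "maxTime": 56 * 3_600_000}, f)

for block in sorted(os.listdir(base_dir)):
    hours = block_duration_hours(os.path.join(base_dir, block, "meta.json"))
    print(block, f"{hours:.0f}h")  # -> 01ABCDEF 56h
```

Listing the spans this way makes misaligned or oddly-sized blocks (like the 56h one) easy to spot without waiting for a compaction pass.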

@mveroone

Hey @kakkoyun, I believe it happens with the default configuration.
Here's mine:

compact --debug.name compact --log.level debug --http-address=0.0.0.0:10999 --block-sync-concurrency=5 --compact.concurrency=1 --http-grace-period 1s --wait --data-dir /thanos/compact --objstore.config-file /config/bucket.yml

This produced the same pattern as @qube222pl reported.

stale bot commented Apr 18, 2021

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks! 🤗
If there is no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use the remind command if you wish to be reminded at some point in the future.

stale bot added the stale label Apr 18, 2021
stale bot commented Jun 3, 2021

Closing for now as promised, let us know if you need this to be reopened! 🤗
