Downsampling is not working but compaction seems to be fine #780
Comments
Hey, thanks for the report! 👋 We are aware that this part related to downsampling might be confusing. I created a ticket with some answers you might want, and to track this further: #813. Let's move the discussion there.
#813 is about deleting data, but this issue is that downsampling does not work.
Hi, does the issue still persist? If it still does not work, please try running `thanos bucket inspect` and paste the output here so we can see all the blocks in the storage.
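A minimal sketch of that command, assuming the object storage config lives at a placeholder path (adjust to your deployment):

```
# Lists every block in the bucket; downsampled blocks show a non-zero
# RESOLUTION next to their compaction LEVEL in the output table.
thanos bucket inspect \
  --objstore.config-file=/etc/thanos/objstore.yml
```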
I would even recommend the freshly merged web page: #1248
Hi,
We are running Thanos with a Google Cloud Storage bucket as remote storage. Below are the configuration settings of the current setup (min and max block duration for compaction: 2h).
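The exact configuration is not reproduced here; as a hypothetical illustration only (file paths and retention periods are assumptions, not the actual settings), a 2h min/max block duration setup with a compactor typically looks like:

```
# Prometheus side: Thanos expects uncompacted 2h blocks from the sidecar
prometheus \
  --storage.tsdb.min-block-duration=2h \
  --storage.tsdb.max-block-duration=2h

# Compactor side: downsampling runs by default; retention flags are optional
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=/etc/thanos/objstore.yml \
  --retention.resolution-raw=90d \
  --retention.resolution-5m=180d \
  --retention.resolution-1h=1y \
  --wait
```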
In the compactor logs we have seen that compaction as well as downsampling is happening. But when we look at the store and at the blocks' meta.json files (by ULID) in remote storage, we don't see that anything got downsampled. The logs are attached below.
```
level=info ts=2019-01-24T18:41:35.293282997Z caller=compact.go:816 msg="start sync of metas"
level=info ts=2019-01-24T18:41:35.832007529Z caller=compact.go:822 msg="start of GC"
level=info ts=2019-01-24T18:41:35.84597813Z caller=compact.go:207 msg="compaction iterations done"
level=info ts=2019-01-24T18:41:35.846158613Z caller=compact.go:214 msg="start first pass of downsampling"
level=info ts=2019-01-24T18:41:36.453676668Z caller=compact.go:220 msg="start second pass of downsampling"
level=info ts=2019-01-24T18:41:36.788466012Z caller=compact.go:225 msg="downsampling iterations done"
level=info ts=2019-01-24T18:41:36.788610995Z caller=retention.go:17 msg="start optional retention"
level=info ts=2019-01-24T18:41:37.175464351Z caller=retention.go:46 msg="optional retention apply done"
level=info ts=2019-01-24T18:46:35.290352093Z caller=compact.go:816 msg="start sync of metas"
level=debug ts=2019-01-24T18:46:35.649414618Z caller=compact.go:174 msg="download meta" block=01D20JNFWMN5A98DCK3C48HH1E
level=debug ts=2019-01-24T18:46:35.916952831Z caller=compact.go:192 msg="block is too fresh for now" block=01D20JNFWMN5A98DCK3C48HH1E
level=info ts=2019-01-24T18:46:35.91726417Z caller=compact.go:822 msg="start of GC"
level=info ts=2019-01-24T18:46:35.925582759Z caller=compact.go:207 msg="compaction iterations done"
level=info ts=2019-01-24T18:46:35.925696324Z caller=compact.go:214 msg="start first pass of downsampling"
level=info ts=2019-01-24T18:46:36.450720059Z caller=compact.go:220 msg="start second pass of downsampling"
level=info ts=2019-01-24T18:46:36.895809384Z caller=compact.go:225 msg="downsampling iterations done"
level=info ts=2019-01-24T18:46:36.895909826Z caller=retention.go:17 msg="start optional retention"
level=info ts=2019-01-24T18:46:37.354046696Z caller=retention.go:46 msg="optional retention apply done"
level=info ts=2019-01-24T18:51:35.299952556Z caller=compact.go:816 msg="start sync of metas"
level=debug ts=2019-01-24T18:51:36.17085564Z caller=compact.go:174 msg="download meta" block=01D20JNFWMN5A98DCK3C48HH1E
level=debug ts=2019-01-24T18:51:36.437762023Z caller=compact.go:192 msg="block is too fresh for now" block=01D20JNFWMN5A98DCK3C48HH1E
level=info ts=2019-01-24T18:51:36.437867922Z caller=compact.go:822 msg="start of GC"
```
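For reference, a minimal sketch of one way we double-check a block's meta.json directly in the bucket, assuming gsutil access (bucket name and block ULID are placeholders):

```
# Fetch one block's metadata from the GCS bucket
gsutil cat gs://<thanos-bucket>/<BLOCK_ULID>/meta.json
# A downsampled block carries a non-zero resolution in the "thanos" section, e.g.:
#   "thanos": { "downsample": { "resolution": 300000 } }
# Blocks that have not been downsampled show "resolution": 0.
```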
Can you help us find the root cause? Let me know if you need any other config options or logs to confirm the issue.