compact: Delete metrics in Object Storage #3529
Thanos Compact can apply retention rules based on the resolution. Please take a look at https://thanos.io/tip/components/compact.md/#enforcing-retention-of-data. Or, you can simply delete blocks whose max time is older than X. A third option is to use #3421, which lets you delete only certain metrics from a block. I hope that helps. Please close this if you have no more problems, or comment if you have a more sophisticated use case.
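As a concrete illustration of the first option, here is a minimal sketch of a Compact invocation with per-resolution retention; the data directory, object storage config path, and retention values are placeholders rather than recommendations:

```
# Sketch only: paths and retention values are illustrative.
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=/etc/thanos/objstore.yml \
  --retention.resolution-raw=30d \
  --retention.resolution-5m=90d \
  --retention.resolution-1h=1y \
  --wait
```

With these flags the compactor deletes raw, 5m-downsampled, and 1h-downsampled blocks once their data is older than the corresponding duration; a value of 0d disables retention for that resolution.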
@GiedriusS by delete blocks older than X, do you mean storage.tsdb.max-block-duration?
@GiedriusS I have set the following arguments on the compactor, but I can still see data as old as 20 days in S3. Can you let me know if my configuration is wrong?
Also, could you please clarify what "blocks where the max time is older than X" means? Is it safe to delete blocks older than the max time set on the store component?
Yes, that option controls the retention period on Prometheus. On Thanos Compact you have the retention flags mentioned above. Let me clarify: blocks encompass a time period; the period has a start and an end or, in other words, min/max times. If the max time of a block is older than the configured retention, the whole block is deleted. What do you mean by "max time set on the store component"?
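To make the min/max times concrete: each block directory in the bucket contains a meta.json recording the time range the block covers, and retention is compared against its maxTime. A quick way to inspect this (the bucket name and block ULID below are placeholders):

```
# Placeholder bucket and block ULID; minTime/maxTime are Unix timestamps in ms.
aws s3 cp s3://my-thanos-bucket/01ABCDEFGHJKMNPQRSTVWXYZ99/meta.json - \
  | jq '{minTime, maxTime}'
```

thanos tools bucket inspect prints the same per-block time ranges for the whole bucket.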
Thanks for the details, @GiedriusS. Also, I have set --delete-delay=0; I believe this should delete the objects from S3 immediately, but I can still find older objects in S3. Could you please clarify?
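For anyone debugging a similar situation, a quick diagnostic sketch (assuming the AWS CLI and a placeholder bucket name): blocks being retired are normally first marked with a deletion-mark.json object and physically removed by the compactor once --delete-delay has elapsed, so listing the markers shows whether blocks are pending deletion rather than already gone.

```
# Placeholder bucket; blocks awaiting deletion carry a deletion-mark.json object.
aws s3 ls --recursive s3://my-thanos-bucket/ | grep deletion-mark.json
```

Note that --delete-delay only controls how long marked blocks linger before physical deletion; on its own it does not cause old blocks to be deleted, the retention flags do.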
Hi all,
Any news on this? I have to manually delete objects in S3 to save space.
Same here. I'm trying to figure out the recommended approach for deleting blocks on S3 that have exceeded the configured retention duration. Here are some findings from my research at the moment:
Some stuff that needs further investigation:
@tohjustin I can try to answer some of the questions you mentioned.
As for deletion, I think the end state is the same. Once the configured retention is reached, the objects are deleted.
Yes, it is different. Thanos block retention is calculated based on the max sample timestamp in the block.
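For comparison, an S3 lifecycle rule expires objects based on how long ago they were uploaded, not on the sample timestamps inside the block. A minimal sketch of such a rule via the AWS CLI, with a placeholder bucket name and an illustrative 30-day expiration:

```
# Placeholder bucket and expiration. This removes objects 30 days after upload,
# which is not the same as Thanos' sample-timestamp-based retention.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-thanos-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-old-thanos-objects",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Expiration": { "Days": 30 }
      }
    ]
  }'
```

A lifecycle rule also removes objects without the compactor's knowledge, which is part of why compactor-based retention tends to be the safer default.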
It creates a new downsampled block. For correctness, it depends on how you define it. But yes, the time at which the objects are deleted will change because we recreate the block. Another thing I want to add is that the Thanos compactor doesn't only do downsampling and data retention; it also performs compaction, which merges several small blocks into a larger block. This makes long time-range queries more efficient. So I recommend you use the compactor if you are using S3.
@yeya24 Really appreciate the quick response! I totally forgot all about the compaction feature itself 🤦 I was leaning towards the S3 lifecycle policy approach (less overhead) until reading your answers; I guess using the Thanos compactor is the way to go 👍
Hello 👋 Looks like there has been no activity on this issue for the last two months.
Closing for now as promised; let us know if you need this to be reopened! 🤗
I have deployed Thanos (app version 0.16.0) with Prometheus 2.20.1 in EKS. I am using S3 for object storage.
I would like to know whether any of the Thanos components can delete objects older than a particular date from S3. Otherwise, is it fine to delete objects by setting a lifecycle rule on the S3 bucket? What is the recommended approach to deleting Thanos data?