compactor: Consider upper bound of block size #2340
Comments
I'm probably missing some context. Is your plan to just split the result of compaction into multiple blocks, or to shard blocks before running the compaction? I guess the first one, but I would prefer to avoid any misunderstanding on my side.
I think it's actually something in between: do it during compaction (:
I am already exploring an iterator interface on the Prometheus side which could help with this.
Ignore this comment, it was a misunderstanding on my side. This is agnostic to block sharding.
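The "split during compaction" idea above, combined with a streaming iterator over the merged series, could be sketched roughly as follows. This is an illustrative sketch only, not Thanos or Prometheus code; the series iterator, the per-series size estimate, and the budget parameter are all assumptions.

```python
# Hypothetical sketch (not Thanos code): split the output of a compaction
# into multiple blocks once a per-block size budget is exhausted, while
# iterating over the merged series stream exactly once.

def split_into_blocks(series_iter, max_block_bytes):
    """Group (series, size_bytes) pairs into size-bounded blocks.

    `series_iter` stands in for the merged, sorted series iterator a
    compactor would produce; each element is (series, approximate size).
    """
    block, block_bytes = [], 0
    for series, size in series_iter:
        # Start a new block when adding this series would exceed the
        # budget (a block always holds at least one series, however big).
        if block and block_bytes + size > max_block_bytes:
            yield block
            block, block_bytes = [], 0
        block.append(series)
        block_bytes += size
    if block:
        yield block
```

For example, `list(split_into_blocks([("a", 40), ("b", 40), ("c", 40)], 100))` yields `[["a", "b"], ["c"]]`: the third series would push the first block past the 100-byte budget, so a new block is started.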
Hello 👋 Looks like there was no activity on this issue for the last 30 days.
Still relevant.
Closing for now as promised, let us know if you need this to be reopened! 🤗
I think this is still relevant.
Yes, I just had a 480 GB compaction, which means I need 1 TB of local disk to handle these events. It would be nice to be able to set an upper bound on TSDB block sizes.
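The arithmetic behind that comment can be made explicit: the compactor needs local scratch space for the downloaded source blocks plus the freshly written output block before the sources are deleted, so it must plan for roughly twice the input size. A back-of-the-envelope sketch (illustrative only, not a Thanos API):

```python
# Back-of-the-envelope estimate: a compaction needs scratch space for
# the downloaded source blocks plus the freshly written output block
# before the sources can be deleted.

def scratch_space_gb(source_blocks_gb):
    inputs = sum(source_blocks_gb)
    # Worst case the output is as large as the inputs combined
    # (deduplication may shrink it, but disk must be planned for the max).
    output = inputs
    return inputs + output

# A 480 GB compaction therefore needs on the order of 960 GB,
# i.e. roughly the 1 TB of local disk mentioned above.
```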
Still relevant.
Hello 👋 Looks like there was no activity on this issue for the last two months.
@Biswajitghosh98 is on it (:
Hi 👋
With vertical/offline deduplication we run the risk of producing oversized blocks. In the current design this comes with the tradeoff of harder debugging and, at some point, slower lookups.
In my experience the chunk file size does not matter much, but the index size does, so we might want to split blocks once they reach a certain size. The question is: what size? (: (Most likely configurable.) Thoughts? @brancz @SuperQ @pracucci @pstibrany
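To get a feel for what a configurable index-size threshold might mean in practice, one can roughly estimate index growth from series and label counts. The constants below are invented for illustration and do not come from the TSDB index format spec; a real compactor would measure the actual index writer state instead of estimating.

```python
# Rough, illustrative estimator (assumed constants, NOT from the TSDB
# format spec) for how big a block's index might grow, to inform what
# "certain size" a split threshold could target.

def estimated_index_bytes(num_series, avg_labels_per_series,
                          bytes_per_posting=4, bytes_per_label=32):
    # Postings lists: roughly one entry per (label pair, series).
    postings = num_series * avg_labels_per_series * bytes_per_posting
    # Series section: label references plus metadata per series.
    series_section = num_series * avg_labels_per_series * bytes_per_label
    return postings + series_section

# e.g. 10M series with ~10 labels each:
# estimated_index_bytes(10_000_000, 10) -> 3_600_000_000 (~3.6 GB)
```

Under these (assumed) constants, an index in the tens of gigabytes corresponds to hundreds of millions of series, which suggests the threshold would indeed need to be configurable per deployment rather than fixed.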