
Vertical Block Sharding Proposal #4191

Open
wants to merge 8 commits into base: cap-size-com
Conversation

Biswajitghosh98
Contributor

This is the first draft of the vertical block sharding proposal. Open for reviews :)

Member

@bwplotka bwplotka left a comment


Actually looks quite good, especially when you elaborated it to me! (:

Some ideas from our 1:1:

  • Add a diagram to make it easier to understand (similar to what you drew for me)
  • Elaborate on the test cases we discussed offline
    • How many resources and how much time would be needed to perform such a multi-shard compaction? Is there anything blocking us from making it streaming?
  • Plus minor suggestions below.

docs/proposals/202011_vertical_block_sharding.md (outdated)
The current method ignores a block for further compaction once its index reaches a limit. We'd let that stay as it is, with a caveat: whenever a block's index size hits the limit, we'd find the first 0-valued hash_number (say i), further sub-divide (shard) the existing block with the corresponding `hash_function_i`, and then let the compaction process run as usual.
Here too, we'd allow the grouper and planner to compact (horizontally and vertically) only those blocks whose `hash_number` external labels match completely.
The advantage of this method is that it allows the block's time range (`maxt - mint`) to grow exponentially, so we can make an educated guess that n would not exceed 3 or 4.
The downside of this approach is that, as blocks grow larger, it becomes harder to find a subsequent block to compact them with.
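The per-level sharding step described above can be sketched roughly as follows (a hypothetical Python illustration; the proposal does not specify the hash family, so `hash_function_i` is modeled here as seeded SHA-256 over the series labels):

```python
import hashlib

def shard_series(series_labels, level, num_shards=2):
    """Assign a series to a sub-shard using a level-specific hash function.

    `hash_function_i` is modeled as SHA-256 seeded with the level; the real
    hash family is an implementation detail left open by the proposal.
    """
    key = "{}:{}".format(level, sorted(series_labels.items())).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Splitting an oversized block: every series lands in exactly one sub-shard,
# and the assignment is deterministic across compactor restarts.
series = [{"__name__": "up", "job": "a"}, {"__name__": "up", "job": "b"}]
shards = [shard_series(s, level=1) for s in series]
assert all(0 <= s < 2 for s in shards)
```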
Member

I would also add the hash function to the metadata.

Contributor Author

Yes, makes sense to me.


For this approach, we'd need to add two extra external labels: `hash_number`, initialized to 0, and `shard_level`, initialized to 0, which denotes the number of times the block has been sharded during the compaction process, along with a set of 16 `hash_function`s. The reason for choosing 16 is that 2 raised to the power 16 is 65536, which is a logical upper bound for the number of shards a user might want to have.
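As a rough illustration of the bookkeeping this implies (the field names below are assumptions, not a final schema):

```python
# Hypothetical external labels on a block that has been sharded twice.
block_labels = {
    "hash_number": "1",   # which sub-shard this block belongs to
    "shard_level": "2",   # how many times the block has been sharded
}

# 16 hash functions allow up to 2**16 = 65536 shards with binary splits,
# matching the upper bound argued for in the proposal.
MAX_SHARD_LEVEL = 16
assert 2 ** MAX_SHARD_LEVEL == 65536
```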
Contributor

Are hash_level and shard_level the same thing?

Contributor Author

@hitanshu-mehta Yes. Apologies if it was unclear from my end :)

Contributor

No worries :) I think we should use the same term everywhere in the proposal, because inconsistent naming can lead to confusion. WDYT?

Contributor Author

@hitanshu-mehta Yes, definitely. I'll change it and add more pictures ASAP.

* If the compacted block is valid, continue the usual compaction process until it results in an invalid block.
* If the compacted block is invalid, add the no-compact filter to all the participating blocks.

We can keep growing the binary tree by recursively applying the same logic. The reason for calling this adaptive is that, at any point in time, the total number of shards is equal to the number of nodes in the tree (say x). If any of the leaves overloads (a block in that particular shard hits the limit), that leaf splits into 2, increasing the number of shards to x+2, hence providing just the right number of shards to get the job done.
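Under the counting used in the paragraph above (shards = all nodes of the tree, a point debated later in this thread), a single split adds two nodes. A minimal, hypothetical sketch:

```python
class ShardNode:
    """One shard in the binary sharding tree (hypothetical model)."""

    def __init__(self, level=0, number=0):
        self.level, self.number = level, number
        self.children = []

    def split(self):
        # An overloaded shard splits into two sub-shards one level deeper.
        self.children = [
            ShardNode(self.level + 1, self.number * 2),
            ShardNode(self.level + 1, self.number * 2 + 1),
        ]

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children)

root = ShardNode()
assert count_nodes(root) == 1
root.split()                    # shard (0,0) overloads and splits
assert count_nodes(root) == 3   # x + 2 under the "all nodes" counting
```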
Contributor

I think the total number of shards should be equal to the number of leaf nodes and every vertical sharding will increase the number of shards to x + 1.

Contributor Author

@hitanshu-mehta Say we have 10 blocks in shard (0,0), and we're compacting 3 of those blocks together. Suppose the resultant block hits the limit; then we'll shard the 3 participating blocks, leaving 7 blocks at shard (0,0), 3 at (1,0) and 3 at (1,1), increasing the number of shards from 1 to 3.

Contributor

@hitanshu-mehta hitanshu-mehta May 17, 2021

Thank you for clearing things up! I was calculating shards a different (maybe wrong) way. To give a rough idea of how I calculated:

block_number(hash_number, hash_level)

       1(0,0)             2(0,0)            3(0,0)         *4(0,0)    Compaction of blocks 1(0,0), 2(0,0) and 3(0,0) hits the limit
     /        \         /        \         /       \   
  1(0,1)   *1(1,1)   2(0,1)   *2(1,1)     3(0,1)  *3(1,1)             Compaction of blocks 1(0,1), 2(0,1) and 3(0,1) hits the limit
 /      \           /    \              /    \
*1(0,2) *1(1,2)   *2(0,2) *2(1,2)     *3(0,2) *3(1,2)

The total number of starred (*) blocks is the number of shards.

Contributor Author

@hitanshu-mehta Oh, I see. But this is different from what I had in mind. The proposal needs more pictures and re-wording to make it clearer for everyone, I suppose :)


stale bot commented Jul 16, 2021

Is this still relevant? If so, what is blocking it? Is there anything you can do to help move it forward?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale label Jul 16, 2021
@stale stale bot closed this Jul 24, 2021
@yeya24 yeya24 reopened this Jul 24, 2021
@stale stale bot removed the stale label Jul 24, 2021

stale bot commented Sep 22, 2021

Is this still relevant? If so, what is blocking it? Is there anything you can do to help move it forward?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale label Sep 22, 2021
@stale stale bot closed this Oct 11, 2021
Contributor

yeya24 commented May 28, 2022

Can we revisit this proposal?

@bwplotka bwplotka reopened this Jun 23, 2022
@stale stale bot removed the stale label Jun 23, 2022
@bwplotka
Member

Yes, it still needs some work.

@bwplotka bwplotka added the dont-go-stale label (for important issues; tells the stale bot not to close them) Jun 23, 2022

* Special metadata in meta.json indicating a shard.
* We'll be using a special metadata in meta.json indicating a shard.
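A hypothetical sketch of what such metadata might look like in the `thanos` section of `meta.json` (the field names, and the `hash_function` entry suggested earlier in this review, are assumptions, not a final schema):

```python
import json

# Hypothetical `thanos` section of meta.json for a sharded block.
meta = {
    "thanos": {
        "labels": {
            "hash_level": "1",      # assumed label name
            "hash_number": "0",     # assumed label name
        },
        "hash_function": "sha256",  # assumed: record which hash was used
    }
}
print(json.dumps(meta, indent=2))
```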
Contributor

Could this be another external label?

Contributor

Yes, I think hash_number and shard_level are external labels, as mentioned below.


## Alternatives
We'll use `hash_level` and `hash_number` to group and plan which blocks to compact together. The way it works: we'll allow the grouper to group only those blocks whose `hash_level`s are the same, and we'd allow the planner to compact only those blocks that share the same `hash_number`. With this, the compactor runs as it does today unless a compacted block hits the limit for the first time.
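The grouping rule described above could be sketched like this (hypothetical Python; the block and label shapes are assumptions). Blocks group only when their full external label sets, including `hash_level`, match exactly:

```python
from collections import defaultdict

def group_key(block):
    # Group only blocks whose external labels (incl. hash_level and
    # hash_number) match exactly.
    return tuple(sorted(block["labels"].items()))

blocks = [
    {"id": "b1", "labels": {"replica": "a", "hash_level": "0", "hash_number": "0"}},
    {"id": "b2", "labels": {"replica": "a", "hash_level": "0", "hash_number": "0"}},
    {"id": "b3", "labels": {"replica": "a", "hash_level": "1", "hash_number": "0"}},
]
groups = defaultdict(list)
for b in blocks:
    groups[group_key(b)].append(b["id"])

# b1 and b2 compact together; b3 sits at a different hash_level.
assert sorted(groups.values()) == [["b1", "b2"], ["b3"]]
```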
Contributor

Does this mean blocks can get merged even if they're not part of the same stream (have the same external labels)?

Contributor

@yeya24 yeya24 Jun 24, 2022

No, I think it is not mentioned explicitly, but all external labels should be the same during grouping.

* Shard number stored in external label.
![vbs-image1](../img/vertical_block_sharding_image1.png)

We're allowing a compacted block to hit the limit a maximum of 16 times (if not specified otherwise by the user); at the 16th level, if/when it further compacts and hits the limit, we mark it for no further compactions. The sharding algorithm comes into play once a compacted block is declared invalid, or its size hits the limit. If the compacted block is still valid, but its index size is too large (or the number of series it holds exceeds the user-defined limit), we look at the current `hash_level` (say `hl`) of the compacted block (if valid), or of the blocks participating in the compaction (if the compacted block is invalid). If `hl` is not equal to `max_shard_level_limit` (16 by default), we shard the compacted block or the participating blocks (depending on the situation) using `hash_function[hl+1]`, set the `hash_level` of the newly created blocks to `hl+1`, and upload them for further compaction. If `hl` is equal to `max_shard_level_limit`, we take one of the 2 decisions above depending on whether the compacted block is valid or invalid.
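The decision flow in the paragraph above can be sketched as follows (hypothetical Python; `shard` and `mark_no_compact` are stand-ins for the real compactor hooks, which the proposal does not name):

```python
MAX_SHARD_LEVEL_LIMIT = 16  # proposal default

def on_compaction_result(compacted_valid, hit_limit, hl, shard, mark_no_compact):
    """Sketch of the decision flow after a compaction run.

    `shard` stands in for "split with hash_function[hl+1]" and
    `mark_no_compact` for "add the no-compact filter".
    """
    if not hit_limit:
        return "continue"                      # normal compaction path
    if hl < MAX_SHARD_LEVEL_LIMIT:
        shard(level=hl + 1)                    # new blocks get hash_level hl+1
        return "sharded"
    if compacted_valid:
        return "continue"                      # keep compacting until invalid
    mark_no_compact()                          # exclude participating blocks
    return "no-compact"

calls = []
assert on_compaction_result(True, True, 2,
                            lambda level: calls.append(level),
                            lambda: calls.append("stop")) == "sharded"
assert calls == [3]
```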
Contributor

What does it mean for an index size to be too large? Will this be configurable?

Contributor

Yeah, I think it should be configurable. We have supported planning based on index size since #3410, and the index limit is configurable.

Contributor

yeya24 commented Jun 24, 2022

Can you describe the cuckoo filter part in more detail? How can we use it to improve query performance at the store gateway? Is it related to this proposal?

@@ -87,17 +87,49 @@ index size).

## Proposal

**We propose to vertically shard all blocks which goes beyond the specified index size limit during the compaction process.**
**We propose to vertically shard blocks when they reach the cap limit in terms of number of series or index file size, and then adaptively decide the number of shards depending on the user's requirements.**
Contributor

Do we still want to wait for the block to reach the cap limit before sharding? With vertical sharding, I think we want to shard the blocks eagerly to get better query performance.


#### Decision 1: Using size of index file vs number of metrics in a block as an indicator for sharding

The pros and cons of selecting one design choice over the other are yet to be discovered, and will become clearer after implementation and testing.
Contributor

@fpetkovski fpetkovski Jun 27, 2022

I wonder how the proposed approach compares to the current workaround of setting a maximum index size per block.

Labels: dont-go-stale, size/M

5 participants