SuggestCompactRange() should execute compaction even if the data exists only on the bottommost level #1974
Comments
"we mark all the keys as dead through a compaction filter" what does it mean? |
We can issue last-level => last-level compaction if the files are marked as needing compaction. Anyone is welcome to contribute.
"we mark all the keys as dead through a compaction filter" what does it mean? I think this means drop index is fast because index entries are not deleted, and then the compaction filter will (or can, or should) drop those entries later and they can be found because the leading bytes in their key is the index-id. |
What Mark said. @siying, thanks. I opened this ticket so that we can track this work if we find it necessary.
@igorcanadi I'm a little surprised to see this issue. I checked the code at rocks_engine.cpp#L500 that you posted, but found unrelated code there, because you linked to the master branch and git master always changes.
The bottom-most level should be compacted if the option is configured as kIfHaveCompactionFilter. So, would you please repost the line in rocks_engine.cpp that triggers the problem?
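For context, kIfHaveCompactionFilter belongs to the manual CompactRange() API rather than to SuggestCompactRange(). A minimal sketch of how it is set, assuming an already-open rocksdb::DB* and a hypothetical helper name:

```cpp
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/slice.h"

// Sketch only: manually compact [begin, end], recompacting the bottommost
// level when a compaction filter is configured for the column family.
rocksdb::Status CompactDroppedRange(rocksdb::DB* db,
                                    const rocksdb::Slice& begin,
                                    const rocksdb::Slice& end) {
  rocksdb::CompactRangeOptions options;
  options.bottommost_level_compaction =
      rocksdb::BottommostLevelCompaction::kIfHaveCompactionFilter;
  return db->CompactRange(options, &begin, &end);
}
```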
@wolfkdy the code you quote is from
@igorcanadi I've found the related code. It seems SuggestCompactRange will only compact data that exists above the bottom-most level.
Exactly. That's what this issue is about.
This is related to https://jira.percona.com/browse/PSMDB-127. When we drop an index or a collection in MongoRocks, we mark all the keys as dead through a compaction filter. We don't issue any deletes. Compaction filters will slowly get rid of those dead indexes as compactions on the dead data get executed.
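A rough sketch of the kind of compaction filter involved; the class name and prefix handling below are illustrative, not the actual MongoRocks code:

```cpp
#include <string>
#include "rocksdb/compaction_filter.h"
#include "rocksdb/slice.h"

// Illustrative filter: keys belonging to a dropped index start with that
// index's id prefix, so the filter can drop them whenever a compaction
// happens to touch them. No deletes are ever issued for these keys.
class DroppedPrefixCompactionFilter : public rocksdb::CompactionFilter {
 public:
  explicit DroppedPrefixCompactionFilter(std::string dropped_prefix)
      : dropped_prefix_(std::move(dropped_prefix)) {}

  bool Filter(int /*level*/, const rocksdb::Slice& key,
              const rocksdb::Slice& /*existing_value*/,
              std::string* /*new_value*/,
              bool* /*value_changed*/) const override {
    // Returning true tells the compaction to drop this key-value pair.
    return key.starts_with(dropped_prefix_);
  }

  const char* Name() const override {
    return "DroppedPrefixCompactionFilter";
  }

 private:
  std::string dropped_prefix_;
};
```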
However, we have a pathological case if the data for a dropped index exists only on the bottom-most level. In that case, compaction of that data will never be executed and the data won't be cleaned up.
We do call SuggestCompactRange() on the dropped data, but this also doesn't do anything if the data is on the bottommost level: https://github.com/mongodb-partners/mongo-rocks/blob/master/src/rocks_engine.cpp#L500
We should add an option to SuggestCompactRange() that would let us compact the data even if it exists only on the bottommost level.
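For reference, a sketch of how the experimental API is invoked today (the function name of the wrapper is hypothetical; the proposed bottommost-level option does not exist yet):

```cpp
#include "rocksdb/db.h"
#include "rocksdb/experimental.h"
#include "rocksdb/slice.h"

// Sketch: hint that [begin, end] should be compacted. With the current API
// this has no effect on data that already sits entirely on the bottommost
// level, which is exactly the gap this issue asks to close.
rocksdb::Status HintCompactionForDroppedData(rocksdb::DB* db,
                                             const rocksdb::Slice& begin,
                                             const rocksdb::Slice& end) {
  return rocksdb::experimental::SuggestCompactRange(db, &begin, &end);
}
```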