
Support pro-actively erasing obsolete block cache entries #12694

Closed
wants to merge 8 commits

Conversation

@pdillinger (Contributor) commented May 23, 2024

Summary: Currently, when files become obsolete, the block cache entries associated with them just age out naturally. With pure LRU, this is not too bad, as once you "use" enough cache entries to (re-)fill the cache, you are guaranteed to have purged the obsolete entries. However, HyperClockCache is a counting clock cache with a somewhat longer memory, so it could be more negatively impacted by previously-hot cache entries becoming obsolete and taking longer to age out than newer single-hit entries.

Part of the reason we still have this natural aging-out is that there's almost no connection between block cache entries and the file they are associated with. Everything is hashed into the same pool(s) of entries with nothing like a secondary index based on file. Keeping track of such an index could be expensive.

This change adds a new, mutable CF option `uncache_aggressiveness` for erasing obsolete block cache entries. The process can be speculative, lossy, or unproductive because not all potential block cache entries associated with files will be resident in memory, and attempting to remove them all could be wasted CPU time. Rather than a simple on/off switch, `uncache_aggressiveness` basically tells RocksDB how much CPU you're willing to burn trying to purge obsolete block cache entries. When such efforts are not sufficiently productive for a file, we stop and move on.

The option is in ColumnFamilyOptions so that it is dynamically changeable for already-open files, and customizable by CF.
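For illustration, here is a minimal sketch of setting the option both at open time and dynamically (the option name and its per-CF mutability come from this change; the exact field placement, headers, and function name below are assumptions):

```cpp
// Minimal sketch; field placement and headers are assumptions.
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

void ConfigureUncacheAggressiveness(rocksdb::DB* db,
                                    rocksdb::ColumnFamilyHandle* cf) {
  // At open time: larger values let RocksDB spend more CPU trying to
  // erase obsolete block cache entries (UA=0 is the baseline below).
  rocksdb::ColumnFamilyOptions cf_opts;
  cf_opts.uncache_aggressiveness = 100;

  // Or dynamically on a live column family, since the option is mutable.
  rocksdb::Status s = db->SetOptions(cf, {{"uncache_aggressiveness", "300"}});
  assert(s.ok());
}
```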

Note that this block cache removal happens as part of the process of purging obsolete files, which often runs in a background thread (depending on the background_purge_on_iterator_cleanup and avoid_unnecessary_blocking_io options) rather than on CPU-critical paths.
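As a rough sketch of the DB-level knob mentioned above (only avoid_unnecessary_blocking_io is shown in code; whether purging, and therefore this uncaching, actually lands on a background thread also depends on the workload and on the iterator-cleanup option):

```cpp
#include "rocksdb/options.h"

rocksdb::Options OptionsPreferringBackgroundPurge() {
  rocksdb::Options options;
  // Prefer doing obsolete-file purging (and, with this change, the
  // associated block cache erasure) on a background thread instead of
  // blocking foreground operations.
  options.avoid_unnecessary_blocking_io = true;
  // See also the background_purge_on_iterator_cleanup option noted above.
  return options;
}
```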

Notable auxiliary code details:

  • Possibly fixing some issues with trivial moves with only_delete_metadata: an unnecessary TableCache::Evict in that case, and the flag missing from the ObsoleteFileInfo move operator. (Not able to reproduce a current failure.)
  • Remove suspicious TableCache::Erase() from VersionSet::AddObsoleteBlobFile() (TODO follow-up item)

Marked EXPERIMENTAL until more thorough validation is complete.

Direct stats of this functionality are omitted because they could be misleading. Block cache hit rate is a better indicator of benefit, and CPU profiling a better indicator of cost.
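For anyone monitoring the benefit as suggested, a small sketch using RocksDB's existing statistics tickers (standard Statistics usage; nothing here is added by this PR, and the helper function name is invented):

```cpp
#include <cstdint>
#include <memory>

#include "rocksdb/statistics.h"

// Assumes statistics were enabled at open time, e.g.:
//   options.statistics = rocksdb::CreateDBStatistics();
double BlockCacheDataHitRate(
    const std::shared_ptr<rocksdb::Statistics>& stats) {
  const uint64_t hits = stats->getTickerCount(rocksdb::BLOCK_CACHE_DATA_HIT);
  const uint64_t misses = stats->getTickerCount(rocksdb::BLOCK_CACHE_DATA_MISS);
  const uint64_t total = hits + misses;
  return total == 0 ? 0.0 : static_cast<double>(hits) / total;
}
```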

Test Plan:

  • Unit tests added, including refactoring an existing test to make better use of parameterized tests.
  • Added to crash test.
  • Performance, sample command:
```
for I in `seq 1 10`; do for UA in 300; do for CT in lru_cache fixed_hyper_clock_cache auto_hyper_clock_cache; do rm -rf /dev/shm/test3; TEST_TMPDIR=/dev/shm/test3 /usr/bin/time ./db_bench -benchmarks=readwhilewriting -num=13000000 -read_random_exp_range=6 -write_buffer_size=10000000 -bloom_bits=10 -cache_type=$CT -cache_size=390000000 -cache_index_and_filter_blocks=1 -disable_wal=1 -duration=60 -statistics -uncache_aggressiveness=$UA 2>&1 | grep -E 'micros/op|rocksdb.block.cache.data.(hit|miss)|rocksdb.number.keys.(read|written)|maxresident' | awk '/rocksdb.block.cache.data.miss/ { miss = $4 } /rocksdb.block.cache.data.hit/ { hit = $4 } { print } END { print "hit rate = " ((hit * 1.0) / (miss + hit)) }' | tee -a results-$CT-$UA; done; done; done
```

Averaging 10 runs per case, block cache data block hit rates:

```
lru_cache
UA=0   -> hit rate = 0.327, ops/s = 87668, user CPU sec = 139.0
UA=300 -> hit rate = 0.336, ops/s = 87960, user CPU sec = 139.0

fixed_hyper_clock_cache
UA=0   -> hit rate = 0.336, ops/s = 100069, user CPU sec = 139.9
UA=300 -> hit rate = 0.343, ops/s = 100104, user CPU sec = 140.2

auto_hyper_clock_cache
UA=0   -> hit rate = 0.336, ops/s = 97580, user CPU sec = 140.5
UA=300 -> hit rate = 0.345, ops/s = 97972, user CPU sec = 139.8
```

Conclusion: up to roughly 1 percentage point of improved block cache hit rate, likely leading to overall improved efficiency (because the foreground CPU cost of cache misses likely outweighs the background CPU cost of erasure, let alone I/O savings).

@pdillinger pdillinger requested a review from ajkr May 29, 2024 21:01
@facebook-github-bot (Contributor)

@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@pdillinger has updated the pull request. You must reimport the pull request before landing.

@facebook-github-bot (Contributor)

@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@ajkr (Contributor) left a comment

LGTM!

> purging obsolete files, which happens in a background thread rather than along any CPU critical paths.

I think it can also happen in foreground, but still it seems fine. See background_purge_on_iterator_cleanup and avoid_unnecessary_blocking_io.

metadata = rhs.metadata;
rhs.metadata = nullptr;
path = std::move(rhs.path);
only_delete_metadata = rhs.only_delete_metadata;

Contributor:

Could this bug have caused us to delete a file that's part of the DB? There are some issues about that that we've never solved. For example, #10357.


Contributor (Author):

Maybe in an older revision. I did some code inspection and ran assertions through all unit tests and blackbox_crash_test to check for the possibility, and didn't turn up anything. Specifically, assert(rhs.only_delete_metadata == false); here in the move operator.

So I'm not planning to split it off in another PR nor release note this aspect.
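For context, a purely illustrative reconstruction combining the hunk quoted above with the diagnostic assertion described in this reply; the type and field names here are guesses, not the RocksDB code:

```cpp
// Illustrative toy only; not RocksDB's ObsoleteFileInfo.
#include <cassert>
#include <string>
#include <utility>

struct FileMetaData;  // stand-in for the real metadata type

struct ToyObsoleteFileInfo {
  FileMetaData* metadata = nullptr;
  std::string path;
  bool only_delete_metadata = false;

  ToyObsoleteFileInfo& operator=(ToyObsoleteFileInfo&& rhs) noexcept {
    // Validation-only check from the reply above: current code paths are
    // not expected to move an entry that has this flag set.
    assert(rhs.only_delete_metadata == false);
    metadata = rhs.metadata;
    rhs.metadata = nullptr;
    path = std::move(rhs.path);
    // The fix: carry the flag across the move (previously missing).
    only_delete_metadata = rhs.only_delete_metadata;
    return *this;
  }
};
```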

public:
UncacheAggressivenessAdvisor(uint32_t uncache_aggressiveness) {
assert(uncache_aggressiveness > 0);
allowance_ = std::min(uncache_aggressiveness, uint32_t{3});

Contributor:

Do you mean std::max? For example, it feels to me like increasing the setting (above 3) should tolerate more non-useful blocks at the start (when useful_ is still zero).


Contributor (Author):

I meant it this way. I'll add more comments and a unit test for UncacheAggressivenessAdvisor.
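To make that concrete, here is a purely hypothetical toy advisor (invented names and thresholds; not the class in this PR) showing one design where capping the initial allowance with std::min is deliberate: the small allowance only bootstraps sampling, and higher aggressiveness instead lowers the productivity ratio required to keep erasing.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical illustration only; not RocksDB's UncacheAggressivenessAdvisor.
class ToyUncacheAdvisor {
 public:
  explicit ToyUncacheAdvisor(uint32_t aggressiveness)
      : allowance_(std::min(aggressiveness, uint32_t{3})),
        // Higher aggressiveness => lower required productivity ratio,
        // i.e. more tolerance for unproductive erase attempts overall.
        required_ratio_(std::pow(0.99, static_cast<double>(aggressiveness))) {}

  // Report whether the last attempted erase actually found a cache entry.
  void Record(bool useful) {
    if (useful) {
      ++useful_;
    } else if (allowance_ > 0) {
      --allowance_;
    }
    ++attempts_;
  }

  // Keep going while the small bootstrap allowance lasts, or while the
  // observed fraction of productive attempts stays above the threshold.
  bool ShouldContinue() const {
    if (allowance_ > 0) {
      return true;
    }
    return attempts_ == 0 ||
           static_cast<double>(useful_) / static_cast<double>(attempts_) >=
               required_ratio_;
  }

 private:
  uint32_t allowance_;
  double required_ratio_;
  uint64_t useful_ = 0;
  uint64_t attempts_ = 0;
};
```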

@facebook-github-bot (Contributor)

@pdillinger has updated the pull request. You must reimport the pull request before landing.

@facebook-github-bot (Contributor)

@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@pdillinger merged this pull request in b34cef5.
