Perf: Relax locking contention for cache and cachekv (#353)
## Describe your changes and provide context
**Problem:**
Profiling shows heavy lock contention in the cachekv layer, because a single mutex currently guards every key read and write. As a transient cache, cachekv does not need such a strict locking mechanism, and the contention significantly hurts parallel transaction execution performance.

**Solution:**
- Replace BoundedCache with sync.Map to get per-key locking. We don't need to bound the cache size for the transient cachekv store, since the cache is destroyed after the block is finalized.
- Do not read through the cache. Previously, Get also wrote the fetched value back to the cache, which required holding a lock around the whole read+write-back operation. For a transient cache this write-back contributes little to the hit rate, so removing the read-through behavior greatly reduces contention.
- Relax and narrow the locking scope for CommitKVStoreCache, which is still used as the inter-block cache.
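The first two bullets can be sketched as follows. This is a minimal illustration, not the actual cachekv implementation: `transientCache` and its fields are hypothetical names, and a plain map stands in for the underlying store. The point is that `sync.Map` synchronizes per key, and `Get` does not populate the cache on a miss, so concurrent reads never serialize on one mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// transientCache is a hypothetical sketch of a per-block transient cache:
// writes land in a sync.Map (per-key synchronization, no global mutex),
// and the whole structure is discarded when the block is finalized,
// so no size bound is needed.
type transientCache struct {
	dirty  sync.Map          // keys written during this block
	parent map[string][]byte // stand-in for the underlying committed store
}

func (c *transientCache) Set(key string, value []byte) {
	c.dirty.Store(key, value)
}

func (c *transientCache) Get(key string) []byte {
	// Check values written in this block first; on a miss, read the
	// parent directly WITHOUT writing back (no read-through), so Get
	// needs no lock around a combined read+write operation.
	if v, ok := c.dirty.Load(key); ok {
		return v.([]byte)
	}
	return c.parent[key]
}

func main() {
	c := &transientCache{parent: map[string][]byte{"a": []byte("1")}}
	c.Set("b", []byte("2"))
	fmt.Println(string(c.Get("a")), string(c.Get("b")))
}
```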

## Testing performed to validate your change
Fully tested in the load-test environment.
yzang2019 authored Dec 5, 2023
1 parent 1c5a372 commit 628c7e4
Showing 3 changed files with 67 additions and 220 deletions.
33 changes: 20 additions & 13 deletions store/cache/cache.go
@@ -33,7 +33,7 @@ type (

 	// the same CommitKVStoreCache may be accessed concurrently by multiple
 	// goroutines due to transaction parallelization
-	mtx sync.Mutex
+	mtx sync.RWMutex
 }

 // CommitKVStoreCacheManager maintains a mapping from a StoreKey to a
@@ -102,27 +102,34 @@ func (ckv *CommitKVStoreCache) CacheWrap(storeKey types.StoreKey) types.CacheWra
 	return cachekv.NewStore(ckv, storeKey, ckv.cacheKVSize)
 }

+// getFromCache queries the write-through cache for a value by key.
+func (ckv *CommitKVStoreCache) getFromCache(key []byte) ([]byte, bool) {
+	ckv.mtx.RLock()
+	defer ckv.mtx.RUnlock()
+	return ckv.cache.Get(string(key))
+}
+
+// getAndWriteToCache queries the underlying CommitKVStore and writes the result
+// back to the cache.
+func (ckv *CommitKVStoreCache) getAndWriteToCache(key []byte) []byte {
+	ckv.mtx.RLock()
+	defer ckv.mtx.RUnlock()
+	value := ckv.CommitKVStore.Get(key)
+	ckv.cache.Add(string(key), value)
+	return value
+}
+
 // Get retrieves a value by key. It will first look in the write-through cache.
 // If the value doesn't exist in the write-through cache, the query is delegated
 // to the underlying CommitKVStore.
 func (ckv *CommitKVStoreCache) Get(key []byte) []byte {
-	ckv.mtx.Lock()
-	defer ckv.mtx.Unlock()
-
 	types.AssertValidKey(key)

-	keyStr := string(key)
-	value, ok := ckv.cache.Get(keyStr)
-	if ok {
-		// cache hit
+	// cache hit
+	if value, ok := ckv.getFromCache(key); ok {
 		return value
 	}

-	// cache miss; write to cache
-	value = ckv.CommitKVStore.Get(key)
-	ckv.cache.Add(keyStr, value)
-
-	return value
+	// if not found in the cache, query the underlying CommitKVStore and init cache value
+	return ckv.getAndWriteToCache(key)
 }

 // Set inserts a key/value pair into both the write-through cache and the
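The Mutex-to-RWMutex change in cache.go can be illustrated with a minimal standalone sketch. This is not CommitKVStoreCache itself (the type and its map are hypothetical stand-ins); it only shows the locking pattern the diff adopts: lookups take the shared read lock, so any number of concurrent Gets proceed in parallel, while writes still take the exclusive lock.

```go
package main

import (
	"fmt"
	"sync"
)

// rwCache is a hypothetical cache demonstrating the relaxed locking:
// readers share mtx via RLock, so Gets no longer serialize behind
// each other the way they did with a plain sync.Mutex.
type rwCache struct {
	mtx  sync.RWMutex
	data map[string][]byte
}

func newRWCache() *rwCache {
	return &rwCache{data: make(map[string][]byte)}
}

func (c *rwCache) Get(key string) ([]byte, bool) {
	c.mtx.RLock() // shared: many readers may hold this at once
	defer c.mtx.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *rwCache) Set(key string, value []byte) {
	c.mtx.Lock() // exclusive: writers still block everyone
	defer c.mtx.Unlock()
	c.data[key] = value
}

func main() {
	c := newRWCache()
	c.Set("k", []byte("v"))

	// Concurrent readers contend only on the read lock.
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Get("k")
		}()
	}
	wg.Wait()

	v, _ := c.Get("k")
	fmt.Println(string(v))
}
```

Note that in the actual commit, `getAndWriteToCache` can take only the read lock because the underlying cache container handles its own synchronization; this sketch uses a plain map, so its Set must take the write lock.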
45 changes: 0 additions & 45 deletions store/cachekv/search_benchmark_test.go

This file was deleted.

