
cache,db: de-dup concurrent attempts to read the same block #4157

Merged: 1 commit merged into cockroachdb:master on Dec 11, 2024

Conversation


@sumeerbhola sumeerbhola commented Nov 12, 2024

Concurrent reads of the same block have been observed to cause very high
memory usage and significant CPU usage for allocations/deallocations.
We now coordinate across multiple concurrent attempts to read the same
block via a readEntry, which makes the readers take turns until one
succeeds.

The readEntries are embedded in a map that is part of a readShard, where
there is a readShard for each cache.Shard. See the long comment in the
readShard declaration for motivation.

Callers interact with this new behavior via Cache.GetWithReadHandle,
which is only for callers that intend to do a read and then populate
the cache. If this method returns a ReadHandle, the caller has permission
to do a read. See the ReadHandle comment for details of the contract.

Fixes #4138
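The turn-taking described above can be sketched in Go. Everything below (readEntry, acquireTurn, finish, demo) is a hypothetical, simplified illustration of the idea, not Pebble's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// readEntry coordinates concurrent readers of one block: at most one reader
// holds the turn at a time, and waiters are woken when the turn-holder
// finishes, either to use the cached value or to take the next turn.
type readEntry struct {
	mu        sync.Mutex
	isReading bool
	done      bool
	val       []byte
	waiters   []chan struct{}
}

// acquireTurn returns (cachedValue, gotTurn). If gotTurn is true, the caller
// must perform the read and then call finish.
func (e *readEntry) acquireTurn() ([]byte, bool) {
	for {
		e.mu.Lock()
		if e.done {
			v := e.val
			e.mu.Unlock()
			return v, false
		}
		if !e.isReading {
			e.isReading = true
			e.mu.Unlock()
			return nil, true
		}
		ch := make(chan struct{})
		e.waiters = append(e.waiters, ch)
		e.mu.Unlock()
		<-ch // woken when the current turn-holder finishes; re-check
	}
}

// finish records the result of the turn-holder's read and wakes all waiters.
func (e *readEntry) finish(val []byte, err error) {
	e.mu.Lock()
	e.isReading = false
	if err == nil {
		e.done = true
		e.val = val
	}
	ws := e.waiters
	e.waiters = nil
	e.mu.Unlock()
	for _, ch := range ws {
		close(ch)
	}
}

// demo runs several concurrent readers of the same block and returns how many
// actually hit the (pretend) disk.
func demo() int {
	e := &readEntry{}
	var mu sync.Mutex
	reads := 0
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, gotTurn := e.acquireTurn()
			if gotTurn {
				mu.Lock()
				reads++
				mu.Unlock()
				v = []byte("block-data")
				e.finish(v, nil) // pretend the disk read succeeded
			}
			if string(v) != "block-data" {
				panic("reader saw wrong data")
			}
		}()
	}
	wg.Wait()
	return reads
}

func main() {
	fmt.Println("disk reads:", demo()) // exactly one read serves all 8 readers
}
```

Because a successful turn marks the entry done before releasing it, all later arrivals observe the cached value rather than taking another turn.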

@sumeerbhola sumeerbhola requested a review from a team as a code owner November 12, 2024 00:54
@cockroach-teamcity (Member): This change is Reviewable

@sumeerbhola (Collaborator, Author) left a comment

This lacks tests -- I would like an opinion on the approach and interfaces before adding those.

Reviewable status: 0 of 8 files reviewed, all discussions resolved (waiting on @RaduBerinde)

@RaduBerinde (Member) left a comment

Reviewable status: 0 of 8 files reviewed, 1 unresolved discussion (waiting on @sumeerbhola)


sstable/reader.go line 525 at r1 (raw file):

```go
var err error
var errorDuration time.Duration
ch, errorDuration, err = crh.WaitForReadPermissionOrHandle(ctx)
```

Have you considered a GetOrPopulate API where we pass a func that does the actual read (or we can define a simple interface that *Reader implements)? Then the new logic would be internal to GetOrPopulate. All the caller cares is that at the end we either get a BufferHandle or an error. We can keep the simple Get for the buffer pool case.
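For reference, a minimal sketch of the caller-facing shape being suggested. The names (cacheShard, getOrPopulate) are hypothetical, not Pebble's API, and this sketch serializes readers by holding the shard mutex across the read, which a real implementation would avoid:

```go
package main

import (
	"fmt"
	"sync"
)

type cacheShard struct {
	mu sync.Mutex
	m  map[uint64][]byte
}

// getOrPopulate returns the cached value for key, calling read at most once
// per miss to populate it. The caller only ever sees a value or an error; any
// coordination between concurrent misses stays inside the cache.
func (c *cacheShard) getOrPopulate(key uint64, read func() ([]byte, error)) ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[key]; ok {
		return v, nil
	}
	v, err := read()
	if err != nil {
		return nil, err
	}
	c.m[key] = v
	return v, nil
}

func main() {
	c := &cacheShard{m: map[uint64][]byte{}}
	reads := 0
	read := func() ([]byte, error) { reads++; return []byte("data"), nil }
	v1, _ := c.getOrPopulate(7, read) // miss: calls read
	v2, _ := c.getOrPopulate(7, read) // hit: served from the map
	fmt.Println(string(v1), string(v2), reads)
}
```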

@sumeerbhola sumeerbhola force-pushed the cache_parallel branch 2 times, most recently from 40fbe2c to 23f4b7f on November 13, 2024 18:44
@sumeerbhola (Collaborator, Author) left a comment

Reviewable status: 0 of 8 files reviewed, 1 unresolved discussion (waiting on @RaduBerinde)


sstable/reader.go line 525 at r1 (raw file):

Previously, RaduBerinde wrote…

Have you considered a GetOrPopulate API where we pass a func that does the actual read (or we can define a simple interface that *Reader implements)? Then the new logic would be internal to GetOrPopulate. All the caller cares is that at the end we either get a BufferHandle or an error. We can keep the simple Get for the buffer pool case.

I didn't seriously consider it since there is a lot of shared code in readBlockInternal but there is also enough branching for the cache.ReadHandle case. We would want to write one func that does the actual read in both cases. So it would need to have the branching too. It would avoid the need for the following blocks which precede some return statements:

```go
if crh.Valid() {
	crh.SetReadError(err)
}
```

I've moved those into a defer call here, so that by itself is not a big motivator for GetOrPopulate.

I've now done an implementation of readBlockInternal (readBlockInternal2), using (an unimplemented) GetOrPopulate in sumeerbhola@bac895b. PTAL. By itself, it isn't simpler IMO. But it does hide ReadHandle so it is perhaps compelling enough. Thoughts?
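The defer arrangement mentioned here can be sketched as follows; readHandle and readBlockInternal below are illustrative stand-ins, not the real signatures:

```go
package main

import (
	"errors"
	"fmt"
)

type readHandle struct {
	ok      bool
	lastErr error
}

func (h *readHandle) valid() bool          { return h.ok }
func (h *readHandle) setReadError(e error) { h.lastErr = e }

// readBlockInternal uses a single defer to report the final error to the
// handle, replacing a repeated "if crh.Valid() { crh.SetReadError(err) }"
// block before each return statement.
func readBlockInternal(crh *readHandle, fail bool) (b []byte, err error) {
	defer func() {
		if crh.valid() {
			crh.setReadError(err)
		}
	}()
	if fail {
		return nil, errors.New("read failed")
	}
	return []byte("block"), nil
}

func main() {
	h := &readHandle{ok: true}
	_, err := readBlockInternal(h, true)
	// Both the caller and the handle observed the same error.
	fmt.Println(err != nil, h.lastErr != nil)
}
```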

@RaduBerinde (Member) left a comment

Reviewable status: 0 of 8 files reviewed, 3 unresolved discussions (waiting on @jbowens and @sumeerbhola)


sstable/reader.go line 525 at r1 (raw file):

Previously, sumeerbhola wrote…

I didn't seriously consider it since there is a lot of shared code in readBlockInternal but there is also enough branching for the cache.ReadHandle case. We would want to write one func that does the actual read in both cases. So it would need to have the branching too. It would avoid the need for the following blocks which precede some return statements:

```go
if crh.Valid() {
	crh.SetReadError(err)
}
```

I've moved those into a defer call here, so that by itself is not a big motivator for GetOrPopulate.

I've now done an implementation of readBlockInternal (readBlockInternal2), using (an unimplemented) GetOrPopulate in sumeerbhola@bac895b. PTAL. By itself, it isn't simpler IMO. But it does hide ReadHandle so it is perhaps compelling enough. Thoughts?

Yeah, that looks worse. I was envisioning that the function it calls would be a separate method, but I haven't considered all the details.

Why do we need a separate WaitForReadPermissionOrHandle? Can't this happen inside GetWithReadHandle - it could return either the data from the cache (regardless if we had to wait for it), or the handle that we will use to populate the cache after we do the read. I think only the first to create the entry should do the read, and anyone who comes in after should wait.

CC @jbowens as well


sstable/reader.go line 505 at r2 (raw file):

```go
if env.BufferPool == nil {
	ch, crh = r.cacheOpts.Cache.GetWithReadHandle(
		r.cacheOpts.CacheID, r.cacheOpts.FileNum, bh.Offset, r.loadBlockSema)
```

Why do we need to move the semaphore acquire into the cache code? That should just happen right before we issue the read, with the expectation that outside of strange conditions it won't do anything (and if it does, it should just behave like a slower IO). We can treat a ctx cancelation error the same way we would treat an IO error.


sstable/reader.go line 552 at r2 (raw file):

```go
// INVARIANT: !ch.Valid().

compressed := block.Alloc(int(bh.Length+block.TrailerLen), env.BufferPool)
```

It would be cleaner to separate out the code starting here into a method that does the actual read and returns the decompressed block handle. We can SetReadError more cleanly after that call (this is error prone if someone adds an if err := ... case).
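A sketch of that refactor, under assumed names (doReadAndDecompress, readBlock, readHandle are illustrative, not the real methods): the read lives in one method, and the error is reported to the handle in exactly one place after the call:

```go
package main

import (
	"errors"
	"fmt"
)

type readHandle struct{ lastErr error }

func (h *readHandle) setReadError(err error) { h.lastErr = err }

// doReadAndDecompress stands in for "the code starting here": it performs the
// read and returns the decompressed block (or an error), with no cache
// bookkeeping mixed in.
func doReadAndDecompress(fail bool) ([]byte, error) {
	if fail {
		return nil, errors.New("read failed")
	}
	return []byte("decompressed-block"), nil
}

// readBlock is the single call site: the error is reported to the handle once
// after the call, so a future early "if err := ..." return inside the read
// path cannot skip it.
func readBlock(h *readHandle, fail bool) ([]byte, error) {
	b, err := doReadAndDecompress(fail)
	h.setReadError(err)
	return b, err
}

func main() {
	h := &readHandle{}
	b, _ := readBlock(h, false)
	fmt.Println(string(b), h.lastErr)
}
```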

@jbowens jbowens requested a review from RaduBerinde November 14, 2024 17:19
@jbowens (Collaborator) left a comment

Reviewable status: 0 of 8 files reviewed, 4 unresolved discussions (waiting on @RaduBerinde and @sumeerbhola)


internal/cache/read_shard.go line 92 at r2 (raw file):

NB: we cannot place the loadBlockSema in the readShard since the readShard can be shared across the DBs, just like the block cache.

Should we make the load block semaphore shared across DBs, like the block cache? The primary motivation was preventing excessive memory utilization from many concurrent reads, right? And memory is a shared resource across DBs.

@RaduBerinde (Member):

Should we make the load block semaphore shared across DBs, like the block cache? The primary motivation was preventing excessive memory utilization from many concurrent reads, right? And memory is a shared resource across DBs.

That would have been easier to implement, but there was concern that one store that is broken and has very slow IOs will block out all other stores.
https://github.com/cockroachdb/cockroach/blob/3644f0d3fe77c03c20d3b603c1c0eb6f335e7e15/pkg/storage/pebble.go#L997

@sumeerbhola (Collaborator, Author) left a comment

I've reworked the logic based on the comments. I'll start adding tests if this seems reasonable.

Reviewable status: 0 of 8 files reviewed, 4 unresolved discussions (waiting on @jbowens and @RaduBerinde)


internal/cache/read_shard.go line 92 at r2 (raw file):

Previously, jbowens (Jackson Owens) wrote…

NB: we cannot place the loadBlockSema in the readShard since the readShard can be shared across the DBs, just like the block cache.

Should we make the load block semaphore shared across DBs, like the block cache? The primary motivation was preventing excessive memory utilization from many concurrent reads, right? And memory is a shared resource across DBs.

As @RaduBerinde mentioned, this choice was made for performance isolation of stores.


sstable/reader.go line 525 at r1 (raw file):

Why do we need a separate WaitForReadPermissionOrHandle? Can't this happen inside GetWithReadHandle.

Yes, it can. It involves passing the context and needs additional return values. I've made the change.


sstable/reader.go line 505 at r2 (raw file):

Previously, RaduBerinde wrote…

Why do we need to move the semaphore acquire into the cache code? That should just happen right before we issue the read, with the expectation that outside of strange conditions it won't do anything (and if it does, it should just behave like a slower IO). We can treat a ctx cancelation error the same way we would treat an IO error.

Initially it felt cleaner to move all the waiting into one place, and it required less explanation about the contract. But given that we still need to do semaphore acquisition for the BufferPool case, I've moved it back.

The contract comment now says:

```go
// The caller must immediately start doing a read, or can first wait on a
// shared resource that would also block a different reader if it was assigned
// the turn instead (specifically, this refers to Options.LoadBlockSema).
```

sstable/reader.go line 552 at r2 (raw file):

Previously, RaduBerinde wrote…

It would be cleaner to separate out the code starting here into a method that does the actual read and returns the decompressed block handle. We can SetReadError more cleanly after that call (this is error prone if someone adds an if err := ... case).

Good point. Done

@RaduBerinde (Member) left a comment

The approach looks good to me. I will make another pass in more detail but you can start adding tests.

Reviewable status: 0 of 8 files reviewed, 5 unresolved discussions (waiting on @jbowens and @sumeerbhola)


internal/cache/clockpro.go line 206 at r3 (raw file):

```go
// etHot. But etTest is "colder" than etCold, since the only transition
// into etTest is etCold => etTest, so since etTest transitions to
// etHot, then etCold should also transition.
```

I think you're right. The paper says that if the page isn't in the list it is added as cold (the e == nil case) but if it is "the faulted page turns into a hot page and is placed at the head of the list"

Maybe file an issue? We'd want it in a separate PR

@sumeerbhola (Collaborator, Author) left a comment

Reviewable status: 0 of 8 files reviewed, 5 unresolved discussions (waiting on @jbowens and @RaduBerinde)


internal/cache/clockpro.go line 206 at r3 (raw file):

Previously, RaduBerinde wrote…

I think you're right. The paper says that if the page isn't in the list it is added as cold (the e == nil case) but if it is "the faulted page turns into a hot page and is placed at the head of the list"

Maybe file an issue? We'd want it in a separate PR

#4178

@sumeerbhola sumeerbhola force-pushed the cache_parallel branch 3 times, most recently from 1b49df6 to f299ee0 on December 5, 2024 02:03
@sumeerbhola (Collaborator, Author) left a comment

Tests are ready.

Reviewable status: 0 of 10 files reviewed, 5 unresolved discussions (waiting on @jbowens and @RaduBerinde)

@RaduBerinde (Member) left a comment

:lgtm: Very cool!

Reviewable status: 0 of 10 files reviewed, 4 unresolved discussions (waiting on @jbowens and @sumeerbhola)


internal/cache/clockpro.go line 141 at r4 (raw file):

```go
// Cache.{Get,GetWithReadHandle}. When desireReadEntry is true, and the block
// is not in the cache (!Handle.Valid()), a non-nil readEntry is returned.
func (c *shard) GetWithMaybeReadEntry(
```

[nit] does it need to be exported?


internal/cache/read_shard.go line 69 at r4 (raw file):

```go
//     separate map. This separation also results in more modular code, instead
//     of piling more stuff into shard.
type readShard struct {
```

[nit] Would it be cleaner to just add readMap to shard and just implement the methods on *shard? We can still keep them separate.


internal/cache/read_shard.go line 226 at r4 (raw file):

```go
		return errorDuration
	}
start:
```

We should do for { and use continue. Well, we don't even need continue, we can do

```go
case _, ok := <-ch:
	if !ok {
		...
		return ..
	}
	// Probably granted permission to do the read; check again. NB: since
	// isReading is false, someone else can slip through before this thread
	// acquires e.mu, and take the turn.
```
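A runnable version of that for/select waiting pattern, with hypothetical names: a closed channel means the entry was abandoned, while a received value means the waiter was probably granted the turn and must re-check, since another goroutine can slip in first:

```go
package main

import "fmt"

// waitForTurn loops until it wins the turn (a value arrives on ch and the
// re-check under the lock succeeds) or the channel is closed (give up).
func waitForTurn(ch chan struct{}, tryAcquire func() bool) bool {
	for {
		_, ok := <-ch
		if !ok {
			return false // channel closed: entry abandoned
		}
		// Probably granted permission; re-check, since another goroutine
		// can slip through and take the turn before we do.
		if tryAcquire() {
			return true
		}
	}
}

func main() {
	ch := make(chan struct{}, 2)
	attempts := 0
	// Simulate losing the race on the first wake-up and winning the second.
	tryAcquire := func() bool { attempts++; return attempts == 2 }
	ch <- struct{}{}
	ch <- struct{}{}
	fmt.Println(waitForTurn(ch, tryAcquire), attempts)
}
```

Compared with a labeled `start:` and `goto`, the `for` loop keeps the re-check path structured without any extra control flow.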

@sumeerbhola (Collaborator, Author) left a comment

TFTR!

Reviewable status: 0 of 10 files reviewed, 4 unresolved discussions (waiting on @jbowens and @RaduBerinde)


internal/cache/clockpro.go line 141 at r4 (raw file):

Previously, RaduBerinde wrote…

[nit] does it need to be exported?

Fixed. I fell into the trap of following the pattern of other methods in shard being exported (Set/Delete/...), none of which need to be.


internal/cache/read_shard.go line 69 at r4 (raw file):

Previously, RaduBerinde wrote…

[nit] Would it be cleaner to just add readMap to shard and just implement the methods on *shard? We can still keep them separate.

This is subjective, but I like the bigger separation. Putting this data structure in shard would move a bunch of the commentary above into the shard declaration. Some of the methods could still live in this file, of course. Keeping it more separate makes the code easier to maintain, I think.


internal/cache/read_shard.go line 226 at r4 (raw file):

Previously, RaduBerinde wrote…

We should do for { and use continue. Well, we don't even need continue, we can do

```go
case _, ok := <-ch:
	if !ok {
		...
		return ..
	}
	// Probably granted permission to do the read; check again. NB: since
	// isReading is false, someone else can slip through before this thread
	// acquires e.mu, and take the turn.
```

Good point. Done.

@RaduBerinde (Member) left a comment

:lgtm:

Reviewed 5 of 10 files at r4, 5 of 5 files at r5, all commit messages.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @jbowens)


internal/cache/read_shard.go line 69 at r4 (raw file):

Previously, sumeerbhola wrote…

This is subjective, but I like the bigger separation. Putting this data-structure in shard will move a bunch of commentary above into the shard declaration. Some of the methods could still be in this file of course. Keeping this more separate I think makes our code more easy to maintain.

👍

@sumeerbhola sumeerbhola merged commit 4949684 into cockroachdb:master Dec 11, 2024
23 checks passed
Successfully merging this pull request may close these issues.

cache: don't load the same block in parallel