storage: pool pebbleReadOnly allocations #63845
Conversation
Force-pushed from 82000d1 to 644b8d8.
Reviewed 1 of 1 files at r1.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten and @sumeerbhola)
a discussion (no related file):
Nice!
Thanks for doing this -- I wrote that TODO and it then slipped my mind.
I'd be happy to see this backported.
Reviewed 1 of 1 files at r1.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten)
pkg/storage/pebble.go, line 1282 at r1 (raw file):
```go
// Instantiates a new pebbleReadOnly.
func newPebbleReadOnly(parent *Pebble) *pebbleReadOnly {
	p := pebbleReadOnlyPool.Get().(*pebbleReadOnly)
```
Can you add a comment here stating something like:
// When p is a reused pebbleReadOnly from the pool, the iter fields preserve the original reusable=true that was set above in New(), and some buffers that are safe to reuse. Everything else has been reset by pebbleIterator.destroy().
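For readers following along, here is a minimal runnable sketch of the pattern under discussion, with the suggested comment placed at the `Get()` call. The type and field names (`iter`, `readOnly`, `prefixIter`, `normalIter`) are illustrative stand-ins, not the actual `pkg/storage` definitions:

```go
package main

import (
	"fmt"
	"sync"
)

// iter stands in for pebbleIterator; only the fields relevant to reuse
// are shown.
type iter struct {
	reusable      bool
	lowerBoundBuf []byte
	upperBoundBuf []byte
}

type pebble struct{} // stand-in for *Pebble

// readOnly stands in for pebbleReadOnly.
type readOnly struct {
	parent     *pebble
	prefixIter iter
	normalIter iter
}

var readOnlyPool = sync.Pool{
	New: func() interface{} {
		// reusable=true is set exactly once, when the pool creates the
		// object; it must survive every round trip through the pool.
		return &readOnly{
			prefixIter: iter{reusable: true},
			normalIter: iter{reusable: true},
		}
	},
}

// newReadOnly mirrors the shape of newPebbleReadOnly, with the comment
// suggested above placed at the reuse point.
func newReadOnly(parent *pebble) *readOnly {
	p := readOnlyPool.Get().(*readOnly)
	// When p is a reused readOnly from the pool, the iter fields preserve
	// the original reusable=true that was set above in New(), and some
	// buffers that are safe to reuse. Everything else has been reset by
	// the iterator's destroy().
	*p = readOnly{
		parent:     parent,
		prefixIter: p.prefixIter,
		normalIter: p.normalIter,
	}
	return p
}

func main() {
	p := newReadOnly(&pebble{})
	readOnlyPool.Put(p)
	q := newReadOnly(&pebble{})
	fmt.Println(q.prefixIter.reusable) // true: survived the pool round trip
}
```

The struct-literal assignment is the key move: it zeroes every field except the ones explicitly carried over, which is why the suggested comment is worth having at exactly that line.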
This commit introduces object pooling for `pebbleReadOnly` allocation avoidance. I found this to be important both because it avoids the initial `pebbleReadOnly` allocation, and because it allows the memory recycling inside each `pebbleIterator` owned by a `pebbleReadOnly` to work correctly. I found this while running a few microbenchmarks and noticing that the lockTable's calls to `intentInterleavingReader` were resulting in a large number of heap allocations in `(*pebbleReadOnly).NewEngineIterator`. This was because `lowerBoundBuf` and `upperBoundBuf` were always nil, so each append (all 4 of them) in `(*pebbleIterator).init` was causing an allocation.

```
name                          old time/op    new time/op    delta
KV/Scan/Native/rows=1-16      30.9µs ± 4%    29.9µs ± 6%    -3.29%  (p=0.000 n=20+20)
KV/Scan/Native/rows=100-16    54.2µs ± 4%    52.7µs ± 5%    -2.84%  (p=0.002 n=20+20)
KV/Scan/Native/rows=10-16     34.0µs ± 3%    33.1µs ± 6%    -2.64%  (p=0.001 n=20+20)
KV/Scan/Native/rows=1000-16    253µs ± 5%     255µs ± 5%      ~     (p=0.659 n=20+20)
KV/Scan/Native/rows=10000-16  2.16ms ± 4%    2.14ms ± 3%      ~     (p=0.072 n=20+20)

name                          old alloc/op   new alloc/op   delta
KV/Scan/Native/rows=1-16      8.69kB ± 0%    7.49kB ± 0%    -13.79%  (p=0.000 n=20+19)
KV/Scan/Native/rows=10-16     10.1kB ± 0%     8.9kB ± 0%    -11.87%  (p=0.000 n=20+18)
KV/Scan/Native/rows=100-16    22.7kB ± 0%    21.5kB ± 0%     -5.29%  (p=0.000 n=17+19)
KV/Scan/Native/rows=1000-16    174kB ± 0%     172kB ± 0%     -0.66%  (p=0.000 n=19+19)
KV/Scan/Native/rows=10000-16  1.51MB ± 0%    1.51MB ± 0%     -0.05%  (p=0.000 n=16+19)

name                          old allocs/op  new allocs/op  delta
KV/Scan/Native/rows=1-16        71.0 ± 0%      62.0 ± 0%    -12.68%  (p=0.000 n=20+20)
KV/Scan/Native/rows=10-16       75.0 ± 0%      66.0 ± 0%    -12.00%  (p=0.000 n=20+19)
KV/Scan/Native/rows=100-16      79.0 ± 0%      70.0 ± 0%    -11.39%  (p=0.000 n=19+19)
KV/Scan/Native/rows=1000-16     87.8 ± 1%      79.0 ± 0%     -9.97%  (p=0.000 n=20+16)
KV/Scan/Native/rows=10000-16     113 ± 2%       103 ± 2%     -8.19%  (p=0.000 n=17+19)
```

We may want to consider this as a candidate to backport to release-21.1, because the lack of pooling here was even more detrimental with the separated lockTable, which creates a separate `EngineIterator` with lower and upper bounds. So this may have a small impact on cockroachdb#62078.

Release note (performance improvement): A series of heap allocations performed when serving read-only queries are now avoided.
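To make the allocation claim concrete, here is a small hypothetical Go benchmark (not from the CockroachDB tree) contrasting appends into fresh nil bounds buffers, as happened before this change, with appends into recycled buffers, as happens after it:

```go
package pool

import "testing"

// This mimics the append pattern in (*pebbleIterator).init: before
// pooling, the bounds buffers start nil on every call; with pooling,
// they keep their capacity across uses.
var (
	lower = []byte("lock-table/lower-bound")
	upper = []byte("lock-table/upper-bound")
	sink  []byte // forces the buffers to escape to the heap
)

// BenchmarkBoundsFresh starts from nil buffers each iteration, so every
// append must allocate a new backing array.
func BenchmarkBoundsFresh(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		var lowerBoundBuf, upperBoundBuf []byte
		lowerBoundBuf = append(lowerBoundBuf, lower...)
		upperBoundBuf = append(upperBoundBuf, upper...)
		sink = lowerBoundBuf
		sink = upperBoundBuf
	}
}

// BenchmarkBoundsRecycled reuses the same buffers, truncating with [:0],
// so after warm-up no iteration allocates.
func BenchmarkBoundsRecycled(b *testing.B) {
	var lowerBoundBuf, upperBoundBuf []byte
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		lowerBoundBuf = append(lowerBoundBuf[:0], lower...)
		upperBoundBuf = append(upperBoundBuf[:0], upper...)
		sink = lowerBoundBuf
	}
}
```

The recycled variant is exactly what pooling the parent `pebbleReadOnly` enables: the buffers survive between requests, so the steady-state allocs/op for the bounds drops to zero.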
Force-pushed from 644b8d8 to f01e15d.
TFTR! I'll backport to release-21.1 once this merges.
bors r+
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @stevendanna and @sumeerbhola)
pkg/storage/pebble.go, line 1282 at r1 (raw file):
Previously, sumeerbhola wrote…
Can you add a comment here stating something like:
// When p is a reused pebbleReadOnly from the pool, the iter fields preserve the original reusable=true that was set above in New(), and some buffers that are safe to reuse. Everything else has been reset by pebbleIterator.destroy().
Done.
Build succeeded.