
storage: pool pebbleReadOnly allocations #63845

Merged

Conversation

nvanbenschoten
Member

@nvanbenschoten nvanbenschoten commented Apr 19, 2021

This commit introduces object pooling to avoid pebbleReadOnly allocations. I found this to be important both because it avoids the initial pebbleReadOnly allocation and because it allows the memory recycling inside each pebbleIterator owned by a pebbleReadOnly to work correctly.

I found this while running a few microbenchmarks and noticing that the lockTable's calls to intentInterleavingReader were resulting in a large number of heap allocations in (*pebbleReadOnly).NewEngineIterator. This was because lowerBoundBuf and upperBoundBuf were always nil and so each append (all 4 of them) in (*pebbleIterator).init was causing an allocation.

```
name                          old time/op    new time/op    delta
KV/Scan/Native/rows=1-16        30.9µs ± 4%    29.9µs ± 6%   -3.29%  (p=0.000 n=20+20)
KV/Scan/Native/rows=100-16      54.2µs ± 4%    52.7µs ± 5%   -2.84%  (p=0.002 n=20+20)
KV/Scan/Native/rows=10-16       34.0µs ± 3%    33.1µs ± 6%   -2.64%  (p=0.001 n=20+20)
KV/Scan/Native/rows=1000-16      253µs ± 5%     255µs ± 5%     ~     (p=0.659 n=20+20)
KV/Scan/Native/rows=10000-16    2.16ms ± 4%    2.14ms ± 3%     ~     (p=0.072 n=20+20)

name                          old alloc/op   new alloc/op   delta
KV/Scan/Native/rows=1-16        8.69kB ± 0%    7.49kB ± 0%  -13.79%  (p=0.000 n=20+19)
KV/Scan/Native/rows=10-16       10.1kB ± 0%     8.9kB ± 0%  -11.87%  (p=0.000 n=20+18)
KV/Scan/Native/rows=100-16      22.7kB ± 0%    21.5kB ± 0%   -5.29%  (p=0.000 n=17+19)
KV/Scan/Native/rows=1000-16      174kB ± 0%     172kB ± 0%   -0.66%  (p=0.000 n=19+19)
KV/Scan/Native/rows=10000-16    1.51MB ± 0%    1.51MB ± 0%   -0.05%  (p=0.000 n=16+19)

name                          old allocs/op  new allocs/op  delta
KV/Scan/Native/rows=1-16          71.0 ± 0%      62.0 ± 0%  -12.68%  (p=0.000 n=20+20)
KV/Scan/Native/rows=10-16         75.0 ± 0%      66.0 ± 0%  -12.00%  (p=0.000 n=20+19)
KV/Scan/Native/rows=100-16        79.0 ± 0%      70.0 ± 0%  -11.39%  (p=0.000 n=19+19)
KV/Scan/Native/rows=1000-16       87.8 ± 1%      79.0 ± 0%   -9.97%  (p=0.000 n=20+16)
KV/Scan/Native/rows=10000-16       113 ± 2%       103 ± 2%   -8.19%  (p=0.000 n=17+19)
```

We may want to consider this as a candidate to backport to release-21.1, because the lack of pooling here was even more detrimental with the separated lockTable, which creates a separate EngineIterator with lower and upper bounds. So this may have a small impact on #62078.

Release note (performance improvement): A series of heap allocations performed when serving read-only queries are now avoided.

@cockroach-teamcity

This change is Reviewable

@nvanbenschoten nvanbenschoten force-pushed the nvanbenschoten/poolPebbleRO branch from 82000d1 to 644b8d8 Compare April 19, 2021 16:36
Collaborator

@stevendanna stevendanna left a comment


Reviewed 1 of 1 files at r1.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten and @sumeerbhola)

a discussion (no related file):
Nice!


Collaborator

@sumeerbhola sumeerbhola left a comment


Thanks for doing this -- I wrote that TODO and it then slipped my mind.
I'd be happy to see this backported.

:lgtm:

Reviewed 1 of 1 files at r1.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten)


pkg/storage/pebble.go, line 1282 at r1 (raw file):

```go
// Instantiates a new pebbleReadOnly.
func newPebbleReadOnly(parent *Pebble) *pebbleReadOnly {
	p := pebbleReadOnlyPool.Get().(*pebbleReadOnly)
```

Can you add a comment here stating something like:
// When p is a reused pebbleReadOnly from the pool, the iter fields preserve the original reusable=true that was set above in New(), and some buffers that are safe to reuse. Everything else has been reset by pebbleIterator.destroy().

@nvanbenschoten nvanbenschoten force-pushed the nvanbenschoten/poolPebbleRO branch from 644b8d8 to f01e15d Compare April 20, 2021 18:17
Member Author

@nvanbenschoten nvanbenschoten left a comment


TFTR! I'll backport to release-21.1 once this merges.

bors r+

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @stevendanna and @sumeerbhola)


pkg/storage/pebble.go, line 1282 at r1 (raw file):

Previously, sumeerbhola wrote…

Can you add a comment here stating something like:
// When p is a reused pebbleReadOnly from the pool, the iter fields preserve the original reusable=true that was set above in New(), and some buffers that are safe to reuse. Everything else has been reset by pebbleIterator.destroy().

Done.

@craig
Contributor

craig bot commented Apr 21, 2021

Build succeeded:

@craig craig bot merged commit 075aa5d into cockroachdb:master Apr 21, 2021
@nvanbenschoten nvanbenschoten deleted the nvanbenschoten/poolPebbleRO branch April 22, 2021 17:25