
kv: don't embed RangeDescriptor and Lease in EvictionToken, avoid routing allocs #66374

Merged

Conversation

@nvanbenschoten (Member) commented Jun 11, 2021

This PR contains a series of changes to avoid heap allocations in the hot path of KV processing.

kv: avoid heap allocation for single-range routing

For requests that contain only a single "part", we avoid a heap allocation in `DistSender.Send`.
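A minimal sketch of the single-range fast path (the names here are illustrative, not the real DistSender internals): when a batch has exactly one "part", back the slice with a fixed-size array instead of calling `make`, so the compiler can keep it off the heap.

```go
package main

import "fmt"

type part struct{ key string }

// splitBatch avoids the heap allocation in the common single-part case
// by slicing a stack-resident array; the multi-part case still allocates.
func splitBatch(keys []string) int {
	var scratch [1]part // stays on the stack in the single-part case
	var parts []part
	if len(keys) == 1 {
		parts = scratch[:]
	} else {
		parts = make([]part, len(keys)) // multi-part case allocates as before
	}
	for i, k := range keys {
		parts[i] = part{key: k}
	}
	return len(parts) // parts must not escape for the optimization to hold
}

func main() {
	fmt.Println(splitBatch([]string{"a"}), splitBatch([]string{"a", "b", "c"}))
}
```

The trick only works because the slice never escapes the function; if it did, escape analysis would move the backing array to the heap anyway.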

kv: don't embed RangeDescriptor and Lease in EvictionToken

This commit changes the RangeDescriptor and Lease fields in EvictionToken to be stored by reference, pointing into an immutable CacheEntry object, instead of being copied and stored by value. The change was made because storing the fields by value caused the token to escape to the heap all over the place: it was too easy to call Desc() or Leaseholder() and have the resulting reference escape, which meant that a single EvictionToken was allocated multiple times throughout the DistSender stack.

A secondary benefit of this change is that it removes memory copies and shrinks the size of the EvictionToken from 224 bytes to 40 bytes.
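A sketch of the shape of this fix, with hypothetical stand-in types rather than the actual rangecache API: the token carries one pointer into the immutable cache entry, and its accessors return pointers into that entry rather than into the receiver.

```go
package main

import "fmt"

// Hypothetical stand-ins for the real rangecache types.
type RangeDescriptor struct{ RangeID int }
type Lease struct{ Replica int }

// CacheEntry is heap-resident and treated as immutable once published.
type CacheEntry struct {
	desc  RangeDescriptor
	lease Lease
}

// EvictionToken carries one pointer into the cache instead of embedding
// the descriptor and lease by value, shrinking the token and removing
// copies. Desc/Leaseholder return pointers into the shared entry, not
// into the receiver, so calling them no longer forces the token itself
// to escape to the heap.
type EvictionToken struct {
	entry *CacheEntry // nil means an empty token
}

func (et EvictionToken) Desc() *RangeDescriptor {
	if et.entry == nil {
		return nil
	}
	return &et.entry.desc
}

func (et EvictionToken) Leaseholder() *Lease {
	if et.entry == nil {
		return nil
	}
	return &et.entry.lease
}

func main() {
	entry := &CacheEntry{desc: RangeDescriptor{RangeID: 7}, lease: Lease{Replica: 2}}
	tok := EvictionToken{entry: entry}
	fmt.Println(tok.Desc().RangeID, tok.Leaseholder().Replica)
}
```

Immutability of CacheEntry is what makes sharing by pointer safe here: many tokens can alias the same entry without synchronization.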


These changes combine to have the following impact:

name                            old time/op    new time/op    delta
KV/Scan/Native/rows=1-16          30.8µs ± 9%    29.0µs ± 3%  -5.98%  (p=0.000 n=10+9)
KV/Scan/Native/rows=100-16        53.2µs ± 2%    51.4µs ± 1%  -3.24%  (p=0.000 n=9+8)
KV/Scan/Native/rows=10-16         32.9µs ± 3%    32.0µs ± 4%  -2.84%  (p=0.004 n=9+10)
KV/Update/Native/rows=1-16         155µs ± 4%     152µs ± 3%  -1.80%  (p=0.024 n=9+9)
KV/Scan/Native/rows=1000-16        251µs ± 2%     247µs ± 2%  -1.54%  (p=0.046 n=8+9)
KV/Insert/Native/rows=1-16        97.7µs ± 4%    98.7µs ± 3%    ~     (p=0.133 n=9+10)
KV/Insert/Native/rows=10-16        140µs ± 2%     140µs ± 3%    ~     (p=0.661 n=10+9)
KV/Insert/Native/rows=100-16       503µs ± 4%     500µs ± 4%    ~     (p=0.739 n=10+10)
KV/Insert/Native/rows=1000-16     3.85ms ± 5%    3.94ms ± 7%    ~     (p=0.143 n=10+10)
KV/Insert/Native/rows=10000-16    42.8ms ± 9%    41.5ms ± 6%    ~     (p=0.280 n=10+10)
KV/Update/Native/rows=10-16        335µs ± 3%     333µs ± 2%    ~     (p=0.243 n=10+9)
KV/Update/Native/rows=100-16      1.87ms ± 9%    1.85ms ± 6%    ~     (p=0.842 n=10+9)
KV/Update/Native/rows=1000-16     15.9ms ± 5%    16.3ms ± 6%    ~     (p=0.105 n=10+10)
KV/Update/Native/rows=10000-16     130ms ± 2%     133ms ± 7%    ~     (p=0.122 n=8+10)
KV/Delete/Native/rows=1-16        98.4µs ± 3%    98.6µs ± 3%    ~     (p=0.971 n=10+10)
KV/Delete/Native/rows=10-16        154µs ± 2%     154µs ± 2%    ~     (p=0.645 n=8+8)
KV/Delete/Native/rows=100-16       601µs ± 3%     608µs ± 3%    ~     (p=0.222 n=9+9)
KV/Delete/Native/rows=1000-16     4.92ms ± 6%    5.00ms ±12%    ~     (p=0.796 n=10+10)
KV/Delete/Native/rows=10000-16    51.8ms ±10%    51.4ms ± 9%    ~     (p=0.796 n=10+10)
KV/Scan/Native/rows=10000-16      2.13ms ± 5%    2.14ms ± 5%    ~     (p=0.912 n=10+10)

name                            old alloc/op   new alloc/op   delta
KV/Scan/Native/rows=1-16          7.15kB ± 0%    6.46kB ± 0%  -9.68%  (p=0.000 n=10+10)
KV/Scan/Native/rows=10-16         8.56kB ± 0%    7.87kB ± 0%  -8.08%  (p=0.000 n=10+9)
KV/Update/Native/rows=1-16        21.7kB ± 1%    20.3kB ± 1%  -6.38%  (p=0.000 n=10+10)
KV/Insert/Native/rows=1-16        14.7kB ± 1%    14.0kB ± 0%  -5.09%  (p=0.000 n=10+9)
KV/Delete/Native/rows=1-16        14.2kB ± 1%    13.6kB ± 2%  -4.28%  (p=0.000 n=10+10)
KV/Scan/Native/rows=100-16        21.1kB ± 0%    20.4kB ± 0%  -3.25%  (p=0.000 n=10+10)
KV/Update/Native/rows=10-16       64.8kB ± 0%    63.2kB ± 0%  -2.34%  (p=0.000 n=9+9)
KV/Delete/Native/rows=10-16       35.1kB ± 1%    34.4kB ± 0%  -2.15%  (p=0.000 n=10+10)
KV/Insert/Native/rows=10-16       38.9kB ± 1%    38.2kB ± 1%  -1.80%  (p=0.000 n=10+10)
KV/Insert/Native/rows=100-16       277kB ± 1%     275kB ± 1%  -0.74%  (p=0.011 n=10+10)
KV/Update/Native/rows=100-16       485kB ± 0%     483kB ± 1%  -0.43%  (p=0.002 n=10+10)
KV/Scan/Native/rows=1000-16        172kB ± 0%     171kB ± 0%  -0.42%  (p=0.000 n=9+10)
KV/Scan/Native/rows=10000-16      1.51MB ± 0%    1.51MB ± 0%  -0.06%  (p=0.000 n=9+10)
KV/Insert/Native/rows=1000-16     2.54MB ± 1%    2.54MB ± 0%    ~     (p=0.684 n=10+10)
KV/Insert/Native/rows=10000-16    33.9MB ± 1%    34.1MB ± 2%    ~     (p=0.146 n=8+10)
KV/Update/Native/rows=1000-16     4.57MB ± 1%    4.56MB ± 1%    ~     (p=0.123 n=10+10)
KV/Update/Native/rows=10000-16    64.5MB ± 1%    64.7MB ± 2%    ~     (p=0.739 n=10+10)
KV/Delete/Native/rows=100-16       241kB ± 0%     240kB ± 1%    ~     (p=0.143 n=10+10)
KV/Delete/Native/rows=1000-16     2.22MB ± 1%    2.21MB ± 1%    ~     (p=0.113 n=9+10)
KV/Delete/Native/rows=10000-16    30.0MB ± 2%    30.1MB ± 1%    ~     (p=0.165 n=10+10)

name                            old allocs/op  new allocs/op  delta
KV/Scan/Native/rows=1-16            52.0 ± 0%      48.0 ± 0%  -7.69%  (p=0.000 n=10+10)
KV/Scan/Native/rows=10-16           56.0 ± 0%      52.0 ± 0%  -7.14%  (p=0.000 n=10+9)
KV/Scan/Native/rows=100-16          60.0 ± 0%      56.0 ± 0%  -6.67%  (p=0.000 n=10+10)
KV/Scan/Native/rows=1000-16         69.0 ± 0%      65.0 ± 0%  -5.80%  (p=0.000 n=9+9)
KV/Update/Native/rows=1-16           181 ± 0%       173 ± 0%  -4.42%  (p=0.000 n=10+10)
KV/Scan/Native/rows=10000-16        96.3 ± 1%      92.4 ± 4%  -4.00%  (p=0.000 n=8+10)
KV/Delete/Native/rows=1-16           116 ± 0%       112 ± 0%  -3.45%  (p=0.000 n=10+9)
KV/Insert/Native/rows=1-16           117 ± 0%       113 ± 0%  -3.42%  (p=0.001 n=8+9)
KV/Update/Native/rows=10-16          442 ± 0%       434 ± 0%  -1.76%  (p=0.000 n=8+9)
KV/Delete/Native/rows=10-16          235 ± 0%       231 ± 0%  -1.70%  (p=0.000 n=9+10)
KV/Insert/Native/rows=10-16          264 ± 0%       260 ± 0%  -1.52%  (p=0.000 n=10+9)
KV/Delete/Native/rows=100-16       1.26k ± 0%     1.25k ± 0%  -0.33%  (p=0.000 n=7+10)
KV/Update/Native/rows=100-16       2.65k ± 0%     2.64k ± 0%  -0.32%  (p=0.000 n=10+10)
KV/Insert/Native/rows=100-16       1.56k ± 0%     1.55k ± 0%  -0.28%  (p=0.000 n=10+6)
KV/Update/Native/rows=1000-16      25.9k ± 0%     25.9k ± 0%  -0.05%  (p=0.001 n=10+10)
KV/Delete/Native/rows=1000-16      11.3k ± 0%     11.3k ± 0%  -0.04%  (p=0.012 n=10+9)
KV/Insert/Native/rows=1000-16      14.3k ± 0%     14.3k ± 0%  -0.03%  (p=0.030 n=10+10)
KV/Insert/Native/rows=10000-16      141k ± 0%      141k ± 0%    ~     (p=0.839 n=10+10)
KV/Update/Native/rows=10000-16      263k ± 0%      262k ± 0%    ~     (p=0.101 n=10+10)
KV/Delete/Native/rows=10000-16      111k ± 0%      111k ± 0%    ~     (p=0.382 n=10+10)

@nvanbenschoten nvanbenschoten requested review from andreimatei, a team and dt and removed request for a team June 11, 2021 16:45
@cockroach-teamcity (Member): This change is Reviewable

@nvanbenschoten nvanbenschoten removed the request for review from dt June 11, 2021 16:45
@nvanbenschoten nvanbenschoten force-pushed the nvanbenschoten/distSenderOptimizations branch from bb88061 to 3bab090 Compare June 11, 2021 19:26
@andreimatei (Contributor) left a comment


:lgtm:

Reviewed 1 of 1 files at r5.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten)

@andreimatei (Contributor) left a comment


> This change is made because storing the fields by value was causing the object to escape to the heap all over the place. It was too easy to call Desc() or Leaseholder() and then have this reference escape

This allocation shows up in the analysis output as the receiver of the Leaseholder method escaping?

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten)

@nvanbenschoten (Member Author)

> This allocation shows up in the analysis output as the receiver of the Leaseholder method escaping?

Yes. Here's the output of goescape before and after the final commit in this PR.

before:

```
➜ goescape . | grep moved | grep dist_sender.go
./dist_sender.go:327:20: moved to heap: cfg
./dist_sender.go:564:34: moved to heap: rangecache.et
./dist_sender.go:566:55: moved to heap: rangecache.et
./dist_sender.go:1747:22: moved to heap: rangecache.et
./dist_sender.go:1749:36: moved to heap: rangecache.et
./dist_sender.go:1098:24: moved to heap: roachpb.t
./dist_sender.go:1130:101: moved to heap: rangecache.et
./dist_sender.go:1484:9: moved to heap: s
./dist_sender.go:1417:41: moved to heap: rangecache.et
./dist_sender.go:1467:53: moved to heap: rangecache.et
./dist_sender.go:1485:66: moved to heap: rangecache.et
./dist_sender.go:1567:22: moved to heap: rangecache.et
./dist_sender.go:1494:10: moved to heap: s
./dist_sender.go:1339:2: moved to heap: ba
./dist_sender.go:1341:2: moved to heap: routing
./dist_sender.go:879:2: moved to heap: qiBa
```

after:

```
➜ goescape . | grep moved | grep dist_sender.go
./dist_sender.go:327:20: moved to heap: cfg
./dist_sender.go:1098:24: moved to heap: roachpb.t
./dist_sender.go:1484:9: moved to heap: s
./dist_sender.go:1494:10: moved to heap: s
./dist_sender.go:1339:2: moved to heap: ba
./dist_sender.go:879:2: moved to heap: qiBa
```
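To make the "receiver escaping" report concrete, here is a minimal illustration (with hypothetical types, not the real ones) of why a value receiver that hands out interior pointers forces a heap allocation; building with `go build -gcflags='-m'` shows the "moved to heap" diagnostics that goescape surfaces.

```go
package main

import "fmt"

// big mimics the old 224-byte EvictionToken, with the descriptor and
// lease embedded by value (hypothetical stand-in fields).
type big struct {
	desc  [20]int64
	lease [8]int64
}

// Desc returns a pointer into the receiver. Because the receiver is a
// value, the compiler must heap-allocate the receiver's copy whenever
// the returned pointer may outlive the call — the compiler reports
// "moved to heap: b" for this method's receiver.
func (b big) Desc() *[20]int64 { return &b.desc }

var sink *[20]int64

func main() {
	var b big
	b.desc[0] = 42
	sink = b.Desc() // the copy of b escapes to the heap here
	fmt.Println(sink[0])
}
```

After the fix, Desc() returns a pointer into the already-heap-resident cache entry rather than into the receiver, so only the token's small pointer field gets copied and nothing new escapes.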

@nvanbenschoten (Member Author)

TFTR!

bors r+


craig bot commented Jun 25, 2021

Build failed:

@nvanbenschoten (Member Author)

logictest flake. I've run it 4 more times without seeing the same timeout. The stacktraces didn't look like anything was stuck, just slow, which doesn't surprise me given how slow TC is to touch today.

bors r+

craig bot pushed a commit that referenced this pull request Jun 25, 2021

66374: kv: don't embed RangeDescriptor and Lease in EvictionToken, avoid routing allocs r=nvanbenschoten a=nvanbenschoten

Co-authored-by: Nathan VanBenschoten <[email protected]>

craig bot commented Jun 25, 2021

Build failed:

@nvanbenschoten nvanbenschoten force-pushed the nvanbenschoten/distSenderOptimizations branch from b1d702c to 8d0b7a2 Compare June 28, 2021 20:28
@nvanbenschoten (Member Author)

bors r+


craig bot commented Jun 29, 2021

Build succeeded:

@craig craig bot merged commit a139637 into cockroachdb:master Jun 29, 2021
@nvanbenschoten nvanbenschoten deleted the nvanbenschoten/distSenderOptimizations branch June 29, 2021 19:21
craig bot pushed a commit that referenced this pull request Jul 21, 2021
66927: kvcoord: setting to reject txns above lock span limit r=andreimatei a=andreimatei

This patch introduces kv.transaction.reject_over_max_intents_budget. If
set, this changes our behavior when a txn exceeds its locks+in-flight
write budget (kv.transaction.max_intents_bytes): instead of compacting
some of its lock spans with precision loss, the request causing the
budget to be exceeded will be rejected instead.

The idea is that we've seen transactions that exceed this budget be very
expensive to clean up - they have to scan a lot to find their intents,
and these cleanups take wide latches. So now one has the option to
reject these transactions, instead of risking this performance cliff.

Each request is checked against the budget by the pipeliner before being
sent out for evaluation. This check is not precise, since the exact
effects of the request on the memory budget are only known at response
time because of ResumeSpans, effects of QueryIntents, etc. So, the check
is best-effort. If a slips through and then the response overflows the
budget, we keep the locks non-condensed; if a further request in the txn
tries to lock more, it'll be rejected. A commit/rollback is
always allowed to pass through, since it doesn't lock anything by
itself.
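The check described above can be sketched as follows; this is a hedged approximation of the idea, not the actual kvcoord pipeliner code, and all names here are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// pipeliner tracks a best-effort estimate of a txn's lock footprint.
type pipeliner struct {
	lockBytes  int64 // bytes of lock spans tracked so far
	maxBudget  int64 // cf. kv.transaction.max_intents_bytes
	rejectOver bool  // cf. kv.transaction.reject_over_max_intents_budget
}

var errBudget = errors.New("lock span budget exceeded")

// checkBudget runs before a request is sent for evaluation. It uses an
// estimate of the request's lock footprint — the exact effect is only
// known at response time (ResumeSpans, QueryIntents, etc.) — so the
// check is best-effort by design. Commit/rollback always passes, since
// it locks nothing itself and lets the txn clean up.
func (p *pipeliner) checkBudget(estBytes int64, isEndTxn bool) error {
	if isEndTxn {
		return nil // commit/rollback always allowed through
	}
	if p.rejectOver && p.lockBytes+estBytes > p.maxBudget {
		return errBudget
	}
	p.lockBytes += estBytes
	return nil
}

func main() {
	p := &pipeliner{maxBudget: 100, rejectOver: true}
	fmt.Println(p.checkBudget(60, false)) // under budget
	fmt.Println(p.checkBudget(60, false)) // would exceed: rejected
	fmt.Println(p.checkBudget(0, true))   // EndTxn always allowed
}
```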

Fixes #66742

Release note (general change): A new cluster setting
(kv.transaction.reject_over_max_intents_budget) affords control over the
behavior when a transaction exceeds its "locks-tracking memory budget"
(dictated by kv.transaction.max_intents_bytes). Instead of allowing such
transactions to continue with imprecise tracking of their locks, setting
this new option rejects the query that would push its transaction over
this budget with an error (error code 53400 - "configuration limit
exceeded"). Transactions that don't track their locks precisely are
potentially destabilizing for the cluster since cleaning them up can
take considerable resources. Transactions that change many rows have the
potential to run into this memory budget issue.

67444: rangecache: add a gcassert:noescape r=jordanlewis a=jordanlewis

#66374 made some changes to the rangecache to avoid allocations.

https://github.com/jordanlewis/gcassert just learned the `//gcassert:noescape` annotation, so upgrade the library, add the annotation to one of the spots that we don't want to escape, and add the rangecache package to the list of packages checked with gcassert.

Co-authored-by: Andrei Matei <[email protected]>
Co-authored-by: Jordan Lewis <[email protected]>
miretskiy pushed a commit to miretskiy/cockroach that referenced this pull request Jan 6, 2022
Treat nil lease as empty when checking for empty lease.
Recent changes in cockroachdb#66374
as well as in cockroachdb#74315 changed
how the lease is stored and modified the receiver type. As a result,
attempts to print an empty `EvictionToken{}` result in a nil panic.

Release Notes: None
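The fix amounts to making the emptiness check tolerate a nil pointer; a minimal sketch with a hypothetical stand-in type (not the exact roachpb method):

```go
package main

import "fmt"

// Lease is a hypothetical stand-in for the real lease type.
type Lease struct{ Replica int }

// Empty treats a nil receiver the same as a zero-valued lease, so that
// formatting an empty EvictionToken{} — whose lease pointer is nil
// after the switch to storage by reference — no longer panics.
func (l *Lease) Empty() bool { return l == nil || *l == (Lease{}) }

func main() {
	var nilLease *Lease // method calls on a nil pointer receiver are legal in Go
	fmt.Println(nilLease.Empty(), (&Lease{}).Empty(), (&Lease{Replica: 1}).Empty())
}
```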
craig bot pushed a commit that referenced this pull request Jan 7, 2022
73395: changefeedccl: Explicitly set min_resolved_frequency. r=miretskiy a=miretskiy

Previously, the `resolved` option was used to control the frequency of
aggregators emitting checkpoint information to the frontier.
Now, this frequency is controlled via `min_checkpoint_frequency`.
Set the checkpoint frequency to 10s so that checkpoints are emitted
more frequently. The default of 30s causes the latency verifier
to fail at 1 minute, since it's now very easy to simply miss
sending frontier updates (so latency can climb to almost 1 minute),
and the closed timestamp setting of 10 seconds can exceed the
required thresholds.

Fixes #73295
Fixes #73294
Fixes #72806

Release Notes: none

74552: kv: Allow nil when checking for empty lease. r=knz a=miretskiy

Treat nil lease as empty when checking for empty lease.
Recent changes in #66374
as well as in #74315 changed
how the lease is stored and modified the receiver type. As a result,
attempts to print an empty `EvictionToken{}` result in a nil panic.

Release Notes: None

Co-authored-by: Yevgeniy Miretskiy <[email protected]>