DEPS: upgrade grpc to v1.68.0 #136278
Conversation
Force-pushed from 142b2a1 to 27e5b9d.
re: the […] The most self-contained way to see this is […]

These checks were introduced in grpc/grpc-go#7184. If we start n3 with […]

Now, due to mixed-version constraints, we definitely have to bypass this check. We can't reliably inject an env var in our customer environments, and we can't mutate […]

What I don't understand is why the old release isn't getting this right. After all, we're using gRPC, and gRPC fills this in by default, both now and back in v1.56.3. The way this should work is as follows. Assume we're the "master" binary, i.e. cockroach with gRPC 1.56.3. Our gRPC server has its credentials defined here:

```go
if !rpcCtx.ContextOptions.Insecure {
	tlsConfig, err := rpcCtx.GetServerTLSConfig()
	if err != nil {
		return nil, sii, err
	}
	grpcOpts = append(grpcOpts, grpc.Creds(credentials.NewTLS(tlsConfig)))
}
```

which calls into this:

```go
// NewTLS uses c to construct a TransportCredentials based on TLS.
func NewTLS(c *tls.Config) TransportCredentials {
	tc := &tlsCreds{credinternal.CloneTLSConfig(c)}
	tc.config.NextProtos = credinternal.AppendH2ToNextProtos(tc.config.NextProtos)
	return tc
}
```

note the `AppendH2ToNextProtos` call. I'll need to poke at this some more.
CRDB at v1.68.0 fails to communicate with CRDB at v1.56.3 due to this check. This is strange, since CRDB uses gRPC throughout, and in both v1.56.3 and v1.68.0 uses `credentials.NewTLS`, which ensures that `NextProtos` always contains `h2`. gRPC introduced this check in grpc#7535. See: cockroachdb/cockroach#136278 (comment)
This ALPN thing is sorted out, see #136367. It seems benign; we just need to make sure we keep that check disabled for long enough to not have to interop with versions of CRDB that precede this PR.
re: the memory blow-up seen here, I think I have a handle on this as well.

```go
var out mem.BufferSlice
_, err = io.Copy(mem.NewWriter(&out, pool), io.LimitReader(dcReader, int64(maxReceiveMessageSize)+1))
if err != nil {
```

and

```go
if buf == nil {
	size := 32 * 1024
	if l, ok := src.(*LimitedReader); ok && int64(size) > l.N {
		if l.N < 1 {
			size = 1
		} else {
			size = int(l.N)
		}
	}
	buf = make([]byte, size)
}
```

The second snippet is `io.Copy`'s scratch-buffer allocation: because our `LimitedReader` allows up to `maxReceiveMessageSize+1` bytes, every receive allocates a fresh 32KiB temporary buffer. There should be ways to avoid that, now that we're switching to a fork of gRPC already anyway, but I assume there'll be interest in fixing this upstream, too.
I added a patch (…)
This benchmarks simple unary requests across a gRPC server with CockroachDB-specific settings (snappy compression, etc).
Use an existing test service that also does streaming-streaming, which allows the benchmark to highlight the overhead of unary RPCs as implemented by gRPC.
These will fire after the gRPC bump in the subsequent commits.
from v1.56.3 to v1.68.0. TODO: changelog
Similar but different error:

```
link: package conflict error: google.golang.org/genproto/googleapis/cloud/location: package imports google.golang.org/genproto/googleapis/api/annotations
  was compiled with: @org_golang_google_genproto//googleapis/api/annotations:annotations
  but was linked with: @org_golang_google_genproto_googleapis_api//annotations:annotations
```

Compare with the original:

```
link: package conflict error: cloud.google.com/go/pubsub/apiv1/pubsubpb: package imports google.golang.org/genproto/googleapis/api/annotations
  was compiled with: @org_golang_google_genproto_googleapis_api//annotations:annotations
  but was linked with: @org_golang_google_genproto//googleapis/api/annotations:annotations
```
Error unchanged from last commit. I did remember to `./dev gen bazel`.

```
link: package conflict error: google.golang.org/genproto/googleapis/cloud/location: package imports google.golang.org/genproto/googleapis/api/annotations
  was compiled with: @org_golang_google_genproto//googleapis/api/annotations:annotations
  but was linked with: @org_golang_google_genproto_googleapis_api//annotations:annotations
```
…f68ea54

See googleapis/go-genproto#1015. Sadly, grpc-gateway is incompatible with this version of `genproto`:

```
ERROR: no such package '@@org_golang_google_genproto//googleapis/api/httpbody': BUILD file not found in directory 'googleapis/api/httpbody' of external repository @@org_golang_google_genproto. Add a BUILD file to a directory to mark it as a package.
ERROR: /private/var/tmp/_bazel_tbg/b1346cddcc70d57afdaa90f7f09f9b2c/external/com_github_grpc_ecosystem_grpc_gateway/runtime/BUILD.bazel:5:11: no such package '@@org_golang_google_genproto//googleapis/api/httpbody': BUILD file not found in directory 'googleapis/api/httpbody' of external repository @@org_golang_google_genproto. Add a BUILD file to a directory to mark it as a package. and referenced by '@@com_github_grpc_ecosystem_grpc_gateway//runtime:go_default_library'
```

Since we are on the final `grpc-gateway/v1` version already[^1], we'll have to make the leap to v2 to fix this.

[^1]: https://github.com/grpc-ecosystem/grpc-gateway/releases/tag/v1.16.0
This also adds the bytestream and rpc packages from genproto, which are required.

Epic: none
Release note: None
This works around the issue discussed in cockroachdb#136367. Apparently, we have some misconfiguration or bug in gRPC v1.56.3 that makes the gRPC server seem unable to properly support HTTP2. This effectively breaks communication between CRDB nodes at these two different gRPC versions. Switch to a fork that disables the check (there is no way to disable it other than changing code).
Replace directives unconditionally override the dependency, so it's worth pointing out that the version go.mod claims is not actually what is being used.
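For illustration, the override takes roughly this shape in go.mod. The fork path and tag below are inferred from the release linked in a later commit message; treat the exact version string as an assumption.

```
// go.mod (sketch)
replace google.golang.org/grpc => github.com/cockroachdb/grpc-go v1.68.0-noalpncheck+decompsize
```

With a `replace` in effect, `go list -m google.golang.org/grpc` reports the replacement, while the `require` line keeps advertising the upstream version.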
This is less efficient since it needs to scan the entire reader, but our hand is forced by gRPC moving away from a byte slice.
See https://github.com/cockroachdb/grpc-go/releases/tag/v1.68.0-noalpncheck%2Bdecompsize: Re-instate (Decompressor).DecompressedSize optimization. This is partly for parity with how gRPC v1.56.3 worked, but it also showed up as a giant regression in allocations: we were now going through `io.Copy`, which allocates a temporary 32KiB buffer, while our payloads are often much smaller.
Force-pushed from cbe8f5b to 1301656.
Drive by: I gather that upgrading gRPC to 1.68 is no longer happening, but consider upgrading just a little bit to 1.57 in order to pick up grpc/grpc-go#6319 (*). That patch bumps the […]

(*) Or maybe you can merge your e3b137c and perhaps you don't need to upgrade grpc at all.
@andreimatei thanks, I'm giving it a try over here: #138283 (comment). Predictably, it runs into some dep problem, but I'm hoping we can sort it out (and that 1.57 is not significantly slower). I'm closing this for now.
137916: roachprod: reduce start-up lag r=tbg a=tbg

On my machine (which is in Europe), this brings `time roachprod --help` from `1.56s` down to `0.06s` under the following env vars:

```
ROACHPROD_DISABLE_UPDATE_CHECK=true
ROACHPROD_DISABLED_PROVIDERS=azure
ROACHPROD_SKIP_AWSCLI_CHECK=true
```

Under these env vars, my roachprod

- no longer invokes `aws --version` on each start (python, ~400ms)
- no longer inits azure, which is >1s for me
- doesn't list the gs bucket to check for a newer roachprod binary (~800ms; doesn't exist for OSX anyway).

A better way (but one outside of my purview) for most of these would be to add caching for each of these checks and so avoid the cost in the common case. Azure is an exception: as the (wall-clock) profile below shows, we're spending most of our time waiting for `GetTokenFromCLIWithParams` to return. It's not clear how to optimize this. (The AWS portion of the flamegraph is `aws --version`.)

![image](https://github.com/user-attachments/assets/b4677da6-c5a5-4552-b0d5-462932f1062e)

Epic: none

138283: DEPS: upgrade grpc to v1.57.2 r=tbg a=tbg

See #136278 (comment). `grpc` has gotten a little worse at allocations, but it's overall similarly fast, perhaps even a little faster in the smaller RPCs we care most about.
<details><summary>Benchmark results</summary> <p>

```
$ benchdiff --old lastmerge ./pkg/rpc -b -r 'BenchmarkGRPCPing' -d 1s -c 10
old: 3ce8f44 Merge #138561 #138779 #138793
new: 3708ee5 DEPS: add resolve hints and update packages

name                                        old time/op    new time/op    delta
GRPCPing/bytes=____256/rpc=UnaryUnary-24    126µs ± 3%   124µs ± 2%   -1.59% (p=0.035 n=9+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24  126µs ± 3%   124µs ± 1%   -1.32% (p=0.011 n=10+10)
GRPCPing/bytes=______1/rpc=UnaryUnary-24    124µs ± 4%   123µs ± 3%   ~ (p=0.315 n=10+10)
GRPCPing/bytes=______1/rpc=StreamStream-24  70.3µs ± 3%  70.8µs ± 2%  ~ (p=0.393 n=10+10)
GRPCPing/bytes=____256/rpc=StreamStream-24  74.5µs ± 3%  75.1µs ± 2%  ~ (p=0.105 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24    123µs ± 6%   120µs ± 4%   ~ (p=0.661 n=10+9)
GRPCPing/bytes=___1024/rpc=StreamStream-24  67.4µs ± 8%  67.4µs ± 6%  ~ (p=0.720 n=10+9)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24    133µs ± 5%   133µs ± 4%   ~ (p=0.986 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24  73.9µs ± 1%  74.6µs ± 2%  ~ (p=0.234 n=8+8)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24    150µs ± 2%   151µs ± 3%   ~ (p=0.182 n=9+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24  97.4µs ±10%  95.3µs ±10%  ~ (p=0.393 n=10+10)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24    175µs ± 1%   176µs ± 2%   ~ (p=0.720 n=9+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24    252µs ± 1%   253µs ± 1%   ~ (p=0.315 n=9+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24  190µs ± 1%   189µs ± 2%   ~ (p=0.497 n=9+10)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24    363µs ± 1%   366µs ± 1%   ~ (p=0.079 n=10+9)
GRPCPing/bytes=__32768/rpc=StreamStream-24  305µs ± 3%   305µs ± 1%   ~ (p=0.579 n=10+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24    512µs ± 2%   515µs ± 1%   ~ (p=0.095 n=9+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24  449µs ± 1%   452µs ± 1%   ~ (p=0.059 n=9+8)
GRPCPing/bytes=_262144/rpc=UnaryUnary-24    1.48ms ± 3%  1.48ms ± 2%  ~ (p=0.739 n=10+10)
GRPCPing/bytes=_262144/rpc=StreamStream-24  1.42ms ± 1%  1.41ms ± 2%  ~ (p=0.182 n=9+10)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24    5.90ms ± 2%  5.86ms ± 1%  ~ (p=0.278 n=10+9)
GRPCPing/bytes=1048576/rpc=StreamStream-24  5.81ms ± 2%  5.84ms ± 3%  ~ (p=0.631 n=10+10)

name                                        old speed      new speed      delta
GRPCPing/bytes=____256/rpc=UnaryUnary-24    4.44MB/s ± 3%  4.51MB/s ± 2%  +1.58% (p=0.033 n=9+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24  130MB/s ± 3%   132MB/s ± 1%   +1.32% (p=0.010 n=10+10)
GRPCPing/bytes=______1/rpc=UnaryUnary-24    386kB/s ± 4%   391kB/s ± 3%   ~ (p=0.378 n=10+10)
GRPCPing/bytes=______1/rpc=StreamStream-24  682kB/s ± 3%   676kB/s ± 2%   ~ (p=0.189 n=10+9)
GRPCPing/bytes=____256/rpc=StreamStream-24  7.52MB/s ± 3%  7.46MB/s ± 2%  ~ (p=0.100 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24    17.1MB/s ± 6%  17.4MB/s ± 4%  ~ (p=0.645 n=10+9)
GRPCPing/bytes=___1024/rpc=StreamStream-24  31.1MB/s ± 8%  31.1MB/s ± 6%  ~ (p=0.720 n=10+9)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24    31.1MB/s ± 5%  31.2MB/s ± 4%  ~ (p=0.986 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24  56.1MB/s ± 1%  55.6MB/s ± 2%  ~ (p=0.224 n=8+8)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24    55.1MB/s ± 2%  54.6MB/s ± 3%  ~ (p=0.189 n=9+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24  85.1MB/s ±11%  87.0MB/s ±11%  ~ (p=0.393 n=10+10)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24    93.7MB/s ± 1%  93.5MB/s ± 2%  ~ (p=0.720 n=9+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24    130MB/s ± 1%   130MB/s ± 1%   ~ (p=0.305 n=9+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24  173MB/s ± 1%   173MB/s ± 2%   ~ (p=0.497 n=9+10)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24    180MB/s ± 1%   179MB/s ± 1%   ~ (p=0.079 n=10+9)
GRPCPing/bytes=__32768/rpc=StreamStream-24  215MB/s ± 2%   215MB/s ± 1%   ~ (p=0.579 n=10+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24    256MB/s ± 2%   255MB/s ± 1%   ~ (p=0.095 n=9+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24  292MB/s ± 1%   290MB/s ± 1%   ~ (p=0.059 n=9+8)
GRPCPing/bytes=_262144/rpc=UnaryUnary-24    353MB/s ± 3%   353MB/s ± 2%   ~ (p=0.447 n=10+9)
GRPCPing/bytes=_262144/rpc=StreamStream-24  369MB/s ± 1%   371MB/s ± 2%   ~ (p=0.182 n=9+10)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24    355MB/s ± 2%   358MB/s ± 1%   ~ (p=0.278 n=10+9)
GRPCPing/bytes=1048576/rpc=StreamStream-24  361MB/s ± 2%   359MB/s ± 3%   ~ (p=0.631 n=10+10)

name                                        old alloc/op   new alloc/op   delta
GRPCPing/bytes=______1/rpc=UnaryUnary-24    16.9kB ± 1%  16.9kB ± 3%  ~ (p=0.579 n=10+10)
GRPCPing/bytes=____256/rpc=UnaryUnary-24    19.8kB ± 2%  19.9kB ± 2%  ~ (p=0.755 n=10+10)
GRPCPing/bytes=____256/rpc=StreamStream-24  7.35kB ± 2%  7.43kB ± 2%  ~ (p=0.052 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24    29.8kB ± 2%  29.8kB ± 1%  ~ (p=0.853 n=10+10)
GRPCPing/bytes=___1024/rpc=StreamStream-24  17.7kB ± 1%  17.7kB ± 1%  ~ (p=0.796 n=10+10)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24    43.2kB ± 1%  43.0kB ± 1%  ~ (p=0.218 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24  31.0kB ± 0%  31.1kB ± 1%  ~ (p=0.278 n=9+10)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24    73.0kB ± 1%  73.2kB ± 1%  ~ (p=0.393 n=10+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24  61.6kB ± 1%  61.7kB ± 0%  ~ (p=0.573 n=10+8)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24    127kB ± 0%   127kB ± 1%   ~ (p=0.393 n=10+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24  118kB ± 1%   118kB ± 0%   ~ (p=0.796 n=10+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24    237kB ± 1%   237kB ± 1%   ~ (p=0.579 n=10+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24  227kB ± 1%   227kB ± 1%   ~ (p=0.481 n=10+10)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24    500kB ± 1%   500kB ± 1%   ~ (p=0.912 n=10+10)
GRPCPing/bytes=__32768/rpc=StreamStream-24  492kB ± 0%   492kB ± 0%   ~ (p=0.968 n=9+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24    873kB ± 0%   872kB ± 0%   ~ (p=0.780 n=9+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24  868kB ± 0%   868kB ± 0%   ~ (p=1.000 n=9+9)
GRPCPing/bytes=_262144/rpc=UnaryUnary-24    3.50MB ± 0%  3.51MB ± 0%  ~ (p=0.436 n=10+10)
GRPCPing/bytes=_262144/rpc=StreamStream-24  3.49MB ± 0%  3.50MB ± 0%  ~ (p=0.436 n=10+10)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24    13.5MB ± 0%  13.5MB ± 0%  ~ (p=0.515 n=8+10)
GRPCPing/bytes=1048576/rpc=StreamStream-24  13.5MB ± 0%  13.5MB ± 0%  ~ (p=0.549 n=10+9)
GRPCPing/bytes=______1/rpc=StreamStream-24  4.08kB ± 3%  4.18kB ± 3%  +2.28% (p=0.008 n=9+10)

name                                        old allocs/op  new allocs/op  delta
GRPCPing/bytes=_262144/rpc=UnaryUnary-24    282 ± 4%   286 ± 4%   ~ (p=0.223 n=10+10)
GRPCPing/bytes=_262144/rpc=StreamStream-24  147 ± 3%   149 ± 3%   ~ (p=0.053 n=9+8)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24    510 ± 2%   513 ± 3%   ~ (p=0.656 n=8+9)
GRPCPing/bytes=1048576/rpc=StreamStream-24  370 ± 6%   377 ± 3%   ~ (p=0.168 n=9+9)
GRPCPing/bytes=____256/rpc=UnaryUnary-24    183 ± 0%   184 ± 0%   +0.71% (p=0.000 n=8+10)
GRPCPing/bytes=______1/rpc=UnaryUnary-24    183 ± 0%   184 ± 0%   +0.77% (p=0.000 n=10+8)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24    211 ± 0%   213 ± 0%   +0.95% (p=0.000 n=10+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24    195 ± 0%   197 ± 0%   +1.03% (p=0.000 n=10+10)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24    184 ± 0%   186 ± 0%   +1.09% (p=0.000 n=10+10)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24    183 ± 0%   185 ± 0%   +1.09% (p=0.000 n=10+10)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24    183 ± 0%   185 ± 0%   +1.09% (p=0.000 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24    182 ± 0%   184 ± 0%   +1.10% (p=0.000 n=10+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24    219 ± 0%   221 ± 0%   +1.10% (p=0.000 n=10+8)
GRPCPing/bytes=__32768/rpc=StreamStream-24  75.0 ± 0%  77.0 ± 0%  +2.67% (p=0.000 n=10+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24  83.0 ± 0%  85.3 ± 1%  +2.77% (p=0.000 n=9+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24  57.0 ± 0%  59.0 ± 0%  +3.51% (p=0.000 n=10+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24  51.0 ± 0%  53.0 ± 0%  +3.92% (p=0.000 n=10+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24  49.0 ± 0%  51.0 ± 0%  +4.08% (p=0.000 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24  48.0 ± 0%  50.0 ± 0%  +4.17% (p=0.000 n=10+10)
GRPCPing/bytes=______1/rpc=StreamStream-24  47.0 ± 0%  49.0 ± 0%  +4.26% (p=0.000 n=10+10)
GRPCPing/bytes=____256/rpc=StreamStream-24  47.0 ± 0%  49.0 ± 0%  +4.26% (p=0.000 n=10+10)
GRPCPing/bytes=___1024/rpc=StreamStream-24  47.0 ± 0%  49.0 ± 0%  +4.26% (p=0.000 n=10+10)
```

</p> </details>

Epic: None
Release note: None

138939: changefeedccl/kvfeed: pass consumer id correctly r=andyyang890,stevendanna a=wenyihu6

Previously, we introduced the concept of a consumer ID to prevent a single changefeed job from over-consuming the catch-up scan quota and blocking other consumers from making progress on the server side. However, the changefeed client-side code requires the consumer ID to be passed again in the rangefeed options during rangefeedFactory.Run. This was missing in the previous PR, causing the changefeed job ID to default to zero. This patch fixes the issue by ensuring the consumer ID is correctly passed in the rangefeed options.

Related: #133789
Release note: none
Epic: none

Co-authored-by: Tobias Grieger <[email protected]>
Co-authored-by: Wenyi Hu <[email protected]>
TODO: `DecompressedSize` no longer being used (ref)

From v1.56.3 to v1.68.0. Full commit list below[^1], created by script[^2].
The main benefit of the upgrade is the great work they did to reuse memory (i.e. reduce memory pressure). Part of this is internal, but one significant new piece is exposed through the `CodecV2` interface; we currently implement a `CodecV0`. They also moved to protov2 internally, so their default `CodecV2` would be slow for us (due to the need to "dress up" the protov1 messages as v2, if it would work at all).

Details
Here's where their default new codec transforms protov1 messages:

/encoding/proto/proto.go#L69-L70

gRPC internally operates on protoV2 messages. We would hand it v1 messages, meaning `messageV2Of` will call this code, which wraps the message in an interface. I think there's reflection involved. This is probably not efficient. This conversion is new as of grpc/grpc-go#6965, shortly after gRPC moved to protoV2 internally.
Here's what `CodecV2` looks like:

/encoding/encoding_v2.go#L31-L39

Note the `mem.BufferSlice`. So basically, whatever the implementation (that's our code) needs to allocate, it can put into a `mem.BufferSlice`, which one can get like this:

/encoding/proto/proto.go#L57-L58

Then, when gRPC is done with the buffer, it will release it back into the pool. This seems pretty nice, but it means that our actual proto unmarshaling code (rather, gogoproto's) would need to plumb down this buffer pool. In their default v2 codec, they run into this same issue with google-protobuf:

/encoding/proto/proto.go#L75-L80

Our codec is here (a legacy V1 codec):

/pkg/rpc/codec.go#L23-L42
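The borrow-and-return flow described above can be sketched with a toy pool. This is only an illustration of the lifecycle (codec borrows, transport returns); the real `mem` package has a richer, tiered API, and the names below are made up.

```go
package main

import (
	"fmt"
	"sync"
)

// pool is a toy fixed-size buffer pool illustrating the mem.BufferSlice
// idea: the codec borrows a buffer to marshal into, and the transport
// returns it once the bytes are on the wire.
type pool struct{ p sync.Pool }

func newPool(size int) *pool {
	return &pool{p: sync.Pool{
		New: func() any { b := make([]byte, size); return &b },
	}}
}

func (p *pool) get() *[]byte  { return p.p.Get().(*[]byte) }
func (p *pool) put(b *[]byte) { p.p.Put(b) }

func main() {
	p := newPool(1024)
	buf := p.get()          // codec: borrow a buffer to marshal into
	n := copy(*buf, "ping") // stand-in for proto marshaling
	fmt.Println(string((*buf)[:n]))
	p.put(buf) // transport: done sending, return to the pool
}
```

The catch noted above is that for this to pay off, the marshaling code itself (gogoproto, in our case) has to write into the borrowed buffer rather than allocating its own.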
Perf-related commits[^3] below.

- `mem` package to facilitate memory reuse grpc/grpc-go#7432
- `RecvBufferPool` deactivation issues grpc/grpc-go#6766
- `mem.BufferSlice` instead of `[]byte` grpc/grpc-go#7356
- `pretty.ToJSON` and move code outside of lock grpc/grpc-go#7132

Closes #134971.
Epic: CRDB-43584
Footnotes

[^1]: https://github.com/cockroachdb/cockroach/pull/136278#issuecomment-2503756022
[^2]: https://gist.github.com/tbg/c518ba3844f94abf4fff826f13be5300
[^3]:
    ```
    git log ^v1.56.3 v1.68.0 --grep pool --grep reuse --grep memory --grep perf --grep alloc --grep bench --oneline | sed -e 's/#/grpc\/grpc-go#/g' | sed -E -e 's~^([0-9a-f]+)~\[\1\](https://github.com/grpc/grpc-go/commit/\1)~' | sed -e 's/^/- /'
    ```
    and also read the release notes for all intermediate releases.