…76591

75809: [CRDB-12226] server, ui: display circuit breakers in problem ranges and range status r=Santamaura a=Santamaura

This PR adds changes to the reports/problemranges and reports/range pages.
Ranges with replicas that have a circuit breaker will show up as problem ranges and
the circuit breaker error will show up as a row on the status page.

Release note (ui change): display circuit breakers in problem ranges and range status

Problem Ranges page:
![Screen Shot 2022-02-08 at 4 57 51 PM](https://user-images.githubusercontent.com/17861665/153082648-6c03d195-e395-456a-be00-55ad24863752.png)

Range status page:
![Screen Shot 2022-02-08 at 4 57 34 PM](https://user-images.githubusercontent.com/17861665/153082705-cbbe5507-e81d-49d7-a3f7-21b4c84226c2.png)



76278: Add cluster version as feature gate for block properties r=dt,erikgrinaker,jbowens a=nicktrav

Two commits here - the first adds a new cluster version, the second makes use of the cluster version as a feature gate (and updates various call sites all over the place).

---

pkg/clusterversion: add new version as feature gate for block properties
Prior to this change, the block properties SSTable-level feature was
enabled in a single cluster version. This introduced a subtle race:
while nodes are being updated to
`PebbleFormatBlockPropertyCollector`, there is a brief period where not
all nodes are at the same cluster version, and thus not all stores are
at the same store version. If nodes at the newer version write SSTables
that make use of block properties, and these tables are consumed by
nodes that have yet to be updated, the older nodes could panic when
attempting to read tables in a format they do not yet understand.

While this race is academic, given that a) there are now subsequent
cluster versions that act as barriers during the migration, and b) block
properties were disabled in 1a8fb57, this patch addresses the race by
adding a second cluster version.
`EnablePebbleFormatVersionBlockProperties` acts as a barrier and a
feature gate. The migration framework guarantees that any node at this
newer version is part of a cluster that has already run the necessary
migrations for the older version, and has therefore ratcheted the
format major version in every store and enabled the block properties
feature across all nodes.

Add additional documentation in `pebble.go` that details how to make use
of the two-version pattern for future table-level version changes.

---

pkg/storage: make MakeIngestionWriterOptions version aware
With the introduction of block properties, and soon, range keys, which
introduce backward-incompatible changes at the SSTable level, all nodes
in a cluster must have a sufficient store version to avoid runtime
incompatibilities.

Update `storage.MakeIngestionWriterOptions` to take a `context.Context`
and `cluster.Settings` as parameters, which allows determining whether
a given cluster version is active (via
`(clusterversion.Handle).IsActive()`). This gates the enabling /
disabling of block properties (and soon, range keys) on all nodes being
at a sufficient cluster version.

Update various call-sites to pass in the `context.Context` and
`cluster.Settings`.
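
Roughly, the version-aware gate might look like the following sketch (the table formats, option fields, and function shape here are illustrative assumptions, not the exact code in this commit):

```go
package storage

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/clusterversion"
	"github.com/cockroachdb/cockroach/pkg/settings/cluster"
	"github.com/cockroachdb/pebble/sstable"
)

// makeIngestionWriterOptionsSketch illustrates gating block properties on the
// feature-gate cluster version. The field names and table formats used here
// are assumptions for illustration only.
func makeIngestionWriterOptionsSketch(
	ctx context.Context, st *cluster.Settings,
) sstable.WriterOptions {
	opts := sstable.WriterOptions{
		// Default to an older table format that every node understands.
		TableFormat: sstable.TableFormatRocksDBv2,
	}
	// If the feature-gate version is active, every node has already ratcheted
	// its store's format major version, so tables written with block
	// properties are readable cluster-wide.
	if st.Version.IsActive(ctx, clusterversion.EnablePebbleFormatVersionBlockProperties) {
		opts.TableFormat = sstable.TableFormatPebblev1
	}
	return opts
}
```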

---

76348: ui: downsample SQL transaction metrics using MAX r=maryliag a=dhartunian

Previously, we were using the default downsampling behavior of the
timeseries query engine for "Open SQL Transactions" and "Active SQL
Statements"  on the metrics page in DB console. This led to confusion
when zooming in on transaction spikes since the spike would get larger
as the zoom got tighter.

This PR changes the aggregation function to use MAX to prevent this
confusion.

Resolves: #71827

Release note (ui change): Open SQL Transactions and Active SQL
Transactions are downsampled using MAX instead of AVG and will more
accurately reflect narrow spikes in transaction counts when looking at
downsampled data.

76414: spanconfig: teach the KVAccessor about system span configurations r=arulajmani a=arulajmani

The first 3 commits are from #76219; this one's quite small -- mostly just tests.

----

This patch teaches the KVAccessor to update and get system span
configurations.

Release note: None

76538: ui: Use liveness info to populate decommissioned node lists r=zachlite a=zachlite

Previously, the decommissioned node lists considered node status entries
to determine decommissioning and decommissioned status. This changed in #56529,
resulting in empty lists. Now, the node's liveness entry is considered
and these lists are correctly populated.

Release note (bug fix): The list of recently decommissioned nodes
and the historical list of decommissioned nodes now correctly display
decommissioned nodes.

76544: builtins: add rehome_row to DistSQLBlocklist r=mgartner,otan a=rafiss

fixes #76153

This builtin always needs to run on the gateway node.

Release note: None

76546: build: display pebble git SHA in GitHub messages r=rickystewart a=nicktrav

Use the short form of the Git SHA from the go.mod-style version in
DEPS.bzl as the Pebble commit. This ensures that failure messages
created by TeamCity link to a GitHub page that renders correctly.

Release note: None

76550: gen/genbzl: general improvements r=ajwerner a=ajwerner

This change does a few things:

 * It reworks the queries in terms of each other in-memory. This is better than
   the previous iteration whereby it'd generate the results and then rely on
   the output of that query. Instead, we just build up bigger query expressions
   and pass them to bazel using the --query_file flag (see the sketch below).
 * It avoids exploring the pkg/ui directory (and the pkg/gen directory) because
   those can cause problems. The pkg/ui directory ends up bringing in npm,
   which hurts.
 * It stops rewriting the files before executing the queries. It no longer
   needs to rewrite them up front because they aren't referenced by later
   queries.
 * It removes the excluded target, which was problematic because those files
   weren't properly visible.
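
For illustration, composing a query expression in memory and handing it to bazel via `--query_file` could look like this sketch (the query strings and helpers are assumptions, not the actual genbzl code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runQuery writes the query expression to a temp file and passes it to bazel
// via --query_file, avoiding command-line limits for large expressions.
func runQuery(expr string) ([]byte, error) {
	f, err := os.CreateTemp("", "bazel-query-*.txt")
	if err != nil {
		return nil, err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(expr); err != nil {
		return nil, err
	}
	if err := f.Close(); err != nil {
		return nil, err
	}
	return exec.Command("bazel", "query", "--query_file="+f.Name()).Output()
}

func main() {
	// Build larger expressions out of smaller ones in memory, rather than
	// running a query and feeding its textual output into the next one.
	goTests := `kind("go_test", //pkg/...)`
	// Skip directories that cause problems, e.g. pkg/ui (pulls in npm).
	expr := goTests + ` - //pkg/ui/...`
	out, err := runQuery(expr)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```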

Fixes #76521
Fixes #76503

Release note: None

76562: ccl/sqlproxyccl: update PeekMsg to return message size instead of body size r=JeffSwenson a=jaylim-crl

Informs #76000. Follow-up to #76006.

Previously, PeekMsg returned the body size (excluding the header size), which
is a bit awkward from an API point of view because most callers of PeekMsg
immediately added the header size back to the returned size. This commit
cleans up the API design by making PeekMsg return the message size instead,
i.e. header inclusive. Returning the message size also makes it consistent
with the ReadMsg API, since that returns the entire message.
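
As a rough sketch of the caller-side effect (assuming the `pgInterceptor` type in this package), sizing logic no longer needs to re-add the header:

```go
// peekThenRead sketches a caller after this change: the size returned by
// PeekMsg is already header-inclusive, so there is no header arithmetic
// before reading the message.
func peekThenRead(p *pgInterceptor) ([]byte, error) {
	_, size, err := p.PeekMsg()
	if err != nil {
		return nil, err
	}
	// Previously: msgSize := size + pgHeaderSizeBytes
	// Now `size` already covers the type byte, length field, and body, and
	// matches the length of the slice returned by ReadMsg.
	_ = size
	return p.ReadMsg()
}
```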

Release note: None

76591: bazel: update shebang line in `sql-gen.sh` r=rail a=rickystewart

Release note: None

Co-authored-by: Santamaura <[email protected]>
Co-authored-by: Nick Travers <[email protected]>
Co-authored-by: David Hartunian <[email protected]>
Co-authored-by: arulajmani <[email protected]>
Co-authored-by: Zach Lite <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: Jay <[email protected]>
Co-authored-by: Ricky Stewart <[email protected]>
10 people committed Feb 15, 2022
11 parents b16a845 + 0ffc720 + 6e2a057 + 86008a8 + f78be51 + 7179469 + 389dcef + f142110 + 80b48cf + 2dfd748 + e55fed2 commit 3cb7eb0
Showing 68 changed files with 961 additions and 864 deletions.
@@ -20,8 +20,11 @@ bazel run @go_sdk//:bin/go get github.com/cockroachdb/pebble@latest
NEW_DEPS_BZL_CONTENT=$(bazel run //pkg/cmd/mirror)
echo "$NEW_DEPS_BZL_CONTENT" > DEPS.bzl

PEBBLE_SUM=$(grep 'version =' DEPS.bzl | cut -d'"' -f2)
echo "Pebble module sum: $PEBBLE_SUM"
# Use the Pebble SHA from the version in the modified DEPS.bzl file.
# Note that we need to pluck the Git SHA from the go.sum-style version, i.e.
# v0.0.0-20220214174839-6af77d5598c9 => 6af77d5598c9
PEBBLE_SHA=$(grep 'version =' DEPS.bzl | cut -d'"' -f2 | cut -d'-' -f3)
echo "Pebble module Git SHA: $PEBBLE_SHA"

BAZEL_SUPPORT_EXTRA_DOCKER_ARGS="-e BUILD_VCS_NUMBER=$PEBBLE_SUM -e GITHUB_API_TOKEN -e GITHUB_REPO -e TC_BUILD_BRANCH -e TC_BUILD_ID -e TC_SERVER_URL" \
BAZEL_SUPPORT_EXTRA_DOCKER_ARGS="-e BUILD_VCS_NUMBER=$PEBBLE_SHA -e GITHUB_API_TOKEN -e GITHUB_REPO -e TC_BUILD_BRANCH -e TC_BUILD_ID -e TC_SERVER_URL" \
run_bazel build/teamcity/cockroach/nightlies/pebble_nightly_metamorphic_impl.sh
4 changes: 4 additions & 0 deletions docs/generated/http/full.md
@@ -1314,6 +1314,7 @@ RangeProblems describes issues reported by a range. For internal use only.
| no_lease | [bool](#cockroach.server.serverpb.RaftDebugResponse-bool) | | | [reserved](#support-status) |
| quiescent_equals_ticking | [bool](#cockroach.server.serverpb.RaftDebugResponse-bool) | | Quiescent ranges do not tick by definition, but we track this in two different ways and suspect that they're getting out of sync. If the replica's quiescent flag doesn't agree with the store's list of replicas that are ticking, warn about it. | [reserved](#support-status) |
| raft_log_too_large | [bool](#cockroach.server.serverpb.RaftDebugResponse-bool) | | When the raft log is too large, it can be a symptom of other issues. | [reserved](#support-status) |
| circuit_breaker_error | [bool](#cockroach.server.serverpb.RaftDebugResponse-bool) | | | [reserved](#support-status) |



@@ -1520,6 +1521,7 @@ RangeProblems describes issues reported by a range. For internal use only.
| no_lease | [bool](#cockroach.server.serverpb.RangesResponse-bool) | | | [reserved](#support-status) |
| quiescent_equals_ticking | [bool](#cockroach.server.serverpb.RangesResponse-bool) | | Quiescent ranges do not tick by definition, but we track this in two different ways and suspect that they're getting out of sync. If the replica's quiescent flag doesn't agree with the store's list of replicas that are ticking, warn about it. | [reserved](#support-status) |
| raft_log_too_large | [bool](#cockroach.server.serverpb.RangesResponse-bool) | | When the raft log is too large, it can be a symptom of other issues. | [reserved](#support-status) |
| circuit_breaker_error | [bool](#cockroach.server.serverpb.RangesResponse-bool) | | | [reserved](#support-status) |



@@ -3099,6 +3101,7 @@ Support status: [reserved](#support-status)
| overreplicated_range_ids | [int64](#cockroach.server.serverpb.ProblemRangesResponse-int64) | repeated | | [reserved](#support-status) |
| quiescent_equals_ticking_range_ids | [int64](#cockroach.server.serverpb.ProblemRangesResponse-int64) | repeated | | [reserved](#support-status) |
| raft_log_too_large_range_ids | [int64](#cockroach.server.serverpb.ProblemRangesResponse-int64) | repeated | | [reserved](#support-status) |
| circuit_breaker_error_range_ids | [int64](#cockroach.server.serverpb.ProblemRangesResponse-int64) | repeated | | [reserved](#support-status) |



@@ -3394,6 +3397,7 @@ RangeProblems describes issues reported by a range. For internal use only.
| no_lease | [bool](#cockroach.server.serverpb.RangeResponse-bool) | | | [reserved](#support-status) |
| quiescent_equals_ticking | [bool](#cockroach.server.serverpb.RangeResponse-bool) | | Quiescent ranges do not tick by definition, but we track this in two different ways and suspect that they're getting out of sync. If the replica's quiescent flag doesn't agree with the store's list of replicas that are ticking, warn about it. | [reserved](#support-status) |
| raft_log_too_large | [bool](#cockroach.server.serverpb.RangeResponse-bool) | | When the raft log is too large, it can be a symptom of other issues. | [reserved](#support-status) |
| circuit_breaker_error | [bool](#cockroach.server.serverpb.RangeResponse-bool) | | | [reserved](#support-status) |



2 changes: 1 addition & 1 deletion docs/generated/settings/settings-for-tenants.txt
@@ -176,4 +176,4 @@ trace.debug.enable boolean false if set, traces for recent requests can be seen
trace.jaeger.agent string the address of a Jaeger agent to receive traces using the Jaeger UDP Thrift protocol, as <host>:<port>. If no port is specified, 6381 will be used.
trace.opentelemetry.collector string address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.
trace.zipkin.collector string the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.
version version 21.2-62 set the active cluster version in the format '<major>.<minor>'
version version 21.2-64 set the active cluster version in the format '<major>.<minor>'
2 changes: 1 addition & 1 deletion docs/generated/settings/settings.html
@@ -188,6 +188,6 @@
<tr><td><code>trace.jaeger.agent</code></td><td>string</td><td><code></code></td><td>the address of a Jaeger agent to receive traces using the Jaeger UDP Thrift protocol, as <host>:<port>. If no port is specified, 6381 will be used.</td></tr>
<tr><td><code>trace.opentelemetry.collector</code></td><td>string</td><td><code></code></td><td>address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.</td></tr>
<tr><td><code>trace.zipkin.collector</code></td><td>string</td><td><code></code></td><td>the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>21.2-62</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>21.2-64</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
</tbody>
</table>
4 changes: 4 additions & 0 deletions docs/generated/swagger/spec.json
@@ -1027,6 +1027,10 @@
"type": "object",
"title": "RangeProblems describes issues reported by a range. For internal use only.",
"properties": {
"circuit_breaker_error": {
"type": "boolean",
"x-go-name": "CircuitBreakerError"
},
"leader_not_lease_holder": {
"type": "boolean",
"x-go-name": "LeaderNotLeaseHolder"
@@ -45,7 +45,7 @@ func TestBackendInterceptor(t *testing.T) {
typ, size, err := bi.PeekMsg()
require.NoError(t, err)
require.Equal(t, pgwirebase.ClientMsgSimpleQuery, typ)
require.Equal(t, 9, size)
require.Equal(t, 14, size)

bi.Close()
typ, size, err = bi.PeekMsg()
34 changes: 19 additions & 15 deletions pkg/ccl/sqlproxyccl/interceptor/base.go
@@ -11,6 +11,7 @@ package interceptor
import (
"encoding/binary"
"io"
"math"

"github.com/cockroachdb/cockroach/pkg/util"
"github.com/cockroachdb/errors"
@@ -85,9 +86,9 @@ func newPgInterceptor(src io.Reader, dst io.Writer, bufSize int) (*pgInterceptor

// PeekMsg returns the header of the current pgwire message without advancing
// the interceptor. On return, err == nil if and only if the entire header can
// be read. Note that size corresponds to the body size, and does not account
// for the size field itself. This will return ErrProtocolError if the packets
// are malformed.
// be read. The returned size corresponds to the entire message size, which
// includes the header type and body length. This will return ErrProtocolError
// if the packets are malformed.
//
// If the interceptor is closed, PeekMsg returns ErrInterceptorClosed.
func (p *pgInterceptor) PeekMsg() (typ byte, size int, err error) {
@@ -103,12 +104,16 @@ func (p *pgInterceptor) PeekMsg() (typ byte, size int, err error) {
typ = p.buf[p.readPos]
size = int(binary.BigEndian.Uint32(p.buf[p.readPos+1:]))

// Size has to be at least itself based on pgwire's protocol.
if size < 4 {
// Size has to be at least itself based on pgwire's protocol. Theoretically,
// math.MaxInt32 is valid since the body's length is stored within 4 bytes,
// but we'll just enforce that for simplicity (because we're adding 1 below).
if size < 4 || size >= math.MaxInt32 {
return 0, 0, ErrProtocolError
}

return typ, size - 4, nil
// Add 1 to size to account for type. We don't need to add 4 (int length) to
// it because size is already inclusive of that.
return typ, size + 1, nil
}

// WriteMsg writes the given bytes to the writer dst. If err != nil and a Write
@@ -148,28 +153,27 @@ func (p *pgInterceptor) ReadMsg() (msg []byte, err error) {
return nil, ErrInterceptorClosed
}

// Peek header of the current message for body size.
// Peek header of the current message for message size.
_, size, err := p.PeekMsg()
if err != nil {
return nil, err
}
msgSizeBytes := pgHeaderSizeBytes + size

// Can the entire message fit into the buffer?
if msgSizeBytes <= len(p.buf) {
if err := p.ensureNextNBytes(msgSizeBytes); err != nil {
if size <= len(p.buf) {
if err := p.ensureNextNBytes(size); err != nil {
// Possibly due to a timeout or context cancellation.
return nil, err
}

// Return a slice to the internal buffer to avoid an allocation here.
retBuf := p.buf[p.readPos : p.readPos+msgSizeBytes]
p.readPos += msgSizeBytes
retBuf := p.buf[p.readPos : p.readPos+size]
p.readPos += size
return retBuf, nil
}

// Message cannot fit, so we will have to allocate.
msg = make([]byte, msgSizeBytes)
msg = make([]byte, size)

// Copy bytes which have already been read.
n := copy(msg, p.buf[p.readPos:p.writePos])
@@ -209,15 +213,15 @@ func (p *pgInterceptor) ForwardMsg() (n int, err error) {
return 0, ErrInterceptorClosed
}

// Retrieve header of the current message for body size.
// Retrieve header of the current message for message size.
_, size, err := p.PeekMsg()
if err != nil {
return 0, err
}

// Handle overflows as current message may not fit in the current buffer.
startPos := p.readPos
endPos := startPos + pgHeaderSizeBytes + size
endPos := startPos + size
remainingBytes := 0
if endPos > p.writePos {
remainingBytes = endPos - p.writePos
70 changes: 64 additions & 6 deletions pkg/ccl/sqlproxyccl/interceptor/base_test.go
@@ -10,7 +10,9 @@ package interceptor

import (
"bytes"
"encoding/binary"
"io"
"math"
"testing"
"testing/iotest"
"unsafe"
@@ -78,10 +80,28 @@ func TestPGInterceptor_PeekMsg(t *testing.T) {
require.Equal(t, 0, size)
})

t.Run("protocol error", func(t *testing.T) {
data := make([]byte, 10)
t.Run("protocol error/size=0", func(t *testing.T) {
var data [10]byte

buf := new(bytes.Buffer)
_, err := buf.Write(data[:])
require.NoError(t, err)

pgi, err := newPgInterceptor(buf, nil /* dst */, 10)
require.NoError(t, err)

typ, size, err := pgi.PeekMsg()
require.EqualError(t, err, ErrProtocolError.Error())
require.Equal(t, byte(0), typ)
require.Equal(t, 0, size)
})

t.Run("protocol error/size=3", func(t *testing.T) {
var data [5]byte
binary.BigEndian.PutUint32(data[1:5], uint32(3))

buf := new(bytes.Buffer)
_, err := buf.Write(data)
_, err := buf.Write(data[:])
require.NoError(t, err)

pgi, err := newPgInterceptor(buf, nil /* dst */, 10)
@@ -93,9 +113,47 @@ func TestPGInterceptor_PeekMsg(t *testing.T) {
require.Equal(t, 0, size)
})

t.Run("protocol error/size=math.MaxInt32", func(t *testing.T) {
var data [5]byte
binary.BigEndian.PutUint32(data[1:5], uint32(math.MaxInt32))

buf := new(bytes.Buffer)
_, err := buf.Write(data[:])
require.NoError(t, err)

pgi, err := newPgInterceptor(buf, nil /* dst */, 10)
require.NoError(t, err)

typ, size, err := pgi.PeekMsg()
require.EqualError(t, err, ErrProtocolError.Error())
require.Equal(t, byte(0), typ)
require.Equal(t, 0, size)
})

t.Run("successful without body", func(t *testing.T) {
// Use 4 bytes to indicate no body.
var data [5]byte
data[0] = 'A'
binary.BigEndian.PutUint32(data[1:5], uint32(4))

buf := new(bytes.Buffer)
_, err := buf.Write(data[:])
require.NoError(t, err)

pgi, err := newPgInterceptor(buf, nil /* dst */, 10)
require.NoError(t, err)

typ, size, err := pgi.PeekMsg()
require.NoError(t, err)
require.Equal(t, byte('A'), typ)
require.Equal(t, 5, size)
require.Equal(t, 0, buf.Len())
})

t.Run("successful", func(t *testing.T) {
buf := new(bytes.Buffer)
_, err := buf.Write((&pgproto3.Query{String: "SELECT 1"}).Encode(nil))
msgBytes := (&pgproto3.Query{String: "SELECT 1"}).Encode(nil)
_, err := buf.Write(msgBytes)
require.NoError(t, err)

pgi, err := newPgInterceptor(buf, nil /* dst */, 10)
@@ -104,14 +162,14 @@ func TestPGInterceptor_PeekMsg(t *testing.T) {
typ, size, err := pgi.PeekMsg()
require.NoError(t, err)
require.Equal(t, byte(pgwirebase.ClientMsgSimpleQuery), typ)
require.Equal(t, 9, size)
require.Equal(t, len(msgBytes), size)
require.Equal(t, 4, buf.Len())

// Invoking Peek should not advance the interceptor.
typ, size, err = pgi.PeekMsg()
require.NoError(t, err)
require.Equal(t, byte(pgwirebase.ClientMsgSimpleQuery), typ)
require.Equal(t, 9, size)
require.Equal(t, len(msgBytes), size)
require.Equal(t, 4, buf.Len())
})
}
@@ -45,7 +45,7 @@ func TestFrontendInterceptor(t *testing.T) {
typ, size, err := fi.PeekMsg()
require.NoError(t, err)
require.Equal(t, pgwirebase.ServerMsgReady, typ)
require.Equal(t, 1, size)
require.Equal(t, 6, size)

fi.Close()
typ, size, err = fi.PeekMsg()
14 changes: 14 additions & 0 deletions pkg/clusterversion/cockroach_versions.go
@@ -269,6 +269,16 @@ const (
DontProposeWriteTimestampForLeaseTransfers
// TenantSettingsTable adds the system table for tracking tenant usage.
TenantSettingsTable
// EnablePebbleFormatVersionBlockProperties enables a new Pebble SSTable
// format version for block property collectors.
// NB: this cluster version is paired with PebbleFormatBlockPropertyCollector
// in a two-phase migration. The first cluster version acts as a gate for
// updating the format major version on all stores, while the second cluster
// version is used as a feature gate. A node in a cluster that sees the second
// version is guaranteed to have seen the first version, and therefore has an
// engine running at the required format major version, as do all other nodes
// in the cluster.
EnablePebbleFormatVersionBlockProperties

// *************************************************
// Step (1): Add new versions here.
@@ -429,6 +439,10 @@ var versionsSingleton = keyedVersions{
Key: TenantSettingsTable,
Version: roachpb.Version{Major: 21, Minor: 2, Internal: 62},
},
{
Key: EnablePebbleFormatVersionBlockProperties,
Version: roachpb.Version{Major: 21, Minor: 2, Internal: 64},
},

// *************************************************
// Step (2): Add new versions here.
5 changes: 3 additions & 2 deletions pkg/clusterversion/key_string.go

Some generated files are not rendered by default.

13 changes: 1 addition & 12 deletions pkg/gen/BUILD.bazel
@@ -1,15 +1,4 @@
load(":gen.bzl", "EXPLICIT_SRCS", "docs", "execgen", "gen", "go_proto", "gomock", "misc", "optgen", "stringer")
load(":excluded.bzl", "EXCLUDED_SRCS")

filegroup(
name = "explicitly_generated",
srcs = EXPLICIT_SRCS,
)

filegroup(
name = "excluded",
srcs = EXCLUDED_SRCS,
)
load(":gen.bzl", "docs", "execgen", "gen", "go_proto", "gomock", "misc", "optgen", "stringer")

execgen()
