
Add unique_id() builtin. #2570

Merged (6 commits) Sep 21, 2015
Conversation

petermattis (Collaborator):

Unique IDs are composed of the current time in milliseconds, a 31-bit
random number and a 32-bit node-id. If two unique IDs are generated within
the same millisecond on a server, the random number is incremented instead
of being generated fresh. This ensures that unique ID generation is
monotonic on a single server and k-sorted across servers.

Fixes #2513.
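A minimal Go sketch of the generation scheme described above (the names `uniqueID` and `idState` are illustrative, not the actual CockroachDB code): the ID is built from the millisecond timestamp, a 31-bit random value, and the node ID, with the random component incremented rather than regenerated when two IDs land in the same millisecond.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// idState holds the last millisecond timestamp and random component
// handed out, guarded by a mutex so generation is safe across goroutines.
var idState struct {
	sync.Mutex
	millis uint64
	rand   uint32
}

// uniqueID returns the three components of an ID for the given node.
func uniqueID(nodeID uint32) (millis uint64, r uint32, node uint32) {
	now := uint64(time.Now().UnixNano() / int64(time.Millisecond))
	idState.Lock()
	defer idState.Unlock()
	if now <= idState.millis {
		// Same millisecond (or clock went backwards): increment the
		// random component instead of regenerating it, preserving
		// per-node monotonicity.
		idState.rand++
	} else {
		idState.millis = now
		idState.rand = uint32(rand.Int31())
	}
	return idState.millis, idState.rand, nodeID
}

func main() {
	m, r, n := uniqueID(1)
	fmt.Println(m, r, n)
}
```

Within one process this yields strictly non-decreasing (millis, rand) pairs, which is the per-node monotonicity property discussed below.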

@@ -428,6 +428,14 @@ func init() {
cmpOps[cmpArgs{In, tupleType, tupleType}] = evalTupleIN
}

// EvalContext defines the context in which to evaluate an expression, allowing
// the retrieval of state such as the node ID or statement start time.
type EvalContext struct {
petermattis (Collaborator, Author):

@vivekmenezes This will be of interest to your current_timestamp work. I think you'll just need to add another field here, populate it when the context is created and use it from the current_timestamp function.

Contributor:

awesome!

tamird (Contributor) commented Sep 21, 2015:

LGTM

uniqueIDState.rand++
} else {
uniqueIDState.millis = millis
uniqueIDState.rand = uint32(rand.Int31())
Contributor:

What value does the randomness add? It could be used to avoid collisions if the node ID were not present, but since it is there, (timestamp, counter, nodeID) seems sufficient for uniqueness (resetting the counter to 0 every millisecond. Or not; I don't see a reason to care about the IDs being monotonic per node).

One downside to a counter that starts from 0 is that it leaks a little more information about the state of the cluster (e.g. the rate of ID generation at the time the record was created). But if we're concerned about that we may want to obscure the node ID as well. (timestamp plus a sufficiently large random value would be my preferred solution if we don't want to leak too much information).

Contributor:

If we swap the random field for a counter it might also make sense to reorder the fields to (timestamp, nodeID, counter). This way records that were created on the same node would be more likely to be adjacent for future access (although the effectiveness of this will be limited to records created the same millisecond so I'm not sure it makes sense).

petermattis (Collaborator, Author):

I'm not entirely sure about the randomness either. I was initially following in the footsteps of Twitter's snowflake ID, though clearly we've already diverged from there. The possibilities here are:

  • timestamp+random+nodeID - guaranteed unique.
  • timestamp+counter+nodeID - guaranteed unique, monotonic on a single node.
  • timestamp+random - probabilistically unique. also makes the encoded ID slightly larger.
  • timestamp+nodeID+counter - guaranteed unique, monotonic and sequential on a single node within the timestamp granularity.

The advantage to per-node monotonicity is that the insertion of multiple rows into a table will behave as expected:

  INSERT INTO t VALUES (unique_id(), "hello"), (unique_id(), "world")

Without the per-node monotonicity guarantee, the ordering of these rows would be random. We could definitely document this, but it seemed too surprising when I thought about it.

Member:

Worth mentioning that time stamp + counter is equivalent to using our hlc time stamp, assuming you keep one stamp in nanos, not millis.

(Quoting the review context in sql/parser/builtins.go:)

	// TODO(pmattis): We could squeeze a bit more space out of the encoding. For
	// example, we could limit the node-id to 16-bits and the random component to
	// 16-bits and handle overflow of the rand value by incrementing the
	// millisecond component.
	//
	// TODO(pmattis): Do we have to worry about persisting the milliseconds value
	// periodically to avoid the clock ever going backwards (e.g. due to NTP
	// adjustment)?
	millis := uint64(time.Now().UnixNano() / int64(time.Millisecond))
	uniqueIDState.Lock()
	if millis <= uniqueIDState.millis {
		millis = uniqueIDState.millis
		uniqueIDState.rand++
	} else {
		uniqueIDState.millis = millis
		uniqueIDState.rand = uint32(rand.Int31())

petermattis (Collaborator, Author):

Another option:

  • nanos+nodeID where we bump the nanos value if the clock ever returns the same one. This would be the smallest encoding of the existing proposed options.
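A minimal sketch of this nanos-bump option (`uniqueNanos` is an illustrative name, not an actual implementation): if the clock returns a value that is not strictly greater than the last one handed out, bump the stored value by one nanosecond instead.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// nanosState remembers the last nanosecond timestamp handed out.
var nanosState struct {
	sync.Mutex
	last uint64
}

// uniqueNanos returns a strictly increasing nanosecond value, bumping the
// stored value when the clock repeats or goes backwards.
func uniqueNanos() uint64 {
	nanosState.Lock()
	defer nanosState.Unlock()
	now := uint64(time.Now().UnixNano())
	if now <= nanosState.last {
		nanosState.last++ // clock repeated or went backwards: bump by 1ns
	} else {
		nanosState.last = now
	}
	return nanosState.last
}

func main() {
	a, b := uniqueNanos(), uniqueNanos()
	fmt.Println(b > a) // strictly increasing even within one nanosecond
}
```

Combined with the node ID this is unique across nodes, and it needs no separate random or counter field, which is why it encodes smallest.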

bdarnell (Contributor):

Why would someone use unique_bytes() instead of unique_int()? Just for the extra nodeID address space? This doesn't seem like something we want to expose, especially for multi-tenant hosted solutions where the number of nodes is not something the client should care (or even know) about.

Another thing to consider: a common trick with bigtable was to prefix your keys with something not strictly ordered (either a hash of something or a random number) so you didn't end up with a single tablet as a hotspot for all your writes. We may want to wait until we get some experience to settle on a final id generation scheme.
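The bigtable-style trick mentioned above can be sketched in Go (`prefixedKey` is a hypothetical helper, not part of any real API): hashing the key into a small bucket prefix spreads sequential writes across the keyspace, at the cost of losing global ordering.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// prefixedKey prepends a small hash-derived bucket number to the key so
// that writes with sequential keys fan out across ranges instead of
// hammering a single hotspot.
func prefixedKey(key string, buckets uint32) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return fmt.Sprintf("%02d/%s", h.Sum32()%buckets, key)
}

func main() {
	for _, k := range []string{"user1000", "user1001", "user1002"} {
		fmt.Println(prefixedKey(k, 16))
	}
}
```

Scans then have to fan out across all buckets, which is the usual trade-off of this scheme.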

petermattis (Collaborator, Author):

Yes, exposing both unique_bytes() and unique_int() is ugly and awkward. Do you think 1<<15 node-ids is sufficient?

Perhaps these functions should be renamed experimental_unique_bytes() and experimental_unique_int() until we get some experience with them. Not adding them at all sounds like just punting the problem to the future.

petermattis (Collaborator, Author):

FYI, the bigtable trick can be accomplished using the existing functions:

  SELECT to_hex((random() * 100.0)::int) || unique_bytes()

bdarnell (Contributor):

Renaming them to experimental_* SGTM.

Currently EvalContext only contains a NodeID field, but this structure
presents the obvious place to add StatementTimestamp,
TransactionTimestamp, etc.

Replaced the global NodeID with EvalContext.NodeID.
petermattis (Collaborator, Author):

Added the experimental_ prefix.

petermattis added a commit that referenced this pull request Sep 21, 2015
@petermattis petermattis merged commit e5b07f1 into master Sep 21, 2015
@petermattis petermattis deleted the pmattis/sql-unique-id branch September 21, 2015 18:32
jusongchen:

@petermattis, This is about type casts. Will cockroach support both of the following?
a) SELECT to_hex((random() * 100.0)::int)
b) SELECT to_hex(cast((random() * 100.0) as int))

Since the :: form is a postgres historical usage, I would rather cockroach only support cast(). The :: form looks so odd to me.

regards,
Jusong Chen

petermattis (Collaborator, Author):

@jusongchen We currently support both syntaxes. Everything new looks odd, so I'm not sure that is a good reason to get rid of it. On the other hand, I don't see a strong reason for keeping the :: syntax other than to make transitioning from postgres easier.

jusongchen:

@peter, here are reasons to get rid of the :: syntax:
a) This postgres dialect has little value unless cockroachDB wants to be 100% compatible with postgres.
b) The more special keywords, operators or delimiters, the harder it is for a developer to learn/master the language.
c) Compared to the :: syntax, cast() is much easier to read and understand.

regards,

Jusong


petermattis (Collaborator, Author):

@jusongchen Thanks for the feedback. You've definitely got strong opinions about the syntax which we'll take under consideration. If you have the energy, it would be worthwhile to gather a list of all the non-standard syntax in our grammar. Unfortunately, the only way to discover this syntax currently is by looking at the grammar (sql.y).

jusongchen:

@pmattis, Regarding non-standard syntax, here are operators postgres introduced to implement their corresponding math functions. I recommend cockroach NOT support these operators, as a function like sqrt(2) is much easier to understand than its operator form.

  Operator  Description
  ^         exponentiation
  |/        square root
  ||/       cube root
  !         factorial
  !!        factorial (prefix operator)
  @         absolute value

regards,
Jusong


petermattis (Collaborator, Author):

@jusongchen I specifically pointed you at our grammar because we already do not support these operators. And to make sure we're on the same page: we would like to support ANSI SQL syntax where it exists, but do not plan to limit ourselves to strict ANSI SQL. You'll be fighting a losing battle if you continue to suggest otherwise.

petermattis (Collaborator, Author):

@jusongchen Upon re-reading the above, I realize it was overly terse and direct. Your input is definitely desired on these topics. If you feel strongly that CockroachDB should limit itself to strict ANSI SQL, file an issue so that we can discuss and come to a conclusion. I have strong feelings myself about not wanting to limit ourselves to ANSI SQL, but my feelings are less strong when there is functionality overlap (as in the case of the :: operator). So there is discussion to be had and hopefully a conclusion to be reached that will clarify the decision making process as we implement SQL functionality.

craig bot pushed a commit that referenced this pull request Jan 30, 2024
7 participants