
streaming: update random stream client to generate keys for an entire tenant #59175

Closed
pbardea opened this issue Jan 20, 2021 · 1 comment
Labels: A-disaster-recovery, C-bug (Code not up to spec/doc; specs & docs deemed correct; solution expected to change code/behavior), T-disaster-recovery

Comments

pbardea (Contributor) commented on Jan 20, 2021

Based on our conversation, the stream generator will produce the descriptor/namespace keys (either on its own partition or on a chosen partition); we can then emit the generated keys as we do today. Additionally, we could create a partition for each table created in the workload.

pbardea added the C-bug label on Jan 20, 2021
craig bot pushed a commit that referenced this issue Feb 9, 2021
59441: streamingccl: improvements to the random stream test client r=pbardea a=adityamaru

This change improves on the random stream client to allow for better
testing of the various components of the stream ingestion job.
Specifically:

- Adds support for specifying the number of partitions. For simplicity,
  a partition generates KVs for a particular table span.

- Generates the system KVs (descriptor and namespace) as the first two
  KVs on the partition stream. I played around with the idea of having
  separate "system" and "table data" partitions, but the code and tests
  became more convoluted compared to the current approach.

- Hooks up the CDC orderValidator to the random stream client's output.
  This gives us some guarantee that the data being generated is
  semantically correct.

- Maintains an in-memory copy of all the streamed events that can be
  efficiently queried. This allows us to compare the ingested KVs to the
  streamed KVs and gain more confidence in our pipeline.

Informs: #59175

Release note: None

59621: pgwire: set options based on "options" URL parameter r=rafiss a=mneverov

Previously, CRDB ignored the "options" URL parameter. Session parameters had to
be set via URL parameters directly:
`postgres://user@host:port/database?serial_normalization=virtual_sequence`

CRDB can now parse the "options" URL parameter and set the corresponding session
parameters (in compliance with Postgres JDBC connection parameters):
`postgres://user@host:port/database?options=-c%20serial_normalization=virtual_sequence`

Fixes #59404

Release note (sql change): CockroachDB now recognizes the "options" URL parameter.

59781: sql,metrics: do not increment ROLLBACK counter if in CommitWait r=arulajmani a=rafiss

fixes #50780 

Release note (bug fix): Previously if `RELEASE SAVEPOINT cockroach_restart`
was followed by `ROLLBACK`, the `sql.txn.rollback.count`
metric would be incremented. This was incorrect, since the txn had already
committed. Now that metric is not incremented in this case.

Co-authored-by: Aditya Maru <[email protected]>
Co-authored-by: Max Neverov <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
craig bot pushed a commit that referenced this issue Feb 18, 2021
59588: streamingest: add job level test for stream ingestion r=pbardea a=adityamaru

This change adds a test that exercises all the components of the stream
ingestion flow. It fixes some missing pieces that were discovered while
writing the test.

Informs: #59175

Release note: None

60424: sql: include sampled stats in TestSampledStatsCollection r=yuzefovich a=asubiotto

Depends on #59992, which is required for this new regression test to pass.

TestSampledStatsCollection previously checked only stats that are collected
regardless of the sample rate. These types of stats (rows/bytes read) are
propagated using metadata rather than the trace.

This resulted in us silently failing to collect any stats when sampling was
enabled once the tracing mode was reverted to legacy. To avoid a regression of
this kind, this commit adds a check that the max memory usage is reported to be
non-zero.

Release note: None (this is a new feature that has no user impact yet)

60626: kvserver: initialize propBuf LAI tracking r=andreimatei a=andreimatei

The initialization of the LAI tracking in the proposal buffer seems
pretty lacking (see #60625). This patch adds initialization of
propBuf.liBase at propBuf.Init() time, which is irrelevant for
production but will help future tests, which will surely want a
propBuf's first assigned LAIs to have some relationship to the
replica state.

Release note: None

Co-authored-by: Aditya Maru <[email protected]>
Co-authored-by: Alfonso Subiotto Marques <[email protected]>
Co-authored-by: Andrei Matei <[email protected]>
pbardea (Contributor, Author) commented on Mar 16, 2021

This was done, I believe, in #59588.

pbardea closed this as completed on Mar 16, 2021