
bazel: need a way to --rewrite all test files in a directory hierarchy #82053

Closed
rytaft opened this issue May 30, 2022 · 0 comments · Fixed by #82471
Assignees: rickystewart
Labels: A-build-system, C-enhancement (Solution expected to add code/behavior + preserve backward-compat; pg compat issues are exception), T-dev-inf


rytaft commented May 30, 2022

I'd like to be able to rewrite all tests in the optimizer directory.

I'd expect something like this to work: ./dev test pkg/sql/opt/... --rewrite

But I get these errors:

becca@crlMBP-HR9LPFMH9JMTkz cockroach % ./dev test pkg/sql/opt/... --rewrite
$ bazel test pkg/sql/opt/...:all --nocache_test_results --test_env=GOTRACEBACK=all --test_env=COCKROACH_WORKSPACE=/Users/becca/go/src/github.com/cockroachdb/cockroach --test_arg -rewrite --sandbox_writable_path=/Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/... --test_sharding_strategy=disabled --test_output errors
INFO: Invocation ID: 03c503a2-6495-479a-9be0-7ea2a0498d09
INFO: Analyzed 88 targets (0 packages loaded, 0 targets configured).
INFO: Found 57 targets and 31 test targets...
ERROR: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/exec/explain/BUILD.bazel:40:8: GoCompilePkg pkg/sql/opt/exec/explain/explain_test.internal.recompileinternal.a failed: I/O exception during sandboxed execution: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/... (No such file or directory)
ERROR: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/partition/BUILD.bazel:21:8: GoTestGenTest pkg/sql/opt/partition/partition_test_/testmain.go failed: I/O exception during sandboxed execution: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/... (No such file or directory)
ERROR: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/opbench/BUILD.bazel:3:11: GoCompilePkg pkg/sql/opt/opbench/opbench.a failed: I/O exception during sandboxed execution: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/... (No such file or directory)
ERROR: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/exec/explain/BUILD.bazel:40:8: GoTestGenTest pkg/sql/opt/exec/explain/explain_test_/testmain.go failed: I/O exception during sandboxed execution: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/... (No such file or directory)
ERROR: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/partition/BUILD.bazel:21:8: GoCompilePkg pkg/sql/opt/partition/partition_test.internal.a failed: I/O exception during sandboxed execution: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt/... (No such file or directory)
INFO: Elapsed time: 0.267s, Critical Path: 0.02s
INFO: 8 processes: 8 internal.
FAILED: Build did NOT complete successfully
//pkg/sql/opt:opt_test                                                NO STATUS
//pkg/sql/opt/bench:bench_test                                        NO STATUS
//pkg/sql/opt/cat:cat_test                                            NO STATUS
//pkg/sql/opt/constraint:constraint_test                              NO STATUS
//pkg/sql/opt/cycle:cycle_test                                        NO STATUS
//pkg/sql/opt/distribution:distribution_test                          NO STATUS
//pkg/sql/opt/exec/execbuilder:execbuilder_test                       NO STATUS
//pkg/sql/opt/idxconstraint:idxconstraint_test                        NO STATUS
//pkg/sql/opt/indexrec:indexrec_test                                  NO STATUS
//pkg/sql/opt/invertedexpr:invertedexpr_test                          NO STATUS
//pkg/sql/opt/invertedidx:invertedidx_test                            NO STATUS
//pkg/sql/opt/lookupjoin:lookupjoin_test                              NO STATUS
//pkg/sql/opt/memo:memo_disallowed_imports_test                       NO STATUS
//pkg/sql/opt/memo:memo_test                                          NO STATUS
//pkg/sql/opt/norm:norm_test                                          NO STATUS
//pkg/sql/opt/opbench:opbench_test                                    NO STATUS
//pkg/sql/opt/optbuilder:optbuilder_test                              NO STATUS
//pkg/sql/opt/optgen/cmd/optfmt:optfmt_test                           NO STATUS
//pkg/sql/opt/optgen/cmd/optgen:optgen_test                           NO STATUS
//pkg/sql/opt/optgen/exprgen:exprgen_test                             NO STATUS
//pkg/sql/opt/optgen/lang:lang_test                                   NO STATUS
//pkg/sql/opt/ordering:ordering_test                                  NO STATUS
//pkg/sql/opt/partialidx:partialidx_test                              NO STATUS
//pkg/sql/opt/partition:partition_test                                NO STATUS
//pkg/sql/opt/props:props_test                                        NO STATUS
//pkg/sql/opt/props/physical:physical_test                            NO STATUS
//pkg/sql/opt/testutils:testutils_test                                NO STATUS
//pkg/sql/opt/testutils/opttester:opttester_test                      NO STATUS
//pkg/sql/opt/testutils/testcat:testcat_test                          NO STATUS
//pkg/sql/opt/xform:xform_test                                        NO STATUS
//pkg/sql/opt/exec/explain:explain_test                         FAILED TO BUILD

FAILED: Build did NOT complete successfully
ERROR: exit status 36
becca@crlMBP-HR9LPFMH9JMTkz cockroach % 

Even if some directories produce errors because there is nothing to --rewrite in them, I'd like the other directories to succeed.
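
This is not the fix that landed in #82471, just a hedged Go sketch of one possible approach: when the target pattern ends in `/...`, derive the `--sandbox_writable_path` from the directory portion of the pattern instead of passing the wildcard literally (the function name `sandboxWritablePath` is invented for illustration):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sandboxWritablePath turns a dev-style target pattern into a concrete
// directory suitable for Bazel's --sandbox_writable_path flag. The literal
// "pkg/sql/opt/..." is not a real path, so the ":all" and "/..." suffixes
// are stripped and the enclosing directory is made writable, which covers
// every test package underneath it. Illustrative only, not the code from
// #82471.
func sandboxWritablePath(workspace, target string) string {
	dir := strings.TrimSuffix(target, ":all")
	dir = strings.TrimSuffix(dir, "/...")
	return filepath.Join(workspace, dir)
}

func main() {
	const workspace = "/Users/becca/go/src/github.com/cockroachdb/cockroach"
	fmt.Println(sandboxWritablePath(workspace, "pkg/sql/opt/..."))
	// Prints: /Users/becca/go/src/github.com/cockroachdb/cockroach/pkg/sql/opt
}
```

With an expansion like that, the command above would pass a directory that actually exists instead of the literal `.../pkg/sql/opt/...` path that Bazel rejects with "No such file or directory".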

Jira issue: CRDB-16188

Epic: CRDB-8036

rytaft added the C-enhancement label on May 30, 2022
rickystewart self-assigned this on Jun 6, 2022
craig bot pushed a commit that referenced this issue Jun 7, 2022
…82521

79748: streamingccl: add a builtin function that retrieves stream ingestion statistics r=gh-casper a=gh-casper

Support crdb_internal.stream_ingestion_stats(ingestion_job_id) to report
running progress for a stream ingestion job.
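
For context, a minimal Go usage sketch (not from the PR) that calls the builtin over pgwire; the connection string and job ID are placeholders, and the return value is assumed here to be a single text/JSON column:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres-wire driver; CockroachDB speaks pgwire
)

func main() {
	// Placeholder connection string; adjust for your cluster.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	const ingestionJobID = 123456789 // hypothetical stream ingestion job ID
	var stats string
	// The return type is assumed to be text/JSON here; check the builtin's docs.
	err = db.QueryRow(
		`SELECT crdb_internal.stream_ingestion_stats($1)`, ingestionJobID,
	).Scan(&stats)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(stats)
}
```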

Release note (sql change): Support crdb_internal.stream_ingestion_stats
(ingestion_job_id) to report running progress for a stream ingestion job.

Release Justification: Cat 4.

Closes: #57430

81598: obsservice: initial commit r=andreimatei a=andreimatei

These are the humble beginnings of an Observability Service for CRDB. In
time, this is meant to take over a lot of the observability
functionality, extracting it from the database itself. For now, all
there is here is an HTTP reverse proxy, able to route HTTP requests to a
CRDB node (or cluster, through a load balancer). At least for the
transitioning period, if not forever, the Obs Service will route HTTP
paths that it doesn't understand to the Cockroach cluster being
monitored.
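
As a rough illustration only (not the Obs Service's actual code), a standard-library reverse proxy that forwards any path it does not handle itself to a CRDB node; the addresses, certificate paths, and the `/obs/healthz` route are placeholders:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Address of the monitored CRDB node or load balancer (placeholder).
	crdbURL, err := url.Parse("https://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}

	// Any path this service does not own is forwarded to CRDB.
	proxy := httputil.NewSingleHostReverseProxy(crdbURL)

	mux := http.NewServeMux()
	mux.HandleFunc("/obs/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // a path the service owns (hypothetical)
	})
	mux.Handle("/", proxy) // everything else goes to the Cockroach cluster

	// Serve HTTPS with the service's own certificate (file paths are placeholders).
	log.Fatal(http.ListenAndServeTLS(":8081", "obs.crt", "obs.key", mux))
}
```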

The HTTP proxy accepts its own certificates and can serve HTTPS
independently of CRDB (which might be running in --insecure mode and not
serving HTTPS). However, if CRDB is serving HTTPS, then the Obs Service
must also be configured with a certificate so it can also serve HTTPS.

The code of the Obs Service is split in between a binary (minimal
shell), and a library. The idea is to keep the door open for other repos
to extend the Obs Service with extra functionality (e.g. the CC managed
service might want to add cloud-specific observability features). For
such purposes, I've considered making a dedicated Go module for the Obs
Service so that source-level imports of the Obs Service would be as
ergonomic as Go wants them to be. I've put a decent amount of work in
playing with this but, ultimately, I've decided against it, at least for
now. The problem is that the Obs Service wants to take minor code
dependencies on random CRDB libraries (syncutil, log, etc.) and the
cockroach Go module is fairly broken (our git tags look like we do
semantic versioning, but we don't actually do it). The go tooling has a
terrible time with the cockroach module. Also, our code is generally not
`go build`-able. If the obs service was a dedicated module, it would
need to take a dependency on the cockroach module, which would negate
the win for people who want to import it (in fact, it'd make matters
worse for importers because the nasty cockroach dependency would be more
hidden). An alternative I've considered was to start creating modules
for all of the dependencies of the Obs Service. But, if CRDB would then
need to depend on these modules, there's all sorts of problems.

Release note: None

82041: storage: clear range keys in `Writer.Clear*Range` methods r=jbowens a=erikgrinaker

This patch clears range keys in the `Writer` methods `ClearRawRange`,
`ClearMVCCRange`, and `ClearMVCCIteratorRange`, as well as in the
`ClearRangeWithHeuristic` helper.

Range keys are not cleared in `ClearMVCCVersions`, since this method is
specifically for clearing MVCC point key versions, and it is not
possible to clear range keys between versions of the same point key.

Touches #70412.

Release note: None

82377: sql: use declarative schema changer for add column with unique r=fqazi a=fqazi

Previously, the declarative schema changer was disabled
when adding a column with unique constraint, since we
didn't have a complete set of elements / logic for adding
secondary indexes. This was inadequate because operations
would not be ordered correctly and rollbacks could potentially
break. To address this, this patch enables support and
addresses the missing pieces to set up the secondary indexes
correctly, such as adding suffix columns, ordering operations
correctly, and returning appropriate errors.

Release note: None

82380: util/log: rework the bufferSink r=andreimatei a=andreimatei

This is a rewrite of bufferSink (now called bufferedSink). As opposed to
before, the code is smaller, more efficient, and, most importantly,
simpler. I couldn't understand the previous code well, and I think it
made new additions to it (in particular, adding a shutdown sequence)
hard to analyze. The semantics of this buffer are now better and simpler.

Before, a log message was traveling through a sequence of two buffers
before being eventually delivered to the wrapped sink:
1. The message was first placed into an accumulator buffer.
2. Then, when that buffer grew too old or too large, it was placed into a
   queue to be flushed later, and the accumulator was reset.
3. There was a limit on the size of this queue of batches to
   be flushed. When the queue was full, new messages were dropped.
There were two goroutines managing these two buffers, which is one too
many.

This behavior of queuing multiple, distinct flushes was peculiar at
best. There's generally no reason to not coalesce all the flushes into
one, thus speeding them up and reducing the risk of message drops. The
bufferSink's memory limit was not explicit, and difficult enough to
infer: it was set indirectly from the combination of limits on the two
buffers, expressed in different units.

Now, the sequence of two buffers (the accumulator and the flush queue)
is gone. There's a single buffer accumulating new messages, plus at most
one in-flight flush which captures its buffer. As soon as a flush is
triggered, the buffer is reset. The buffer is now configured with a
memory limit and, when that limit is reached, the _oldest_ messages in
the buffer are dropped, instead of the newest. This, combined with the
blocking semantics of the forceSync option, means that a forceSync
message is never dropped (unless its size is, by itself, larger than the
buffer limit). The number of goroutines running for the bufferSink drops
from two to one.
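
A toy Go sketch of the scheme described above — a single accumulating buffer, at most one in-flight flush, drop-oldest on overflow — with invented names; it is not the util/log bufferedSink and omits the age-based flush trigger and forceSync handling:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// toyBufferedSink buffers messages and forwards them to a wrapped sink.
// One buffer accumulates new messages; at most one flush is in flight at
// a time; when the memory limit is exceeded, the oldest messages are
// dropped first.
type toyBufferedSink struct {
	mu       sync.Mutex
	buf      []string
	bufBytes int
	limit    int          // rough memory limit, in bytes
	flushing bool         // is a flush currently in flight?
	out      func(string) // the wrapped sink
}

func (s *toyBufferedSink) Write(msg string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.buf = append(s.buf, msg)
	s.bufBytes += len(msg)

	// Over the limit: drop the _oldest_ messages, not the newest.
	for s.bufBytes > s.limit && len(s.buf) > 1 {
		s.bufBytes -= len(s.buf[0])
		s.buf = s.buf[1:]
	}

	// Trigger at most one in-flight flush. The flush captures the current
	// buffer and the buffer is reset immediately; messages arriving while
	// the flush runs are coalesced into the next one. (A real sink would
	// also flush on a timer and offer a blocking forceSync option.)
	if !s.flushing {
		s.flushing = true
		captured := s.buf
		s.buf, s.bufBytes = nil, 0
		go func() {
			s.out(strings.Join(captured, "\n"))
			s.mu.Lock()
			s.flushing = false
			s.mu.Unlock()
		}()
	}
}

func main() {
	var once sync.Once
	done := make(chan struct{})
	sink := &toyBufferedSink{
		limit: 1 << 10,
		out: func(batch string) {
			fmt.Println("flushed:\n" + batch)
			once.Do(func() { close(done) })
		},
	}
	sink.Write("message 1")
	sink.Write("message 2") // coalesced into a later flush if one is already in flight
	<-done
}
```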

Release note: None

82436: server, ui: handle null plan gist in getStatementDetailsPerPlanHash r=ericharmeling a=ericharmeling

This PR adds a new function that handles null plan gists in `getStatementDetailsPerPlanHash`. The PR also adds some logic to hide null plan gists in the SqlBox of the Statement Details page.

Here's a screenshot of the Statement Details page when the plan gist is null:

<img width="1024" alt="Screen Shot 2022-06-03 at 7 18 59 PM" src="https://user-images.githubusercontent.com/27286675/171966978-6a444297-70d4-4ef3-8afa-5068e3f35d9e.png">


Fixes #82095.

We probably want to figure out why there are null values in [`planner.instrumentation.planGist`](https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/executor_statement_metrics.go#L212) in the first place. `getStatementDetailsPerPlanHash` is pulling from the system table populated with the plan gist from `planner.instrumentation.planGist`. CC `@cucaroach`

Release note: None

82471: dev: allow `--rewrite` when testing many targets with `/...` r=rail a=rickystewart

Closes #82053.

Release note: None

82479: keys: resolve subtle non-bug by exporting RaftLogKeyFromPrefix r=tbg,erikgrinaker a=nvanbenschoten

This commit resolves a subtle interaction that came tantalizingly close to a bug in `StateLoader.LoadLastIndex`. The method uses the `StateLoader`'s underlying `keys.RangeIDPrefixBuf` to generate two keys (`RaftLogPrefix` and `RaftLogKey`) that are in use at the same time.

`RangeIDPrefixBuf` avoids heap allocations by sharing a single byte slice across all keys that it generates. This is why the type has the comment:
> The generated keys are only valid until the next call to one of the key generation methods.

As would be expected, given this comment, the second use of the `RangeIDPrefixBuf` overwrote the buffer and invalidated the first key. However, it happened to get lucky and generate a new key with the same prefix as the old key. As a result, the contents of the first key did not change.

To make this aliasing more explicit and avoid this becoming a real bug in the future, we introduce a new `RaftLogKeyFromPrefix` function that callers can use to generate raft log entry keys from a raft log prefix.

We then use this new function to avoid some redundant encoding work elsewhere due to repeated calls to `RaftLogKey`.
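
A generic Go illustration of the hazard (invented names, not the actual `keys` package API): every key generated from the shared buffer aliases the same bytes, so the second call only leaves the first key intact because the prefixes happen to match, while a `RaftLogKeyFromPrefix`-style helper lets the caller own the prefix explicitly:

```go
package main

import "fmt"

// sharedKeyBuf mimics the RangeIDPrefixBuf pattern: every generated key
// aliases the same underlying byte slice, so a key is only valid until
// the next generation call. Names here are invented for illustration.
type sharedKeyBuf []byte

func (b sharedKeyBuf) logPrefix() []byte {
	return append(b[:0], "/range/42/log/"...)
}

func (b sharedKeyBuf) logKey(index uint64) []byte {
	return append(b.logPrefix(), fmt.Sprintf("%d", index)...)
}

// logKeyFromPrefix is the analogue of RaftLogKeyFromPrefix: the caller
// supplies (and owns) the prefix, and the full slice expression forces
// append to allocate, so the prefix's backing array is never overwritten.
func logKeyFromPrefix(prefix []byte, index uint64) []byte {
	return append(prefix[:len(prefix):len(prefix)], fmt.Sprintf("%d", index)...)
}

func main() {
	buf := make(sharedKeyBuf, 0, 64)

	prefix := buf.logPrefix()
	key := buf.logKey(7) // reuses the same buffer; prefix stays intact only by luck
	fmt.Printf("prefix=%q key=%q (aliased: %t)\n", prefix, key, &prefix[0] == &key[0])

	// The safer pattern: copy the prefix once, then derive entry keys from it.
	owned := append([]byte(nil), buf.logPrefix()...)
	fmt.Printf("key=%q\n", logKeyFromPrefix(owned, 7))
}
```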

82520: cli,sql: fix reported result columns for EXPLAIN ANALYZE r=yuzefovich a=rafiss

Fixes #82502

This fixes an issue where EXPLAIN ANALYZE would report a RowDescription
for both the EXPLAIN and for the statement being explained. If the
statement had a different number of result columns, this would confuse
the CLI.

No release note, since this bug was not released.

Release note: None

82521: roachtest: fix a minor bug with tpch_concurrency roachtest r=yuzefovich a=yuzefovich

`tpch_concurrency` pushes the cluster to its limits, so it is expected
that OOMs occur. Previously, in a recently added code path, we would
fail the test if an error occurred when running a debugging statement,
yet that error is expected and should be returned.

Fixes: #82510.

Release note: None

Co-authored-by: Casper <[email protected]>
Co-authored-by: Andrei Matei <[email protected]>
Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Faizan Qazi <[email protected]>
Co-authored-by: Eric Harmeling <[email protected]>
Co-authored-by: Ricky Stewart <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
craig bot closed this as completed in e7bc3b3 on Jun 7, 2022