82161: ui: Add Jest as test runner to DB Console r=nathanstilwell a=nathanstilwell

DB Console is the last place Cockroach Labs is using a test runner other than [Jest](https://jestjs.io/). This PR adds Jest as the test runner intended to replace [Mocha](https://mochajs.org/). Mocha runs in a headless browser via [Karma](https://karma-runner.github.io/latest/index.html), whereas Jest runs tests in Node.js and simulates a browser environment using [jsdom](https://github.com/jsdom/jsdom). Due to this change in environment, you will see not only files that set up the Jest test runner, but also changes to some tests, mocks for browser globals that jsdom does not include by default, and some configuration adjustments to `tsconfig.json`.

Since configuration changes are infrequent and are highly contextual, we decided to err on the side of verbose inline documentation in configuration files.

Details about individual changes to configs or tests are documented in commit messages.

84068: streamingccl: fix span use-after-finish in ingestion frontier r=samiskin a=stevendanna

This fixes the following use-after-finish panic:

    panic: use of Span after Finish. Span: ingestfntr. Finish previously
    called at: <stack not captured. Set debugUseAfterFinish>

    goroutine 1617744 [running]:
    github.com/cockroachdb/cockroach/pkg/util/tracing.(*Span).detectUseAfterFinish(0xc002c5f180)
    	github.com/cockroachdb/cockroach/pkg/util/tracing/span.go:186 +0x279
    github.com/cockroachdb/cockroach/pkg/util/tracing.(*Tracer).startSpanGeneric(0xc0138b2a50, {0x72291c0, 0xc019ebbe40}, {0x68e4eb2, 0x1d}, {{0x0}, 0x0, {0x0, 0x0, {{0x0, ...}, ...}, ...}, ...})
    	github.com/cockroachdb/cockroach/pkg/util/tracing/tracer.go:1207 +0x997
    github.com/cockroachdb/cockroach/pkg/util/tracing.(*Tracer).StartSpanCtx(0xc0138b2a50, {0x72291c0, 0xc019ebbe40}, {0x68e4eb2, 0x1d}, {0xc010cbb760, 0x1, 0x1})
    	github.com/cockroachdb/cockroach/pkg/util/tracing/tracer.go:1062 +0x1a7
    github.com/cockroachdb/cockroach/pkg/util/tracing.ChildSpan({0x72291c0, 0xc019ebbe40}, {0x68e4eb2, 0x1d})
    	github.com/cockroachdb/cockroach/pkg/util/tracing/tracer.go:1577 +0x145
    github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamclient.(*partitionedStreamClient).Heartbeat(0xc00b311080, {0x72291c0, 0xc019ebbe40}, 0xac93aa873318001, {0x16ffc388c7d7c42a, 0x0, 0x0})
    	github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamclient/partitioned_stream_client.go:83 +0xb7
    github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest.(*heartbeatSender).maybeHeartbeat(0xc01980f880, {0x72291c0, 0xc019ebbe40}, {0x16ffc388c7d7c42a, 0x0, 0x0})
    	github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_frontier_processor.go:180 +0x250
    github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest.(*heartbeatSender).startHeartbeatLoop.func1.1()
    	github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_frontier_processor.go:207 +0x41b
    github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest.(*heartbeatSender).startHeartbeatLoop.func1({0x72291c0, 0xc019ebbe40})
    	github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_frontier_processor.go:227 +0xb5
    github.com/cockroachdb/cockroach/pkg/util/ctxgroup.Group.GoCtx.func1()
    	github.com/cockroachdb/cockroach/pkg/util/ctxgroup/ctxgroup.go:169 +0x52
    golang.org/x/sync/errgroup.(*Group).Go.func1()
    	golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:74 +0xb4
    created by golang.org/x/sync/errgroup.(*Group).Go
    	golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:71 +0xdd
    I220708 05:29:42.017167 1 (gostd) testmain.go:90  [-] 1  Test //pkg/ccl/streamingccl/streamingest:streamingest_test exited with error code 2

The use-after-finish was caused by goroutines in the ingestion
frontier processor that lived past a call to
(*ProcessorBase).InternalClose, which finishes the span attached to
the context passed to the processor in Start.

To address this we:

- ensure that we stop our heartbeat thread before calling
  InternalClose in ConsumerClosed,

- provide a TrailingMetaCallback so that we can perform cleanup when
  DrainHelper() is called. When a TrailingMetaCallback is provided,
  DrainHelper() calls it instead of InternalClose(), allowing us to
  correctly clean up before the span is closed, and

- use context cancellation rather than a channel to control the
  heartbeat loop exit, avoiding the need to guard against
  double-closing the channel (see the sketch below).
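
As a minimal illustration of that last bullet (a toy sketch with made-up names, not the actual processor code), context cancellation makes the stop path idempotent where closing a channel would panic:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// heartbeater is a hypothetical stand-in for heartbeatSender: a background
// loop stopped via context cancellation rather than by closing a channel.
type heartbeater struct {
	cancel  context.CancelFunc
	stopped chan struct{}
}

func startHeartbeater(ctx context.Context) *heartbeater {
	ctx, cancel := context.WithCancel(ctx)
	h := &heartbeater{cancel: cancel, stopped: make(chan struct{})}
	go func() {
		defer close(h.stopped)
		ticker := time.NewTicker(10 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return // cancellation is the expected shutdown path
			case <-ticker.C:
				fmt.Println("heartbeat")
			}
		}
	}()
	return h
}

// stop is idempotent: cancel() may be called any number of times, whereas
// close(stopChan) would panic on the second call.
func (h *heartbeater) stop() {
	h.cancel()
	<-h.stopped
}

func main() {
	h := startHeartbeater(context.Background())
	time.Sleep(25 * time.Millisecond)
	h.stop()
	h.stop() // safe no-op; a channel-close based version would panic here
}
```

Because cancel() is safe to call repeatedly, stop() can be reached from both ConsumerClosed and the TrailingMetaCallback path without extra coordination.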

Fixes #84054

Release note: None

84100: kvserver: Clean up empty range directories after snapshots r=nicktrav a=itsbilal

Previously, we were creating subdirectories for ranges and
range snapshots in the auxiliary directory every time we
accepted a snapshot, but we only cleaned up the snapshot
subdirectories when a snapshot scratch space closed. This
left empty parent range directories behind on the FS,
slowing down future calls to Pebble.Capacity() and indirectly
slowing down future AddSSTable calls.

This change adds code to clean up empty range directories
in the aux directory if they're not being used. Some coordination
and synchronization code had to be added to ensure we wouldn't
remove a directory that was just created by a concurrent snapshot.
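
Conceptually, the coordination is a mutex-guarded reference count per range; the sketch below (simplified, assumed types, the real code is in the replica_sst_snapshot_storage.go diff further down) shows the shape of it:

```go
package main

import "sync"

// rangeDirTracker is a simplified, hypothetical stand-in for the per-range
// bookkeeping: it counts open snapshot scratch spaces per range so the
// shared range directory is removed only when the last one closes.
type rangeDirTracker struct {
	mu     sync.Mutex
	counts map[int64]int // rangeID -> open scratch spaces
}

// open registers a new scratch space for the range, preventing a concurrent
// close from deleting the range directory out from under it.
func (t *rangeDirTracker) open(rangeID int64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.counts[rangeID]++
}

// close deregisters a scratch space and reports whether the caller is the
// last user and should remove the now-empty range directory.
func (t *rangeDirTracker) close(rangeID int64) (removeDir bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.counts[rangeID]--
	if t.counts[rangeID] <= 0 {
		delete(t.counts, rangeID)
		return true
	}
	return false
}
```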

Fixes #83137 

Release note (bug fix, performance improvement): Addresses an issue
where imports and rebalances were slowed down by the accumulation of
empty directories from range snapshot applications.

84170: sql/sqlstats: record QuerySummary when merging stats r=ericharmeling a=stevendanna

During execution of a transaction, all statement statistics are
collected in a struct local to that transaction, and then flushed to
the main ApplicationStats container when the transaction finishes.

Previously, when flushing, we failed to copy the QuerySummary field,
leaving `metadata->'querySummary'` empty in most cases.

Prior to ce1b42b this only affected
statements in an explicit transaction. After that commit, it affected
all statements.
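
To make the failure mode concrete, here is a schematic sketch (hypothetical, trimmed types, not the real sqlstats structs) of a field-by-field merge in which one field was missed:

```go
package main

// statementKey and stmtStats are trimmed, hypothetical stand-ins for the
// real sqlstats types.
type statementKey struct {
	database     string
	fullScan     bool
	querySummary string
}

type stmtStats struct {
	database     string
	fullScan     bool
	querySummary string
}

// mergeKey copies per-statement metadata field by field; a field present on
// the key but not copied here silently surfaces as empty downstream.
func (s *stmtStats) mergeKey(k statementKey) {
	s.database = k.database
	s.fullScan = k.fullScan
	s.querySummary = k.querySummary // the one-line copy the fix below adds
}
```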

Release note (bug fix): Fix a bug that caused the querySummary field
in crdb_internal.statement_statistics's metadata column to be empty.

84194: opt: mark SimplifyRootOrdering as an essential rule r=mgartner a=mgartner

The unoptimized query oracle, which disables rules, found a bug in the
execution engine that is only possible to hit if the
`SimplifyRootOrdering` rule is disabled (see #84191). Until the bug is
fixed, we mark the rule as essential so that it is not disabled by these
tests.

Fixes #84067

Release note: None

Co-authored-by: Nathan Stilwell <[email protected]>
Co-authored-by: Sean Barag <[email protected]>
Co-authored-by: Steven Danna <[email protected]>
Co-authored-by: Bilal Akhtar <[email protected]>
Co-authored-by: Marcus Gartner <[email protected]>
6 people committed Jul 11, 2022
6 parents bef1101 + 24bee22 + b32cd1e + 73c5980 + f880548 + 37ed376 commit 309e100
Showing 51 changed files with 3,124 additions and 330 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -1433,7 +1433,7 @@ ui-lint: pkg/ui/yarn.installed $(ESLINT_PLUGIN_CRDB) $(UI_PROTOS_OSS) $(UI_PROTO
 .PHONY: ui-test
 ui-test: $(UI_PROTOS_OSS) $(UI_PROTOS_CCL) $(CLUSTER_UI_JS)
 	$(info $(yellow)NOTE: consider using `./dev ui test` instead of `make ui-test`$(term-reset))
-	$(NODE_RUN) -C pkg/ui/workspaces/db-console $(KARMA) start
+	$(NODE_RUN) -C pkg/ui/workspaces/db-console yarn test
 	$(NODE_RUN) -C pkg/ui/workspaces/cluster-ui yarn ci

 .PHONY: ui-test-watch
2 changes: 1 addition & 1 deletion build/teamcity/cockroach/ci/tests/ui_test_impl.sh
@@ -3,4 +3,4 @@
 set -xeuo pipefail

 bazel build //pkg/cmd/bazci --config=ci
-$(bazel info bazel-bin --config=ci)/pkg/cmd/bazci/bazci_/bazci test --config=ci //pkg/ui/workspaces/db-console:karma //pkg/ui/workspaces/cluster-ui:jest
+$(bazel info bazel-bin --config=ci)/pkg/cmd/bazci/bazci_/bazci test --config=ci //pkg/ui/workspaces/db-console:jest //pkg/ui/workspaces/cluster-ui:jest
pkg/ccl/streamingccl/streamingest/stream_ingestion_frontier_processor.go
@@ -125,6 +125,10 @@ func newStreamIngestionFrontierProcessor(
 		nil, /* memMonitor */
 		execinfra.ProcStateOpts{
 			InputsToDrain: []execinfra.RowSource{sf.input},
+			TrailingMetaCallback: func() []execinfrapb.ProducerMetadata {
+				sf.close()
+				return nil
+			},
 		},
 	); err != nil {
 		return nil, err
@@ -146,8 +150,8 @@ type heartbeatSender struct {
 	flowCtx *execinfra.FlowCtx
 	// cg runs the heartbeatSender thread.
 	cg ctxgroup.Group
-	// Send signal to stopChan to stop heartbeat sender.
-	stopChan chan struct{}
+	// cancel stops heartbeat sender.
+	cancel func()
 	// heartbeatSender closes this channel when it stops.
 	stoppedChan chan struct{}
 }
@@ -164,7 +168,7 @@ func newHeartbeatSender(
 		streamID:        streaming.StreamID(spec.StreamID),
 		flowCtx:         flowCtx,
 		frontierUpdates: make(chan hlc.Timestamp),
-		stopChan:        make(chan struct{}),
+		cancel:          func() {},
 		stoppedChan:     make(chan struct{}),
 	}, nil
 }
@@ -182,6 +186,8 @@ func (h *heartbeatSender) maybeHeartbeat(
 }

 func (h *heartbeatSender) startHeartbeatLoop(ctx context.Context) {
+	ctx, cancel := context.WithCancel(ctx)
+	h.cancel = cancel
 	h.cg = ctxgroup.WithContext(ctx)
 	h.cg.GoCtx(func(ctx context.Context) error {
 		sendHeartbeats := func() error {
@@ -196,8 +202,6 @@ func (h *heartbeatSender) startHeartbeatLoop(ctx context.Context) {
 			select {
 			case <-ctx.Done():
 				return ctx.Err()
-			case <-h.stopChan:
-				return nil
 			case <-timer.C:
 				timer.Reset(streamingccl.StreamReplicationConsumerHeartbeatFrequency.
 					Get(&h.flowCtx.EvalCtx.Settings.SV))
@@ -231,15 +235,20 @@
 }

 // Stop the heartbeat loop and returns any error at time of heartbeatSender's exit.
-// Should be called at most once.
+// Can be called multiple times.
 func (h *heartbeatSender) stop() error {
-	close(h.stopChan) // Panic if closed multiple times
-	return h.cg.Wait()
+	h.cancel()
+	return h.wait()
 }

 // Wait for heartbeatSender to be stopped and returns any error.
-func (h *heartbeatSender) err() error {
-	return h.cg.Wait()
+func (h *heartbeatSender) wait() error {
+	err := h.cg.Wait()
+	// We expect to see context cancelled when shutting down.
+	if errors.Is(err, context.Canceled) {
+		return nil
+	}
+	return err
 }

 // Start is part of the RowSource interface.
@@ -305,25 +314,31 @@ func (sf *streamIngestionFrontier) Next() (
 		// If heartbeatSender has error, it means remote has error, we want to
 		// stop the processor.
 		case <-sf.heartbeatSender.stoppedChan:
-			err := sf.heartbeatSender.err()
-			log.Warningf(sf.Ctx, "heartbeat sender has stopped with error: %s", err)
+			err := sf.heartbeatSender.wait()
+			if err != nil {
+				log.Errorf(sf.Ctx, "heartbeat sender exited with error: %s", err)
+			}
 			sf.MoveToDraining(err)
 			return nil, sf.DrainHelper()
 		}
 	}
 	return nil, sf.DrainHelper()
 }

-// ConsumerClosed is part of the RowSource interface.
-func (sf *streamIngestionFrontier) ConsumerClosed() {
+func (sf *streamIngestionFrontier) close() {
+	if err := sf.heartbeatSender.stop(); err != nil {
+		log.Errorf(sf.Ctx, "heartbeat sender exited with error: %s", err)
+	}
 	if sf.InternalClose() {
-		if err := sf.heartbeatSender.stop(); err != nil {
-			log.Errorf(sf.Ctx, "heartbeatSender exited with error: %s", err.Error())
-		}
 		sf.metrics.RunningCount.Dec(1)
 	}
 }

+// ConsumerClosed is part of the RowSource interface.
+func (sf *streamIngestionFrontier) ConsumerClosed() {
+	sf.close()
+}
+
 // decodeResolvedSpans decodes an encoded datum of jobspb.ResolvedSpans into a
 // jobspb.ResolvedSpans object.
 func decodeResolvedSpans(
6 changes: 3 additions & 3 deletions pkg/cmd/dev/testdata/datadriven/ui
@@ -94,12 +94,12 @@ bazel test //pkg/ui:lint --test_output all
 exec
 dev ui test
 ----
-bazel test //pkg/ui/workspaces/db-console:karma //pkg/ui/workspaces/cluster-ui:jest --test_output errors
+bazel test //pkg/ui/workspaces/db-console:jest //pkg/ui/workspaces/cluster-ui:jest --test_output errors

 exec
 dev ui test --verbose
 ----
-bazel test //pkg/ui/workspaces/db-console:karma //pkg/ui/workspaces/cluster-ui:jest --test_output all
+bazel test //pkg/ui/workspaces/db-console:jest //pkg/ui/workspaces/cluster-ui:jest --test_output all

 exec
 dev ui test test --watch
@@ -114,7 +114,7 @@ cp sandbox/pkg/ui/workspaces/db-console/ccl/src/js/protos.d.ts crdb-checkout/pkg
 rm -rf crdb-checkout/pkg/ui/workspaces/cluster-ui/dist
 cp -r sandbox/pkg/ui/workspaces/cluster-ui/dist crdb-checkout/pkg/ui/workspaces/cluster-ui/dist
 bazel info workspace --color=no
-bazel run @yarn//:yarn -- --silent --cwd crdb-checkout/pkg/ui/workspaces/db-console karma:watch
+bazel run @yarn//:yarn -- --silent --cwd crdb-checkout/pkg/ui/workspaces/db-console
 bazel run @yarn//:yarn -- --silent --cwd crdb-checkout/pkg/ui/workspaces/cluster-ui jest --watch

 exec
3 changes: 1 addition & 2 deletions pkg/cmd/dev/ui.go
@@ -451,7 +451,6 @@ Replaces 'make ui-test' and 'make ui-test-watch'.`,
 			"--silent",
 			"--cwd",
 			dirs.dbConsole,
-			"karma:watch",
 		)

 		env := append(os.Environ(), "BAZEL_TARGET=fake")
@@ -490,7 +489,7 @@ Replaces 'make ui-test' and 'make ui-test-watch'.`,
 		)
 		args := append([]string{
 			"test",
-			"//pkg/ui/workspaces/db-console:karma",
+			"//pkg/ui/workspaces/db-console:jest",
 			"//pkg/ui/workspaces/cluster-ui:jest",
 		}, testOutputArg...)

58 changes: 56 additions & 2 deletions pkg/kv/kvserver/replica_sst_snapshot_storage.go
@@ -19,6 +19,7 @@ import (
 	"github.com/cockroachdb/cockroach/pkg/roachpb"
 	"github.com/cockroachdb/cockroach/pkg/storage"
 	"github.com/cockroachdb/cockroach/pkg/storage/fs"
+	"github.com/cockroachdb/cockroach/pkg/util/syncutil"
 	"github.com/cockroachdb/cockroach/pkg/util/uuid"
 	"github.com/cockroachdb/errors"
 	"golang.org/x/time/rate"
@@ -31,6 +32,10 @@ type SSTSnapshotStorage struct {
 	engine  storage.Engine
 	limiter *rate.Limiter
 	dir     string
+	mu      struct {
+		syncutil.Mutex
+		ranges map[roachpb.RangeID]int
+	}
 }

 // NewSSTSnapshotStorage creates a new SST snapshot storage.
@@ -39,6 +44,10 @@ func NewSSTSnapshotStorage(engine storage.Engine, limiter *rate.Limiter) SSTSnap
 		engine:  engine,
 		limiter: limiter,
 		dir:     filepath.Join(engine.GetAuxiliaryDir(), "sstsnapshot"),
+		mu: struct {
+			syncutil.Mutex
+			ranges map[roachpb.RangeID]int
+		}{ranges: make(map[roachpb.RangeID]int)},
 	}
 }

@@ -47,9 +56,16 @@ func NewSSTSnapshotStorage(engine storage.Engine, limiter *rate.Limiter) SSTSnap
 func (s *SSTSnapshotStorage) NewScratchSpace(
 	rangeID roachpb.RangeID, snapUUID uuid.UUID,
 ) *SSTSnapshotStorageScratch {
+	s.mu.Lock()
+	rangeStorage := s.mu.ranges[rangeID]
+	if rangeStorage == 0 {
+		s.mu.ranges[rangeID] = 1
+	}
+	s.mu.Unlock()
 	snapDir := filepath.Join(s.dir, strconv.Itoa(int(rangeID)), snapUUID.String())
 	return &SSTSnapshotStorageScratch{
 		storage: s,
+		rangeID: rangeID,
 		snapDir: snapDir,
 	}
 }
@@ -59,14 +75,38 @@ func (s *SSTSnapshotStorage) Clear() error {
 	return s.engine.RemoveAll(s.dir)
 }

+// scratchClosed is called when an SSTSnapshotStorageScratch created by this
+// SSTSnapshotStorage is closed. This method handles any cleanup of range
+// directories if all SSTSnapshotStorageScratches corresponding to a range
+// have closed.
+func (s *SSTSnapshotStorage) scratchClosed(rangeID roachpb.RangeID) {
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	val := s.mu.ranges[rangeID]
+	if val <= 0 {
+		panic("inconsistent scratch ref count")
+	}
+	val--
+	s.mu.ranges[rangeID] = val
+	if val == 0 {
+		delete(s.mu.ranges, rangeID)
+		// Suppressing an error here is okay, as orphaned directories are at worst
+		// a performance issue when we later walk directories in pebble.Capacity()
+		// but not a correctness issue.
+		_ = s.engine.RemoveAll(filepath.Join(s.dir, strconv.Itoa(int(rangeID))))
+	}
+}
+
 // SSTSnapshotStorageScratch keeps track of the SST files incrementally created
 // when receiving a snapshot. Each scratch is associated with a specific
 // snapshot.
 type SSTSnapshotStorageScratch struct {
 	storage    *SSTSnapshotStorage
+	rangeID    roachpb.RangeID
 	ssts       []string
 	snapDir    string
 	dirCreated bool
+	closed     bool
 }

 func (s *SSTSnapshotStorageScratch) filename(id int) string {
@@ -87,6 +127,9 @@ func (s *SSTSnapshotStorageScratch) createDir() error {
 func (s *SSTSnapshotStorageScratch) NewFile(
 	ctx context.Context, bytesPerSync int64,
 ) (*SSTSnapshotStorageFile, error) {
+	if s.closed {
+		return nil, errors.AssertionFailedf("SSTSnapshotStorageScratch closed")
+	}
 	id := len(s.ssts)
 	filename := s.filename(id)
 	s.ssts = append(s.ssts, filename)
@@ -103,6 +146,9 @@
 // the provided SST when it is finished using it. If the provided SST is empty,
 // then no file will be created and nothing will be written.
 func (s *SSTSnapshotStorageScratch) WriteSST(ctx context.Context, data []byte) error {
+	if s.closed {
+		return errors.AssertionFailedf("SSTSnapshotStorageScratch closed")
+	}
 	if len(data) == 0 {
 		return nil
 	}
@@ -129,8 +175,13 @@ func (s *SSTSnapshotStorageScratch) SSTs() []string {
 	return s.ssts
 }

-// Clear removes the directory and SSTs created for a particular snapshot.
-func (s *SSTSnapshotStorageScratch) Clear() error {
+// Close removes the directory and SSTs created for a particular snapshot.
+func (s *SSTSnapshotStorageScratch) Close() error {
+	if s.closed {
+		return nil
+	}
+	s.closed = true
+	defer s.storage.scratchClosed(s.rangeID)
 	return s.storage.engine.RemoveAll(s.snapDir)
 }

@@ -157,6 +208,9 @@ func (f *SSTSnapshotStorageFile) ensureFile() error {
 			return err
 		}
 	}
+	if f.scratch.closed {
+		return errors.AssertionFailedf("SSTSnapshotStorageScratch closed")
+	}
 	var err error
 	if f.bytesPerSync > 0 {
 		f.file, err = f.scratch.storage.engine.CreateWithSync(f.filename, int(f.bytesPerSync))
12 changes: 10 additions & 2 deletions pkg/kv/kvserver/replica_sst_snapshot_storage_test.go
@@ -12,6 +12,8 @@ package kvserver
 import (
 	"context"
 	"io/ioutil"
+	"path/filepath"
+	"strconv"
 	"testing"

 	"github.com/cockroachdb/cockroach/pkg/kv/kvserver/rditer"
@@ -93,12 +95,18 @@ func TestSSTSnapshotStorage(t *testing.T) {
 	_, err = f.Write([]byte("foo"))
 	require.NoError(t, err)

-	// Check that Clear removes the directory.
-	require.NoError(t, scratch.Clear())
+	// Check that Close removes the snapshot directory as well as the range
+	// directory.
+	require.NoError(t, scratch.Close())
 	_, err = eng.Stat(scratch.snapDir)
 	if !oserror.IsNotExist(err) {
 		t.Fatalf("expected %s to not exist", scratch.snapDir)
 	}
+	rangeDir := filepath.Join(sstSnapshotStorage.dir, strconv.Itoa(int(scratch.rangeID)))
+	_, err = eng.Stat(rangeDir)
+	if !oserror.IsNotExist(err) {
+		t.Fatalf("expected %s to not exist", rangeDir)
+	}
 	require.NoError(t, sstSnapshotStorage.Clear())
 	_, err = eng.Stat(sstSnapshotStorage.dir)
 	if !oserror.IsNotExist(err) {
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/store_snapshot.go
@@ -508,7 +508,7 @@ func (kvSS *kvBatchSnapshotStrategy) Close(ctx context.Context) {
 	// A failure to clean up the storage is benign except that it will leak
 	// disk space (which is reclaimed on node restart). It is unexpected
 	// though, so log a warning.
-	if err := kvSS.scratch.Clear(); err != nil {
+	if err := kvSS.scratch.Close(); err != nil {
 		log.Warningf(ctx, "error closing kvBatchSnapshotStrategy: %v", err)
 	}
 }
16 changes: 16 additions & 0 deletions pkg/sql/logictest/testdata/logic_test/statement_statistics
@@ -400,3 +400,19 @@ SELECT * FROM txn_fingerprint_view

 statement ok
 COMMIT
+
+statement ok
+BEGIN; SELECT count(1) AS wombat1; COMMIT
+
+query T
+SELECT metadata->>'querySummary' FROM crdb_internal.statement_statistics WHERE metadata->>'query' LIKE '%wombat1%'
+----
+SELECT count(_) AS wom...
+
+statement ok
+SELECT count(1) AS wombat2
+
+query T
+SELECT metadata->>'querySummary' FROM crdb_internal.statement_statistics WHERE metadata->>'query' LIKE '%wombat2%'
+----
+SELECT count(_) AS wom...
4 changes: 4 additions & 0 deletions pkg/sql/opt/xform/optimizer.go
@@ -980,6 +980,10 @@ func (o *Optimizer) disableRules(probability float64) {
 		// supports distinct on an empty column set.
 		int(opt.EliminateDistinctNoColumns),
 		int(opt.EliminateEnsureDistinctNoColumns),
+		// TODO(#84191): Needed to remove the same column and direction
+		// appearing consecutively in ordering columns, which can cause
+		// incorrect results until #84191 is addressed.
+		int(opt.SimplifyRootOrdering),
 	)

 	for i := opt.RuleName(1); i < opt.NumRuleNames; i++ {
1 change: 1 addition & 0 deletions pkg/sql/sqlstats/ssmemstorage/ss_mem_storage.go
@@ -473,6 +473,7 @@ func (s *stmtStats) mergeStatsLocked(statistics *roachpb.CollectedStatementStati
 	s.mu.distSQLUsed = statistics.Key.DistSQL
 	s.mu.fullScan = statistics.Key.FullScan
 	s.mu.database = statistics.Key.Database
+	s.mu.querySummary = statistics.Key.QuerySummary
 }

 // getStatsForStmt retrieves the per-stmt stat object. Regardless of if a valid
3 changes: 0 additions & 3 deletions pkg/ui/workspaces/cluster-ui/jest.config.js
@@ -11,7 +11,6 @@
 const path = require("path");
 const isBazel = !!process.env.BAZEL_TARGET;

-
 const bazelOnlySettings = {
   haste: {
     // Platforms that include a POSIX-compatible `find` binary default to using it for test file
@@ -27,8 +26,6 @@ const bazelOnlySettings = {
 };

 module.exports = {
-  haste: isBazel ? {
-  } : undefined,
   moduleFileExtensions: ["ts", "tsx", "js", "jsx", "json", "node"],
   moduleNameMapper: {
     "\\.(jpg|ico|jpeg|eot|otf|webp|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "identity-obj-proxy",