
opt: exec timeout due to excessive exploration during planning #64793

Closed

cockroach-teamcity opened this issue May 6, 2021 · 56 comments

Labels: branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. T-sql-queries SQL Queries Team
cockroach-teamcity commented May 6, 2021

sql/tests.TestRandomSyntaxSQLSmith failed with artifacts on master @ dee0558010170d938ead781d82514fd0bb3b0b88:

Random syntax error:

    rsg_test.go:758: Crash detected: server panic: statement exec timeout

Query:

        WITH
        	with_186941 (col_1103773, col_1103774)
        		AS (
        			SELECT
        				*
        			FROM
        				(
        					VALUES
        						(
        							'clvl':::rand_typ_2,
        							st_zmflag('01020000000300000061E7C39F4D2DF4C1093ABD17665CF3C15063C1421B88D5411C9F6E198EDEF3414EEEFB25DF0EF2C1B4850B796774FE41':::GEOMETRY::GEOMETRY)::INT2
        						),
        						(
        							'n':::rand_typ_2,
        							(
        								SELECT
        									tab_455284.col1_6 AS col_1103772
        								FROM
        									defaultdb.public.table1@[0] AS tab_455284
        								ORDER BY
        									tab_455284.col1_2 DESC, tab_455284.col1_1 DESC
        								LIMIT
        									1:::INT8
        							)
        						),
        						(NULL, 6736:::INT8)
        				)
        					AS tab_455285 (col_1103773, col_1103774)
        		),
        	with_186942 (col_1103775) AS (SELECT * FROM (VALUES ('yk':::rand_typ_3), (NULL)) AS tab_455286 (col_1103775))
        SELECT
        	0:::OID AS col_1103776,
        	((-32244820164.24410487):::DECIMAL::DECIMAL + tab_455291.col1_10::INT8)::DECIMAL AS col_1103777,
        	tab_455287._bool AS col_1103778
        FROM
        	with_186942 AS cte_ref_54113,
        	defaultdb.public.seed@[0] AS tab_455287
        	JOIN defaultdb.public.seed AS tab_455288
        		JOIN defaultdb.public.seed AS tab_455289 ON
        				(tab_455288._int8) = (tab_455289._int8)
        				AND (tab_455288._date) = (tab_455289._date)
        				AND (tab_455288._float8) = (tab_455289._float8)
        		JOIN defaultdb.public.table1@[0] AS tab_455290
        			JOIN defaultdb.public.table1@primary AS tab_455291
        				JOIN defaultdb.public.table1@[0] AS tab_455295
        					JOIN defaultdb.public.seed AS tab_455296
        						JOIN defaultdb.public.seed AS tab_455297 ON
        								(tab_455296._int8) = (tab_455297._int8) AND (tab_455296._date) = (tab_455297._date) ON
        							(tab_455295.col1_5) = (tab_455297._float8)
        							AND (tab_455295.col1_5) = (tab_455296._float8)
        							AND (tab_455295.col1_5) = (tab_455297._float8)
        							AND (tab_455295.col1_5) = (tab_455297._float8) ON
        						(tab_455291.col1_2) = (tab_455295.tableoid) AND (tab_455291.col1_7) = (tab_455295.col1_1) ON
        					(tab_455290.col1_2) = (tab_455291.col1_9) AND (tab_455290.col1_7) = (tab_455291.col1_7) ON
        				(tab_455289._float8) = (tab_455296._float8) ON
        			(tab_455287._float4) = (tab_455290.col1_5)
        			AND (tab_455287.tableoid) = (tab_455295.col1_9)
        			AND (tab_455287._bool) = (tab_455295.col1_7);

Schema:

    rsg_test.go:575: To reproduce, use schema:
    rsg_test.go:577: 
        SET CLUSTER SETTING sql.defaults.drop_enum_value.enabled = true;
        SET enable_drop_enum_value = true;
        
        		SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false;
        		SET CLUSTER SETTING sql.stats.histogram_collection.enabled = false;
        		SET CLUSTER SETTING sql.defaults.interleaved_tables.enabled = true;
        	CREATE TABLE table1 (col1_0 CHAR NOT NULL, col1_1 BOOL NOT NULL, col1_2 REGPROC NOT NULL, col1_3 REGPROCEDURE NOT NULL, col1_4 TIMETZ NOT NULL, col1_5 FLOAT8 NULL, col1_6 INT2 NOT NULL, col1_7 BOOL, col1_8 BOX2D NOT NULL, col1_9 REGNAMESPACE NOT NULL, PRIMARY KEY (col1_8 DESC, col1_9 DESC, col1_4 DESC, col1_1, col1_2 ASC, col1_3 DESC, col1_0 DESC, col1_6), col1_10 INT2 NOT NULL AS (col1_6 + 22798:::INT8) VIRTUAL, FAMILY (col1_4), FAMILY (col1_0, col1_5), FAMILY (col1_1), FAMILY (col1_8, col1_3, col1_9, col1_7), FAMILY (col1_2), FAMILY (col1_6));
        ALTER TABLE table1 INJECT STATISTICS '[{"columns": ["col1_1"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_3"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_5"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_6"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_0"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_2"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_4"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_7"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_8"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_buckets": [{"distinct_range": 0, "num_eq": 10000, "num_range": 0, "upper_bound": "BOX(-10 -10,10 10)"}, {"distinct_range": 80000000, "num_eq": 400000, "num_range": 80000000, "upper_bound": "BOX(-2.047411080418742 -0.7244478039064473,0.5089714550246708 0.018071822061174414)"}, {"distinct_range": 0, "num_eq": 8000000000, "num_range": 5989342388077183799, "upper_bound": "BOX(-1.7565984260464864 -0.22461265918951656,0.3378908234793202 0.05434080126933499)"}], "histo_col_type": "BOX2D", "name": "__auto__", "null_count": 0, "row_count": 0}, 
{"columns": ["col1_9"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}, {"columns": ["col1_10"], "created_at": "2000-01-01 00:00:00+00:00", "distinct_count": 0, "histo_col_type": "", "name": "__auto__", "null_count": 0, "row_count": 0}]':::JSONB;
        CREATE TYPE rand_typ_0 AS ENUM ('hqerb', 'pvo', 'yjc', 'q', 'f', 'uo');
        CREATE TYPE rand_typ_1 AS ENUM ('ul', 'uteo', 'r', 'c', 'p');
        CREATE TYPE rand_typ_2 AS ENUM ('kl', 'ls', 'jz', 'clvl', 'hrtt', 'n');
        CREATE TYPE rand_typ_3 AS ENUM ('yk');
        ;
    rsg_test.go:577: 
        SET CLUSTER SETTING sql.defaults.drop_enum_value.enabled = true;
        SET enable_drop_enum_value = true;
        
        CREATE TYPE greeting AS ENUM ('hello', 'howdy', 'hi', 'good day', 'morning');
        CREATE TABLE IF NOT EXISTS seed AS
        	SELECT
        		g::INT2 AS _int2,
        		g::INT4 AS _int4,
        		g::INT8 AS _int8,
        		g::FLOAT4 AS _float4,
        		g::FLOAT8 AS _float8,
        		'2001-01-01'::DATE + g AS _date,
        		'2001-01-01'::TIMESTAMP + g * '1 day'::INTERVAL AS _timestamp,
        		'2001-01-01'::TIMESTAMPTZ + g * '1 day'::INTERVAL AS _timestamptz,
        		g * '1 day'::INTERVAL AS _interval,
        		g % 2 = 1 AS _bool,
        		g::DECIMAL AS _decimal,
        		g::STRING AS _string,
        		g::STRING::BYTES AS _bytes,
        		substring('00000000-0000-0000-0000-' || g::STRING || '00000000000', 1, 36)::UUID AS _uuid,
        		'0.0.0.0'::INET + g AS _inet,
        		g::STRING::JSONB AS _jsonb,
        		enum_range('hello'::greeting)[g] as _enum
        	FROM
        		generate_series(1, 5) AS g;
        
        INSERT INTO seed DEFAULT VALUES;
        CREATE INDEX on seed (_int8, _float8, _date);
        CREATE INVERTED INDEX on seed (_jsonb);
        ;
    rsg_test.go:579: 
    rsg_test.go:580: -- test log scope end --
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestRandomSyntaxSQLSmith052073609
--- FAIL: TestRandomSyntaxSQLSmith (300.50s)
Reproduce

To reproduce, try:

make stressrace TESTS=TestRandomSyntaxSQLSmith PKG=./pkg/sql/tests TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1

Same failure on other branches

Internal log

mjibson marked as alumn{us/a}; resolving to rafiss instead

/cc @cockroachdb/sql-queries @rafiss


Jira issue: CRDB-7250

@cockroach-teamcity cockroach-teamcity added branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels May 6, 2021
rytaft (Collaborator) commented May 6, 2021

I ran the above query in cockroach demo and it returned successfully after 73 seconds. Although it didn't time out, this seems unacceptably long. This is pretty much all planning time, since running EXPLAIN took just as long. I was planning to look into avoiding out-of-control exploration this milestone anyway, so I'll take this one.

Here's the explain output:

                                                         info
-----------------------------------------------------------------------------------------------------------------------
  distribution: local
  vectorized: true

  • render
  │
  └── • cross join
      │
      ├── • values
      │     size: 0 columns, 2 rows
      │
      └── • render
          │
          └── • hash join
              │ equality: (col1_2, col1_7, col1_9) = (tableoid, col1_1, col1_2)
              │
              ├── • scan
              │     estimated row count: 1 (100% of the table; stats collected 7797 days ago)
              │     table: table1@primary
              │     spans: FULL SCAN
              │
              └── • lookup join
                  │ table: seed@seed__int8__float8__date_idx
                  │ equality: (_int8, col1_5, _date) = (_int8,_float8,_date)
                  │
                  └── • hash join
                      │ equality: (_float8) = (_float8)
                      │
                      ├── • scan
                      │     missing stats
                      │     table: seed@seed__int8__float8__date_idx
                      │     spans: FULL SCAN
                      │
                      └── • lookup join
                          │ table: seed@seed__int8__float8__date_idx
                          │ equality: (_int8, col1_5, _date) = (_int8,_float8,_date)
                          │
                          └── • hash join
                              │ equality: (_float8) = (col1_5)
                              │
                              ├── • scan
                              │     missing stats
                              │     table: seed@seed__int8__float8__date_idx
                              │     spans: FULL SCAN
                              │
                              └── • hash join
                                  │ equality: (_float4, tableoid, _bool) = (col1_5, col1_9, col1_7)
                                  │
                                  ├── • scan
                                  │     missing stats
                                  │     table: seed@primary
                                  │     spans: FULL SCAN
                                  │
                                  └── • hash join
                                      │ estimated row count: 1
                                      │ equality: (col1_7) = (col1_1)
                                      │
                                      ├── • scan
                                      │     estimated row count: 1 (100% of the table; stats collected 7797 days ago)
                                      │     table: table1@primary
                                      │     spans: FULL SCAN
                                      │
                                      └── • scan
                                            estimated row count: 1 (100% of the table; stats collected 7797 days ago)
                                            table: table1@primary
                                            spans: FULL SCAN
(65 rows)

Time: 73.718s total (execution 73.716s / network 0.001s)

@rytaft rytaft self-assigned this May 6, 2021
Two cockroach-teamcity comments were marked as off-topic.

@mgartner (Collaborator)

One of the errors above appears unrelated to the original timeout: internal error: ordering column group (86,107) contains non-equivalent columns (op sort). I've created #65038 to track it separately.

@mgartner (Collaborator)

Another error above appears unrelated to the original: internal error: unsupported comparison: inet to timestamptz. I've created #65040 to track it separately.

Several cockroach-teamcity comments were marked as off-topic or outdated, and comments from @rafiss and @RaduBerinde were minimized.

@mgartner (Collaborator)

#65547 only fixed a specific case where a query had a LIMIT 1, so this might be different. I'll see if it's still reproducible.

@mgartner (Collaborator)

It looks like #65547 did fix the last failure. So the only remaining issues above are the timeouts.

A cockroach-teamcity comment was minimized.

@yuzefovich (Member)

The last one is tracked in #65763.

Further cockroach-teamcity comments were marked as minimized, off-topic, or resolved.

@jordanlewis (Member)

I assigned @yuzefovich because I figured Radu was just assigned at random, but if that was on purpose sorry and please ignore!

@yuzefovich (Member)

Based on this comment, Radu was assigned to look into some stuff, but the latest several occurrences are due to a minor omission in the CASE operator refactor (which I'll address shortly), so I'm assigning back to Radu.

@yuzefovich yuzefovich assigned RaduBerinde and unassigned yuzefovich Jul 19, 2021
craig bot pushed a commit that referenced this issue Jul 19, 2021
67757: colexec: fix CASE operator a bit r=yuzefovich a=yuzefovich

Whenever we're updating the length on a batch containing bytes-like
vectors, we update those vectors to have non-decreasing offsets. If the
batch has a selection vector set, we use the largest index in the
selection to update the offsets. This logic relies on the assumption
that the selection vector is set on the batch **before** the length,
which wasn't the case in a couple of places.

In particular, the recent refactor of the CASE operator added support
for bytes-like types as its output, but the existing code set the
length on the batch first. This is now fixed.

Addresses: #64793 (comment).
Fixes: #67744.

Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
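The invariant the commit message describes — set the selection vector before the length, because setting the length uses the largest selected index to keep bytes-like offsets non-decreasing — can be sketched with a toy batch. All names here are illustrative; the real types live in CockroachDB's `coldata` package.

```go
package main

import "fmt"

// batch is a toy stand-in for a columnar batch holding one bytes-like
// vector as offsets into a flat buffer: offsets[i+1]-offsets[i] is the
// length of element i.
type batch struct {
	offsets []int
	sel     []int // optional selection vector
	length  int
}

// SetSelection must be called before SetLength.
func (b *batch) SetSelection(sel []int) { b.sel = sel }

// SetLength extends the offsets so they stay non-decreasing. With a
// selection vector set, the largest selected index determines how far
// the offsets must reach -- which is why the selection has to be set
// on the batch before the length.
func (b *batch) SetLength(n int) {
	b.length = n
	maxIdx := n - 1
	if b.sel != nil {
		maxIdx = 0
		for _, i := range b.sel[:n] {
			if i > maxIdx {
				maxIdx = i
			}
		}
	}
	for len(b.offsets) < maxIdx+2 {
		b.offsets = append(b.offsets, b.offsets[len(b.offsets)-1])
	}
}

func main() {
	b := &batch{offsets: []int{0, 3}}
	b.SetSelection([]int{0, 4}) // selection first...
	b.SetLength(2)              // ...then length
	fmt.Println(len(b.offsets)) // prints 6: offsets cover selected index 4
}
```

Reversing the two calls would leave the offsets too short for the selected indices, which is the class of bug the commit fixes.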
Several more cockroach-teamcity comments were marked as minimized, outdated, off-topic, or resolved.

@rytaft rytaft changed the title sql/tests: TestRandomSyntaxSQLSmith failed opt: exec timeout due to excessive exploration during planning Aug 30, 2021
rytaft (Collaborator) commented Aug 30, 2021

@RaduBerinde I'm guessing the limiting of excessive exploration won't make it into 21.2? Should we move this issue to the backlog / low likelihood bucket?

@RaduBerinde (Member)

Yeah, moved it to backlog.

DrewKimball added a commit to DrewKimball/cockroach that referenced this issue Jul 26, 2022
This commit adds two slow-planning queries pulled from cockroachdb#64793 to be used
in benchmarking the optimizer. In addition, the `ReorderJoinsLimit` has been
set to the default 8 for benchmarking tests.

Release note: None
DrewKimball added commits with the same message to DrewKimball/cockroach on Jul 28 and Jul 29, 2022.
craig bot pushed a commit that referenced this issue Jul 29, 2022
…85329

84975: storage: add `MVCCRangeKeyStack` for range keys r=nicktrav,jbowens a=erikgrinaker

**storage: add `MVCCRangeKeyStack` for range keys**

This patch adds `MVCCRangeKeyStack` and `MVCCRangeKeyVersion`, a new
range key representation that will be returned by `SimpleMVCCIterator`.
It is more compact, for efficiency, and comes with a set of convenience
methods to simplify common range key processing.

Resolves #83895.

Release note: None
  
**storage: return `MVCCRangeKeyStack` from `SimpleMVCCIterator`**

This patch changes `SimpleMVCCIterator.RangeKeys()` to return
`MVCCRangeKeyStack` instead of `[]MVCCRangeKeyValue`. Callers have not
been migrated to properly make use of this -- instead, they call
`AsRangeKeyValues()` and construct and use the old data structure.

The MVCC range tombstones tech note is also updated to reflect this.

Release note: None
  
**storage: migrate MVCC code to `MVCCRangeKeyStack`**

Release note: None
  
***: migrate higher-level code to `MVCCRangeKeyStack`**

Release note: None
  
**kvserver/gc: partially migrate to `MVCCRangeKeyStack`**

Some parts require invasive changes to MVCC stats helpers. These will
shortly be consolidated with other MVCC stats logic elsewhere, so the
existing logic is retained for now by using `AsRangeKeyValues()`.

Release note: None
  
**storage: remove `FirstRangeKeyAbove()` and `HasRangeKeyBetween()`**

Release note: None

85017: Revert "sql: Add database ID to sampled query log" r=THardy98 a=THardy98

Reverts: #84195
This reverts commit 307817e.

Removes the DatabaseID field from the
`SampledQuery` telemetry log due to the potential of indefinite blocking
in the case of a lease acquisition failure. Protobuf field not reserved as 
no official build was released with these changes yet.

Release note (sql change): Removes the DatabaseID field from the
`SampledQuery` telemetry log due to the potential of indefinite blocking
in the case of a lease acquisition failure.

85024: cloud/gcp: add custom retryer for gcs storage, retry on stream INTERNAL_ERROR r=rhu713 a=rhu713

Currently, errors like
`stream error: stream ID <x>; INTERNAL_ERROR; received from peer`
are not being retried. Create a custom retryer to retry these errors as
suggested by:

googleapis/google-cloud-go#3735
googleapis/google-cloud-go#784

Fixes: #85217, #85216, #85204, #84162

Release note: None


85069: optbuilder: handle unnest returning a tuple r=DrewKimball a=DrewKimball

Currently, the return types of SRFs that return multiple columns are
represented as tuples with labels. The tuple labels are used to decide
whether or not to create a single output column for the SRF, or multiple.
The `unnest` function can return a single column if it has a single argument,
and the type of that column can be a tuple with labels. This could cause the
old logic to mistakenly create multiple output columns for `unnest`, which
could lead to panics down the line and incorrect behavior otherwise.

This commit adds a special case for `unnest` in the `optbuilder` to only expand
tuple return types if there is more than one argument (implying more than one
output column). Other SRFs do not have the same problem because they either
always return the same number of columns, cannot return tuples, or both.

Fixes #58438

Release note (bug fix): Fixed a bug existing since release 20.1 that could
cause a panic in rare cases when the unnest function was used with a
tuple return type.
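The special case in #85069 can be sketched as a small decision helper (hypothetical name and signature; the real logic lives in `optbuilder`):

```go
package main

import "fmt"

// numOutputCols decides how many output columns an SRF produces: a
// labeled-tuple return type is expanded into one column per tuple
// element, except for single-argument unnest, whose single output
// column may itself be a labeled tuple.
func numOutputCols(funcName string, numArgs, tupleArity int, returnsLabeledTuple bool) int {
	if !returnsLabeledTuple {
		return 1
	}
	if funcName == "unnest" && numArgs == 1 {
		return 1 // e.g. unnest of an array of labeled tuples
	}
	return tupleArity
}

func main() {
	fmt.Println(numOutputCols("unnest", 1, 3, true)) // prints 1
	fmt.Println(numOutputCols("unnest", 2, 2, true)) // prints 2
}
```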

85100: opt: perf improvements for large queries r=DrewKimball a=DrewKimball

**opt: add bench test for slow queries**

This commit adds two slow-planning queries pulled from #64793 to be used
in benchmarking the optimizer. In addition, the `ReorderJoinsLimit` has been
set to the default 8 for benchmarking tests.

**opt: add struct for tracking column equivalence sets**

Previously, the `JoinOrderBuilder` would construct a `FuncDepSet` from
scratch on each call to `addJoins` in order to eliminate redundant join
filters. This led to unnecessary large allocations because `addJoins` is
called an exponential number of times in query size.

This commit adds a struct `EquivSet` that efficiently stores equivalence
relations as `ColSets` in a slice. Rather than being constructed on each
call to `addJoins`, a `Reset` method is called that maintains slice memory.

In the future, `EquivSet` can be used to handle equivalencies within `FuncDepSet`
structs as well. This will avoid a significant number of allocations in cases with
many equivalent columns, as outlined in #83963.
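A minimal sketch of the `EquivSet` idea — equivalence groups stored as sets in a reusable slice, with a `Reset` that keeps the backing memory — assuming simplified types (the real implementation uses `opt.ColSet` bitmaps):

```go
package main

import "fmt"

// colSet is a stand-in for opt.ColSet (a bitmap of column IDs).
type colSet = map[int]bool

// EquivSet stores column-equivalence relations as disjoint groups in a
// slice so it can be Reset between calls instead of reallocated.
type EquivSet struct {
	groups []colSet
}

// Reset empties the set while keeping the backing slice memory.
func (s *EquivSet) Reset() { s.groups = s.groups[:0] }

// Add records col1 == col2, merging groups as needed.
func (s *EquivSet) Add(col1, col2 int) {
	i1, i2 := s.find(col1), s.find(col2)
	switch {
	case i1 < 0 && i2 < 0:
		s.groups = append(s.groups, colSet{col1: true, col2: true})
	case i1 >= 0 && i2 < 0:
		s.groups[i1][col2] = true
	case i1 < 0 && i2 >= 0:
		s.groups[i2][col1] = true
	case i1 != i2:
		for c := range s.groups[i2] { // merge the two groups
			s.groups[i1][c] = true
		}
		s.groups = append(s.groups[:i2], s.groups[i2+1:]...)
	}
}

// AreEqual reports whether the two columns are in the same group.
func (s *EquivSet) AreEqual(col1, col2 int) bool {
	i := s.find(col1)
	return i >= 0 && s.groups[i][col2]
}

func (s *EquivSet) find(col int) int {
	for i, g := range s.groups {
		if g[col] {
			return i
		}
	}
	return -1
}

func main() {
	var s EquivSet
	s.Add(1, 2)
	s.Add(3, 4)
	s.Add(2, 3)                  // merges both groups
	fmt.Println(s.AreEqual(1, 4)) // prints true
	s.Reset()
	fmt.Println(s.AreEqual(1, 2)) // prints false
}
```

Redundant join filters can then be detected with `AreEqual` against the already-recorded equalities, which is the `JoinOrderBuilder` use case described above.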

**opt: avoid usage of FastIntMap in optimizer hot paths**

Previously, `computeHashJoinCost` would use a `FastIntMap` to represent join
equality filters to pass to `computeFiltersCost`. In addition,
`GenerateMergeJoins` used a `FastIntMap` to look up columns among its join
equality columns. This led to unnecessary allocations, since column IDs are
often large enough to exceed the small field of `FastIntMap`.

This commit modifies `computeFiltersCost` to take an anonymous function
that is used to decide whether to skip an equality condition, removing the
need for a mapping between columns.

This commit also refactors `GenerateMergeJoins` to simply perform a linear
scan of its equality columns; this avoids the allocation issue, and should be
fast in practice because the number of equalities will not generally be large.

Release note: None
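The closure-based pattern from the `FastIntMap` change can be sketched as follows (illustrative names and cost model, not the real optimizer signatures): the caller supplies a predicate over filters instead of materializing a column mapping.

```go
package main

import "fmt"

// filter is a stand-in for a join-equality predicate between columns.
type filter struct{ leftCol, rightCol int }

// computeFiltersCost charges for each filter not already covered as a
// hash-join equality. The skip closure replaces a per-call map keyed
// by column ID, avoiding the allocation entirely.
func computeFiltersCost(filters []filter, skip func(filter) bool) float64 {
	const costPerFilter = 0.01
	var cost float64
	for _, f := range filters {
		if skip(f) {
			continue // already applied as an equality condition
		}
		cost += costPerFilter
	}
	return cost
}

func main() {
	filters := []filter{{1, 10}, {2, 20}, {3, 30}}
	eqCols := []int{1, 2} // equality join columns: a linear scan suffices
	cost := computeFiltersCost(filters, func(f filter) bool {
		for _, c := range eqCols {
			if c == f.leftCol {
				return true
			}
		}
		return false
	})
	fmt.Printf("%.2f\n", cost) // prints 0.01: only {3, 30} contributes
}
```

The linear scan in the closure mirrors the `GenerateMergeJoins` refactor: with a handful of equality columns, scanning a small slice beats allocating a map.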

85146: [backupccl] Use Expr for backup's Detached and Revision History options r=benbardin a=benbardin

This will allow us to set them to null, which will be helpful for ALTER commands.

Release note: None

85234: dev: add rewritable paths for norm tests r=mgartner a=mgartner

Tests in `pkg/sql/opt/norm` are similar to tests in `pkg/sql/opt/xform`
and `pkg/sql/opt/memo` in that they rely on fixtures in
`pkg/sql/opt/testutils/opttester/testfixtures`. This commit adds these
fixtures as rewritable paths for norm tests so that
`./dev test pkg/sql/opt/xform --rewrite` does not fail with errors like:

    open pkg/sql/opt/testutils/opttester/testfixtures/tpcc_schema: operation not permitted

Release note: None

85325: sql: fix explain gist output to show number of scan span constraints r=cucaroach a=cucaroach

If there were span constraints we would always print 1, need to actually
append them to get the count right.

Fixes: #85324

Release note: None


85327: sql: fix udf logic test r=chengxiong-ruan a=chengxiong-ruan

Fixes: #85303

Release note: None

85329: colexec: fix recent concat fix r=yuzefovich a=yuzefovich

The recent fix of the Concat operator in the vectorized engine doesn't
handle the array concatenation correctly and this is now fixed.

Fixes: #85295.

Release note: None

Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Thomas Hardy <[email protected]>
Co-authored-by: Rui Hu <[email protected]>
Co-authored-by: DrewKimball <[email protected]>
Co-authored-by: Andrew Kimball <[email protected]>
Co-authored-by: Ben Bardin <[email protected]>
Co-authored-by: Marcus Gartner <[email protected]>
Co-authored-by: Tommy Reilly <[email protected]>
Co-authored-by: Chengxiong Ruan <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
@yuzefovich (Member)

I randomly stumbled across this issue. @rytaft do you think we should close this given that we now have some cancellation checks in the optimizer? Also, the original repro is now instant; I looked over all the repros, and none of them took long to plan.

rytaft (Collaborator) commented Aug 18, 2022

Sounds good to me! There is more work we can do to avoid excessive exploration in the optimizer (to avoid canceling the query altogether and still allow a slightly suboptimal plan to run), but I don't think we need to keep this issue open to track it, since the original repro is fast now. We are tracking the work to avoid excessive exploration in Jira Epic CRDB-10032.

@rytaft rytaft closed this as completed Aug 18, 2022
rytaft pushed a commit with the same message to rytaft/cockroach on Aug 24, 2022.
@mgartner mgartner moved this to Done in SQL Queries Jul 24, 2023