
sql: improve heuristics for distributing lookup/index joins when their input is coming from a single node #59014

Open
yuzefovich opened this issue Jan 14, 2021 · 1 comment
Labels
C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) T-sql-queries SQL Queries Team

Comments


yuzefovich commented Jan 14, 2021

We recently encountered an interesting query plan on a customer support ticket in which a single index joiner processor (on 20.1.10) took up a noticeable amount of time. The reason for having a single processor was that its input table reader reads from only a single range, so it was planned on the range's leaseholder. When the table reader returns a noticeable number of rows (say, in the thousands), the index joiner takes a long time dividing the batches to send and performing the remote reads. If the query ends up being executed often, that single node can become a bottleneck for the cluster, with very high CPU utilization.

This issue describes an idea for mitigating this. During physical planning we could improve our heuristics for distributing lookup/index joiners so that they are planned on multiple nodes even if their input is on a single node. I imagine we would have a threshold on the optimizer's estimate of the number of rows coming from the single table reader; when that threshold is crossed, we would plan distributed lookup/index joins. It's a bit unclear how many nodes, and which ones, that planning should use.
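The threshold idea above could be sketched as follows. This is a hypothetical Go sketch, not CockroachDB's actual planner API: the function name, parameters, and the threshold value of 1000 are all illustrative assumptions.

```go
package main

import "fmt"

// distributeJoinRowThreshold is an illustrative cutoff, not a value
// taken from CockroachDB.
const distributeJoinRowThreshold = 1000

// shouldDistributeLookupJoin sketches the proposed heuristic: if the
// optimizer estimates that the single table reader will produce more
// rows than the threshold, plan the lookup/index join on multiple
// nodes even though its input lives on one node.
func shouldDistributeLookupJoin(estimatedInputRows int, haveEstimate bool) bool {
	// Without an optimizer estimate, stay conservative and keep the
	// join on the single node holding the input.
	if !haveEstimate {
		return false
	}
	return estimatedInputRows > distributeJoinRowThreshold
}

func main() {
	fmt.Println(shouldDistributeLookupJoin(5000, true))  // large input: distribute
	fmt.Println(shouldDistributeLookupJoin(50, true))    // small input: keep local
	fmt.Println(shouldDistributeLookupJoin(5000, false)) // no estimate: keep local
}
```

The open question in the issue (how many nodes, and which ones) is deliberately left out of the sketch.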

Jira issue: CRDB-3346

@yuzefovich yuzefovich added the C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) label Jan 14, 2021
@jlinder jlinder added the T-sql-queries SQL Queries Team label Jun 16, 2021
craig bot pushed a commit that referenced this issue Oct 6, 2021
70648: sql: move a single remote flow to the gateway in some cases r=yuzefovich a=yuzefovich

**sql: show distribution info based on actual physical plan in EXPLAIN**

Previously, the `distribution` info in the `EXPLAIN` output was printed based on the recommendation about the distribution of the plan. For example, if the plan was determined to be "should be distributed", yet it contained only a single flow on the gateway, we would still say that the plan has "full" distribution.

This commit updates the code to print the distribution based on the
actual physical plan (in the example above it would say "local"),
regardless of the reason - whether it is the recommendation to plan
locally or the data happened to be only on the gateway.

I think it makes more sense this way since now DISTSQL diagram
consisting of a single flow on the gateway more appropriately
corresponds to "local" distribution. Additionally, this change is
motivated by the follow-up commit which will introduce changes to the
physical plan during the plan finalization, and we want to show the
correct distribution in the EXPLAIN output for that too.

Release note: None

**sql: move a single remote flow to the gateway in some cases**

This commit updates the physical planner to move a single remote flow onto the gateway in some cases, namely when
- the flow contains a processor that might increase the cardinality of the data flowing through it, or that performs KV work, and
- we estimate that the whole flow doesn't reduce the cardinality when compared against the number of rows read by the table readers.

To be conservative, when there is no estimate, we don't apply this change to the physical plan.
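The two conditions above, plus the conservative no-estimate fallback, can be sketched in Go. This is a simplified illustration, not CockroachDB's actual physical-planner code: the `flowInfo` type and all field names are assumptions made for the example.

```go
package main

import "fmt"

// flowInfo is a hypothetical summary of a physical plan's single flow.
type flowInfo struct {
	singleRemoteFlow bool // the plan consists of exactly one flow, not on the gateway
	hasExpensiveProc bool // a processor may increase cardinality or performs KV work
	rowsRead         int  // rows read by the table readers
	estimatedOutRows int  // optimizer's estimate of the flow's output
	haveEstimate     bool // whether an optimizer estimate exists at all
}

// moveFlowToGateway sketches the commit's decision rule: relocate the
// flow to the gateway only when both bullet-point conditions hold and
// an estimate is available.
func moveFlowToGateway(f flowInfo) bool {
	if !f.singleRemoteFlow || !f.hasExpensiveProc {
		return false
	}
	// Be conservative: without an estimate, leave the plan as is.
	if !f.haveEstimate {
		return false
	}
	// Only move when the flow doesn't reduce cardinality.
	return f.estimatedOutRows >= f.rowsRead
}

func main() {
	base := flowInfo{singleRemoteFlow: true, hasExpensiveProc: true,
		rowsRead: 1000, estimatedOutRows: 1500, haveEstimate: true}
	fmt.Println(moveFlowToGateway(base)) // cardinality not reduced: move

	reducing := base
	reducing.estimatedOutRows = 10
	fmt.Println(moveFlowToGateway(reducing)) // flow filters heavily: leave remote

	noEstimate := base
	noEstimate.haveEstimate = false
	fmt.Println(moveFlowToGateway(noEstimate)) // no estimate: leave remote
}
```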

The justification for this change is that we pin the whole physical plan based on the placement of the table readers. If the plan consists of only a single flow, and the flow is quite expensive, then with a high enough frequency of such flows, the node holding the lease for the table readers' ranges becomes a hot spot (we saw this in practice a few months ago). In such a scenario we might now choose to run the flow on the gateway to distribute the load across the cluster better (assuming that queries are issued against all nodes with equal frequency).

The EXPLAIN output will correctly say "distribution: local" if the flow
is moved to the gateway.

Informs: #59014.

Release note (bug fix): Some query patterns that previously could cause
a single node to become a hot spot have been fixed so that the load is
evenly distributed across the whole cluster.

71011: cli: add --max-sql-memory flag to `cockroach mt start-sql` r=knz a=jaylim-crl

Previously the `--max-sql-memory` flag wasn't available to the multi-tenancy
start-sql command, even though the feature was already there for other
`start`-related commands.

Release note (cli change): `cockroach mt start-sql` will now support the
`--max-sql-memory` flag to configure maximum SQL memory capacity to store
temporary data.

Release justification: The upcoming Serverless MVP release plans to use a different value for `--max-sql-memory` instead of the default of 25% of container memory. This commit only adds a flag, which will be used solely in multi-tenant scenarios and should have no impact on dedicated customers.

71105: sql: do not collect statistics on virtual columns r=mgartner a=mgartner

PR #68312 intended to update the behavior of `CREATE STATISTICS` to prevent statistics collection on virtual computed columns. However, it failed to account for multi-column statistics and for `CREATE STATISTICS` statements that explicitly reference virtual columns. This commit accounts for these two cases.

This prevents internal errors from occurring when the system tries to collect statistics on `NOT NULL` virtual columns. Virtual column values are not included in the primary index, so when the statistics job reads the primary index to sample a virtual column, it assumes the value is null, which violates the column's `NOT NULL` constraint and causes an error.
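The fix amounts to filtering virtual computed columns out of every requested statistics column set, whether implicit (multi-column stats) or explicit (`CREATE STATISTICS ... ON (v)`). A minimal Go sketch, with hypothetical types that stand in for CockroachDB's catalog descriptors:

```go
package main

import "fmt"

// column is an illustrative stand-in for a table column descriptor.
type column struct {
	name    string
	virtual bool // virtual computed column: not stored in the primary index
}

// trimVirtualColumns drops virtual computed columns from a requested
// statistics column list, mirroring the two cases the commit fixes:
// multi-column stats and explicitly referenced virtual columns.
func trimVirtualColumns(cols []column) []column {
	out := make([]column, 0, len(cols))
	for _, c := range cols {
		if !c.virtual {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	cols := []column{{"a", false}, {"v", true}, {"b", false}}
	for _, c := range trimVirtualColumns(cols) {
		fmt.Println(c.name) // prints a, then b; v is skipped
	}
}
```

Since virtual columns have no stored values to sample, skipping them avoids the spurious null reads described above.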

Fixes #71080

Release note (bug fix): A bug has been fixed which caused internal
errors when collecting statistics on tables with virtual computed
columns.

71206: cmd/roachtest: add testLifecycle to hibernateIgnoreList r=ZhouXing19 a=ZhouXing19

Resolves #70482

Add `org.hibernate.userguide.pc.WhereTest.testLifecycle`
to `hibernateIgnoreList21_1`, `hibernateIgnoreList21_2`,
 and `hibernateIgnoreList22_1`.

Release note: None
Release justification: None

71212: opt: use fragment for optstepsweb with long URLs r=mgartner a=mgartner

The `optstepsweb` test command can produce very long URLs. If the URL is
longer than ~8201 characters, the GitHub Pages server hosting
`optsteps.html` responds with a 414 status code.

To make these long URLs work, this commit uses a fragment rather than a query parameter in the URL if the compressed data that represents the optimizer steps is over 8100 characters (the 100 characters of buffer account for the protocol, domain, and path). A fragment is not sent to the server by the browser, so GitHub Pages responds successfully. A downside is that when anchor links are clicked to navigate the page, the original fragment is overridden and the URL becomes invalid. For this reason, we still use a query parameter when the compressed data is small enough.
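The URL-building logic described above reduces to a single length check. A hedged Go sketch, assuming an illustrative base URL and parameter name (the real command's URL format may differ):

```go
package main

import "fmt"

// fragmentThreshold matches the limit described in the commit message:
// data longer than this goes in the fragment to avoid the server's
// 414 (URI Too Long) response.
const fragmentThreshold = 8100

// buildOptstepsURL sketches the fragment-vs-query-parameter choice.
// A fragment is never sent to the server, so GitHub Pages cannot
// reject it; a query parameter is preferred when short, since anchor
// navigation would otherwise clobber the fragment.
func buildOptstepsURL(base, compressed string) string {
	if len(compressed) > fragmentThreshold {
		return base + "#" + compressed
	}
	return base + "?d=" + compressed
}

func main() {
	// Short payloads keep the query-parameter form.
	fmt.Println(buildOptstepsURL("https://example.org/optsteps.html", "abc"))
}
```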

Related to #68697.

Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Jay <[email protected]>
Co-authored-by: Marcus Gartner <[email protected]>
Co-authored-by: Jane Xing <[email protected]>
@mgartner mgartner moved this to New Backlog in SQL Queries Jul 24, 2023

github-actions bot commented Sep 5, 2023

We have marked this issue as stale because it has been inactive for
18 months. If this issue is still relevant, removing the stale label
or adding a comment will keep it active. Otherwise, we'll close it in
10 days to keep the issue queue tidy. Thank you for your contribution
to CockroachDB!
