sql: make all processor cores vectorize-able #57268
We have work scheduled this release to migrate a lot of the "bulk processors" to use `ProcessorBase` and therefore implement the `execinfra.RowSource` interface.
This is indeed the case.
55909: sql,colexec: add support for wrapping LocalPlanNode r=yuzefovich a=yuzefovich

**tree: unwrap DOidWrapper before casting** When performing a cast from `DOidWrapper`, we're interested in casting the wrapped datum, so this commit adds the unwrapping to the top of `performCast`. Release note: None

**colexec: introduce streaming mode to the columnarizer** Previously, the columnarizer would always buffer up tuples (dynamically, up to `coldata.BatchSize()`) before emitting them as output; however, several processors are of a "streaming" nature, and such a buffering approach is not compatible with wrapping them. This commit introduces a streaming mode of operation to the columnarizer which is used when wrapping a streaming processor (a newly introduced marker interface that is currently implemented by `planNodeToRowSource` and the changefeed processors). Note that there are still some unresolved issues around the changefeed processors, so they aren't being wrapped into the vectorized flows. Release note: None

**colexec: add support for wrapping LocalPlanNode** This commit adds support for wrapping the `LocalPlanNode` core. It also changes the meaning of `experimental_always` to allow wrapping of the JoinReader and LocalPlanNode cores; the latter allows us to remove VectorizeAlwaysException. A notable change is that materializers created when wrapping row-execution processors are no longer added to the set of releasables at flow cleanup, because in some cases such a materializer could be released before being closed. Namely, this would occur if we have a subquery with a LocalPlanNode core and a materializer is added in order to wrap that core: all releasables are put back into their pools upon the subquery's flow cleanup, yet the subquery planNode tree isn't closed yet, since its closure is done when the main planNode tree is being closed.

Another change is fixing the tracing in the cFetcher during mutations (we were accessing public-only columns whereas we should have been accessing deletable ones; that's what row.Fetcher does in the two places that needed a fix). Addresses: #57268. Release note: None

57950: roachtest: enable vectorized panic injection in sqlsmith r=yuzefovich a=yuzefovich Fixes: #57880 Release note: None

57989: bench/ddl_analysis: skip flaky test r=knz a=ajwerner Touches #57771. Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
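The streaming-mode change above hinges on a marker-interface pattern: the columnarizer checks whether the processor it wraps is of a "streaming" nature and, if so, skips buffering a full batch. The following is a minimal standalone sketch of that dispatch, not the actual CockroachDB code; the `execStreaming` interface name and `batchSizeFor` helper are illustrative stand-ins.

```go
package main

import "fmt"

// execStreaming is a hypothetical stand-in for the marker interface the
// commit describes: processors of a "streaming" nature implement it so the
// columnarizer knows not to buffer up tuples before emitting output.
type execStreaming interface {
	IsStreaming()
}

// planNodeToRowSource models one of the streaming processors named above.
type planNodeToRowSource struct{}

func (planNodeToRowSource) IsStreaming() {}

// hashJoiner models an ordinary (buffering) processor.
type hashJoiner struct{}

// batchSizeFor sketches the mode selection: a streaming processor gets an
// effective buffer of one row, while others buffer up to the full batch size.
func batchSizeFor(p interface{}, coldataBatchSize int) int {
	if _, ok := p.(execStreaming); ok {
		return 1
	}
	return coldataBatchSize
}

func main() {
	fmt.Println(batchSizeFor(planNodeToRowSource{}, 1024)) // streaming mode
	fmt.Println(batchSizeFor(hashJoiner{}, 1024))          // buffering mode
}
```

The point of the marker interface is that the columnarizer needs no changefeed- or planNode-specific knowledge; any future streaming processor opts in by implementing the interface.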
60630: roachtest: run more queries in tpcdsvec r=yuzefovich a=yuzefovich Several queries in `tpcdsvec` have been skipped because they included `LocalPlanNode` processors in the plan which, until recently, couldn't be wrapped into the vectorized flow. As a result, there was no point in running those queries with different vectorize settings; but now we can wrap such processors, so this commit unskips almost all queries in question. Additionally, since we now use the roachtests from the corresponding branch on old releases, we no longer need to keep additional skip lists for old branches, so this commit cleans up the code a bit too. Release note: None

60642: colbuilder: allow wrapping of more processors r=yuzefovich a=yuzefovich Several processors have recently been refactored or introduced, and all of them implement the `RowSource` interface (by embedding `ProcessorBase`), so all of them can be wrapped into the vectorized flow. Addresses: #57268. Release note: None

60644: kvserver: small improvement to propBuf tests r=andreimatei a=andreimatei Assign a context to test proposals. High-verbosity logs freak out if this field is nil. Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Andrei Matei <[email protected]>
…83526 #83533 #83548 #83559 #83576 #83588

57838: execinfra: remove MetadataTest* processors r=yuzefovich a=yuzefovich This commit removes the `MetadataTestSender` and `MetadataTestReceiver` processors since they no longer provide much value. I believe they were introduced when we added `ProducerMetadata` as a return parameter of the `Next` method, in order to ensure that at least some artificial metadata is propagated correctly throughout the whole flow. The main goal of this commit is the removal of the `fakedist-metadata` and `5node-metadata` logic test configs in order to speed up CI time. The justification for removing these processors without putting anything in their place is that they are not that useful: the only thing they can test is that *some* metadata is propagated through the row-based flows. Note that they don't test whether all necessary metadata is emitted (for example, `LeafTxnFinalState`). We've been using the vectorized engine as the default for a while now, and these processors don't get planned with the vectorized flows. Thus, it seems silly to have a logic test config like `fakedist-metadata` as part of the default configs. Addresses: #57268. Release note: None

78104: kvserver: include MVCC range keys in replica consistency checks r=aliher1911 a=erikgrinaker This patch adds handling of MVCC range keys in replica consistency checks. These are iterated over as part of `MVCCStats` calculations and hashed similarly to point keys. Range keys will only exist after the version gate `ExperimentalMVCCRangeTombstones` has been enabled, so a separate version gate is not necessary. Release note: None

81880: changefeedccl: fix changefeed telemetry resolved r=samiskin a=samiskin Resolves #81599 Our internal-use changefeed create/failed telemetry events would incorrectly record "yes" for any non-no value of resolved, rather than using the value that was passed in. This change resolves that, emitting "yes" for "resolved" and the value itself for "resolved=<value>". Release note: None

82686: sql: support ttl_expiration_expression for row-level TTL r=otan a=ecwall refs #76916 Release note (sql change): Allow `CREATE TABLE ... WITH (ttl_expiration_expression='...')`. Allow `ALTER TABLE ... SET (ttl_expiration_expression='...')` and `ALTER TABLE ... RESET (ttl_expiration_expression)`. `ttl_expiration_expression` accepts an expression that returns a timestamp, to support custom TTL calculation.

82885: streamingccl: minor error style cleanups r=miretskiy,HonoreDB a=stevendanna Capitalized error messages are rare in the code base, so I've made these more consistent with the rest of the code base. Release note: None

83004: roachprod: add prometheus/grafana monitoring r=irfansharif a=msbutler Previously, only roachtests could spin up prom/grafana servers that lasted the lifetime of the roachtest. This PR introduces new roachprod commands that allow a roachprod user to easily spin up/down their own prom/grafana instances. The PR also hooks up roachtests that rely on prom/grafana into this new infrastructure. Release note: None

83027: sql: add plan gist to sampled query telemetry log r=THardy98 a=THardy98 Partially resolves: #71328 This change adds a plan gist field to the sampled query telemetry log. The plan gist is written as a base64-encoded string. Release note (sql change): The sampled query telemetry log now includes a plan gist field. The plan gist field provides a compact representation of a logical plan for the sampled query; the field is written as a base64-encoded string.

83445: roachtest: skip decommissionBench/nodes=8/cpu=16/warehouses=3000 r=erikgrinaker a=tbg #82870 Release note: None

83505: streamingccl: deflake TestPartitionedStreamReplicationClient r=miretskiy a=stevendanna Previously, this test would fail occasionally with:
```
partitioned_stream_client_test.go:192:
    Error Trace: partitioned_stream_client_test.go:192
    Error:       Target error should be in err chain:
                 expected: "context canceled"
                 in chain: "pq: query execution canceled"
    Test:        TestPartitionedStreamReplicationClient
```
This is a result of the lib/pq library asynchronously sending a CancelRequest when it notices the context cancellation. The cancel request may result in the query ending before the local context cancellation produces an error. We now count this query cancellation error as an acceptable response, since the goal of the test is to assert that we respond to context cancellation. Fixes #76919 Release note: None

83526: backupccl: create tree.SystemUsers, a new DescriptorCoverage enum r=adityamaru a=msbutler Previously, during planning and execution, RESTORE SYSTEM USERS was identified by a `jobDetails` field. This refactor now identifies this flavor of restore with a new DescriptorCoverage enum value, `tree.SystemUsers`. This refactor eases the logic around exposing extra processing steps for flavors of backup/restore that target different sets of descriptors. Release note: None

83533: amazon: add custom retryer to retry on `read: connection reset` r=miretskiy,stevendanna a=adityamaru This change implements a custom retryer that we use when initializing a new s3 client for interaction with the external storage sink. This change was motivated by the increased number of backup job failures we were observing with a `read: connection reset` error being thrown by s3. A read connection reset error is thrown when the SDK is unable to read the response of an underlying API request due to a connection reset. The DefaultRetryer in the AWS SDK does not treat this error as retryable, since the SDK does not have knowledge about the idempotence of the request and whether it is safe to retry - aws/aws-sdk-go#2926 (comment). In CRDB all operations with s3 (read, write, list) are considered idempotent, so we can treat the read connection reset error as retryable too. Release note (bug fix): Retry s3 operations when they error out with a read connection reset error instead of failing the top-level job.

83548: changefeedccl: support more stable functions r=miretskiy a=miretskiy Add support for additional stable functions to CDC expressions. Fixes #83466 Release note: None

83559: sql: ensure that the plan is closed in apply joins in some error cases r=yuzefovich a=yuzefovich Previously, it was possible for the apply join's plan to be left unclosed when an error was encountered during the physical planning of the main query; this has now been fixed by explicitly closing the plan in such a scenario. Fixes: #82705. Fixes: #83368. Release note: None

83576: streamingccl: re-enable TestRandomClientGeneration r=miretskiy a=stevendanna TestRandomClientGeneration was skipped in #61292 as a flake. However, in the time since then, other changes in this code broke this test more completely. Re-enabling the test required a few unrelated changes:
- The stream ingestion processor required a fully formed job to be able to poll the cutover time. Now, test code can set a cutoverProvider that doesn't depend on a full job record.
- The request intercepting depended on an explicit client being set; this test was instead passing the processor a randgen URI. Now we pass the client explicitly and also update the test code to make it clear that the stream URI isn't actually used for anything.
- The code was attempting to validate the number of rows using SQL. I haven't dug into how this was working in the past, but as we are connecting to the host tenant and the keys are being ingested to a guest tenant, we would need a connection to the guest tenant to validate the table data. I've simply removed this assertion, since I don't think it was testing very much compared to the KV-level assertions also used in the test.
- The test code assumed that the partitions were keyed based on the subscription token rather than the subscription ID.

It isn't clear what the original source of the flakiness was. However, the test has run a few hundred times under stress without issue. Alternatively, we could just delete this test. Fixes #61287 Release note: None

83588: tree: DROP ROLE should not implement CCLOnlyStatement r=rafiss a=rafiss This was moved out of CCL licensing a few releases ago. Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Shiranka Miskin <[email protected]>
Co-authored-by: Evan Wall <[email protected]>
Co-authored-by: Steven Danna <[email protected]>
Co-authored-by: Michael Butler <[email protected]>
Co-authored-by: Thomas Hardy <[email protected]>
Co-authored-by: Tobias Grieger <[email protected]>
Co-authored-by: Aditya Maru <[email protected]>
Co-authored-by: Yevgeniy Miretskiy <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
We have marked this issue as stale because it has been inactive for an extended period.
I don't think we want to do this anymore, so closing as won't fix.
Currently, we have native support for or can wrap many processor core types; however, there are some exceptions that prohibit the vectorization of the whole flow if the "bad" core is present in the physical plan. This issue tracks addressing all of those.
- `LocalPlanNode`
- `MetadataTestSender` and `MetadataTestReceiver`
- `ChangeAggregator` and `ChangeFrontier`
- `InvertedFilterer` - relies on the `EncDatum.encoded` field. opt: change the key column of inverted indexes to use a new column id and type #50695 was the original issue filed for it; it is now closed, yet I think the problem is still present.
- `Backfiller`, `CSVWriter`, `Sampler`, `SampleAggregator` - these do not implement the `execinfra.RowSource` interface, so they cannot be wrapped. They, however, use the DistSQL framework but are never - I think - planned in the flows containing other supported processors. Stats operators work is tracked in sql: implement vectorized operator for table statistics collection #41203.

Jira issue: CRDB-2842