diff --git a/_config_base.yml b/_config_base.yml
index f3277939214..cbc1cbb8121 100644
--- a/_config_base.yml
+++ b/_config_base.yml
@@ -138,12 +138,12 @@ release_info:
    start_time: 2022-10-13 17:45:03.909127 +0000 UTC
    version: v21.2.17
  v22.1:
-    build_time: 2023-04-24 00:00:00 (go1.19)
+    build_time: 2023-05-12 00:00:00 (go1.19)
    crdb_branch_name: release-22.1
    docker_image: cockroachdb/cockroach
-    name: v22.1.19
-    start_time: 2023-04-24 14:56:42.617860 +0000 UTC
-    version: v22.1.19
+    name: v22.1.20
+    start_time: 2023-05-10 12:58:20.269520 +0000 UTC
+    version: v22.1.20
  v22.2:
    build_time: 2023-05-08 00:00:00 (go1.19)
    crdb_branch_name: release-22.2
diff --git a/_data/releases.yml b/_data/releases.yml
index 88098cbfa48..cfb2ecec35d 100644
--- a/_data/releases.yml
+++ b/_data/releases.yml
@@ -4333,4 +4333,25 @@
    docker_image: cockroachdb/cockroach
    docker_arm: true
  source: true
-  previous_release: v23.1.0-rc.2
\ No newline at end of file
+  previous_release: v23.1.0-rc.2
+
+- release_name: v22.1.20
+  major_version: v22.1
+  release_date: '2023-05-12'
+  release_type: Production
+  go_version: go1.19
+  sha: c091e9bdfdff6fd6888a2c514b78e57abbb6119d
+  has_sql_only: true
+  has_sha256sum: true
+  mac:
+    mac_arm: false
+  windows: true
+  linux:
+    linux_arm: false
+    linux_intel_fips: false
+    linux_arm_fips: false
+  docker:
+    docker_image: cockroachdb/cockroach
+    docker_arm: false
+  source: true
+  previous_release: v22.1.19
diff --git a/_includes/releases/cloud/2023-05-10.md b/_includes/releases/cloud/2023-05-10.md
new file mode 100644
index 00000000000..e323354223c
--- /dev/null
+++ b/_includes/releases/cloud/2023-05-10.md
@@ -0,0 +1,5 @@
+## May 10, 2023
+
TSQUERY",
+ "urls": [
+ "/${VERSION}/tsquery.html"
+ ]
+ },
+ {
+ "title": "TSVECTOR",
+ "urls": [
+ "/${VERSION}/tsvector.html"
+ ]
+ },
{
"title": "UUID",
"urls": [
diff --git a/_includes/v23.1/sql/crdb-internal-is-not-supported-for-production-use.md b/_includes/v23.1/sql/crdb-internal-is-not-supported-for-production-use.md
new file mode 100644
index 00000000000..59f0764e51a
--- /dev/null
+++ b/_includes/v23.1/sql/crdb-internal-is-not-supported-for-production-use.md
@@ -0,0 +1 @@
+Many of the tables in the `crdb_internal` system catalog are **not supported for external use in production**. This output is provided **as a debugging aid only**. The output of particular `crdb_internal` facilities may change from patch release to patch release without advance warning. For more information, see [the `crdb_internal` documentation](crdb-internal.html).
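
As a quick illustration of this debugging-only intent, the following sketch queries one `crdb_internal` table interactively; the table and columns shown here are illustrative, and their shape may change between patch releases.

~~~ sql
-- Inspect running jobs for ad-hoc debugging only;
-- do not build production tooling against this output.
SELECT job_id, job_type, status
FROM crdb_internal.jobs
LIMIT 10;
~~~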
diff --git a/_includes/v23.1/sql/select-for-update-overview.md b/_includes/v23.1/sql/select-for-update-overview.md
index ed5a90ac36a..cf545a03721 100644
--- a/_includes/v23.1/sql/select-for-update-overview.md
+++ b/_includes/v23.1/sql/select-for-update-overview.md
@@ -6,7 +6,7 @@ Because this queueing happens during the read operation, the [thrashing](https:/
As a result, using `SELECT FOR UPDATE` leads to increased throughput and decreased tail latency for contended operations.
-Note that using `SELECT FOR UPDATE` does not completely eliminate the chance of [serialization errors](transaction-retry-error-reference.html), which use the `SQLSTATE` error code `40001`, and emit error messages with the string `restart transaction`. These errors can also arise due to [time uncertainty](architecture/transaction-layer.html#transaction-conflicts). To eliminate the need for application-level retry logic, in addition to `SELECT FOR UPDATE` your application also needs to use a [driver that implements automatic retry handling](transactions.html#client-side-intervention).
+Note that using `SELECT FOR UPDATE` does not completely eliminate the chance of [serialization errors](transaction-retry-error-reference.html), which use the `SQLSTATE` error code `40001`, and emit error messages with the string `restart transaction`. These errors can also arise due to [time uncertainty](architecture/transaction-layer.html#transaction-conflicts). To eliminate the need for application-level retry logic, in addition to `SELECT FOR UPDATE` your application also needs to use a [driver that implements automatic retry handling](transaction-retry-error-reference.html#client-side-retry-handling).
CockroachDB does not support the `FOR SHARE` or `FOR KEY SHARE` [locking strengths](select-for-update.html#locking-strengths).
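
A minimal sketch of the pattern described above, assuming a hypothetical `kv` table: the `SELECT ... FOR UPDATE` locks the row up front, so a concurrent writer queues behind this transaction instead of conflicting with it at commit time.

~~~ sql
BEGIN;
  -- Acquire the row lock first; concurrent writers now wait here.
  SELECT v FROM kv WHERE k = 1 FOR UPDATE;
  UPDATE kv SET v = v + 5 WHERE k = 1;
COMMIT;
~~~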
diff --git a/_includes/v23.1/sql/show-ranges-output-deprecation-notice.md b/_includes/v23.1/sql/show-ranges-output-deprecation-notice.md
new file mode 100644
index 00000000000..55696660dfb
--- /dev/null
+++ b/_includes/v23.1/sql/show-ranges-output-deprecation-notice.md
@@ -0,0 +1,16 @@
+The statement syntax and output documented on this page use the updated `SHOW RANGES` statement, which **will become the default in CockroachDB v23.2**. To enable this syntax and output, set the [cluster setting `sql.show_ranges_deprecated_behavior.enabled`](cluster-settings.html#setting-sql-show-ranges-deprecated-behavior-enabled) to `false`:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING sql.show_ranges_deprecated_behavior.enabled = false;
+~~~
+
+The pre-v23.1 output of `SHOW RANGES` is deprecated in v23.1 **and will be removed in v23.2**. To view the documentation for the deprecated version of the `SHOW RANGES` statement, see [`SHOW RANGES` (v22.2)](../v22.2/show-ranges.html).
+
+When you use the deprecated version of the `SHOW RANGES` statement, the following message will appear, reminding you to update [the cluster setting](cluster-settings.html#setting-sql-show-ranges-deprecated-behavior-enabled):
+
+~~~
+NOTICE: attention! the pre-23.1 behavior of SHOW RANGES and crdb_internal.ranges{,_no_leases} is deprecated!
+HINT: Consider enabling the new functionality by setting 'sql.show_ranges_deprecated_behavior.enabled' to 'false'.
+The new SHOW RANGES statement has more options. Refer to the online documentation or execute 'SHOW RANGES ??' for details.
+~~~
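
Once the cluster setting is switched to `false`, the updated statement can be used directly. The following is a sketch only; the `movr.users` table and the `WITH DETAILS` option are used for illustration:

~~~ sql
SHOW RANGES FROM TABLE movr.users WITH DETAILS;
~~~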
diff --git a/_includes/v23.1/sql/unsupported-postgres-features.md b/_includes/v23.1/sql/unsupported-postgres-features.md
index 3838cd2e5ee..59fb0f240b1 100644
--- a/_includes/v23.1/sql/unsupported-postgres-features.md
+++ b/_includes/v23.1/sql/unsupported-postgres-features.md
@@ -2,8 +2,6 @@
- CockroachDB has support for [user-defined functions](user-defined-functions.html).
- Triggers.
- Events.
-- `FULLTEXT` functions and indexes.
- - Depending on your use case, you may be able to get by using [trigram indexes](trigram-indexes.html) to do fuzzy string matching and pattern matching.
- Drop primary key.
{{site.data.alerts.callout_info}}
diff --git a/_includes/v23.1/sql/use-case-trigram-indexes.md b/_includes/v23.1/sql/use-case-trigram-indexes.md
new file mode 100644
index 00000000000..b3c55364634
--- /dev/null
+++ b/_includes/v23.1/sql/use-case-trigram-indexes.md
@@ -0,0 +1 @@
+Depending on your use case, you may prefer to use [trigram indexes](trigram-indexes.html) for fuzzy string matching and pattern matching. For more information about use cases where trigram indexes can make full-text search unnecessary, see the 2022 blog post [Use cases for trigram indexes (when not to use Full Text Search)](https://www.cockroachlabs.com/blog/use-cases-trigram-indexes/).
\ No newline at end of file
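
A minimal sketch of the trigram approach, assuming a hypothetical `users` table; the GIN index with the `gin_trgm_ops` opclass can accelerate similarity matching with `%` and case-insensitive pattern matching with `ILIKE`:

~~~ sql
CREATE TABLE users (id INT PRIMARY KEY, name STRING);
CREATE INDEX users_name_trgm_idx ON users USING GIN (name gin_trgm_ops);

-- Fuzzy (similarity) match and pattern match, both of which can use the trigram index.
SELECT name FROM users WHERE name % 'Jon Snw';
SELECT name FROM users WHERE name ILIKE '%sno%';
~~~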
diff --git a/_includes/v23.1/ui/active-transaction-executions.md b/_includes/v23.1/ui/active-transaction-executions.md
index 6eab121de0f..e6a55112c47 100644
--- a/_includes/v23.1/ui/active-transaction-executions.md
+++ b/_includes/v23.1/ui/active-transaction-executions.md
@@ -33,7 +33,7 @@ The transaction execution details page provides the following details on the tra
- **Most Recent Statement Execution ID**: Link to the ID of the most recently [executed statement](ui-statements-page.html#active-executions-table) in the transaction.
- **Session ID**: Link to the ID of the [session](ui-sessions-page.html) in which the transaction is running.
-If a transaction execution is waiting, the transaction execution details are followed by Contention Insights and details of the transaction execution on which the blocked transaction execution is waiting. For more information about contention, see [Understanding and avoiding transaction contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).
+If a transaction execution is waiting, the transaction execution details are followed by Contention Insights and details of the transaction execution on which the blocked transaction execution is waiting. For more information about contention, see [Transaction contention]({{ link_prefix }}performance-best-practices-overview.html#transaction-contention).
diff --git a/_includes/v23.1/ui/insights.md b/_includes/v23.1/ui/insights.md
index fe752f4b848..ad84fe59643 100644
--- a/_includes/v23.1/ui/insights.md
+++ b/_includes/v23.1/ui/insights.md
@@ -21,7 +21,7 @@ The rows in this page are populated from the [`crdb_internal.transaction_content
- The default tracing behavior captures a small percent of transactions so not all contention events will be recorded. When investigating [transaction contention]({{ link_prefix }}performance-best-practices-overview.html#transaction-contention), you can set the [`sql.trace.txn.enable_threshold` cluster setting]({{ link_prefix }}cluster-settings.html#setting-sql-trace-txn-enable-threshold) to always capture contention events.
{{site.data.alerts.end}}
-Transaction executions with the **High Contention** insight are transactions that experienced [contention]({{ link_prefix }}transactions.html#transaction-contention).
+Transaction executions with the **High Contention** insight are transactions that experienced [contention]({{ link_prefix }}performance-best-practices-overview.html#transaction-contention).
{% if page.cloud != true -%}
The following screenshot shows the execution of a transaction flagged with **High Contention**:
@@ -84,8 +84,7 @@ To display this view, click **Insights** in the left-hand navigation of the Clou
The rows in this page are populated from the [`crdb_internal.cluster_execution_insights`]({{ link_prefix }}crdb-internal.html) table.
- The results displayed on the **Statement Executions** view will be available as long as the number of rows in each node is less than the [`sql.insights.execution_insights_capacity` cluster setting]({{ link_prefix }}cluster-settings.html#setting-sql-insights-execution-insights-capacity).
-- The default tracing behavior enables captures a small percent of transactions so not all [contention]({{ link_prefix }}performance-best-practices-overview.html#transaction-contention) events will be recorded. When investigating query latency, you can set the [`sql.trace.txn.enable_threshold` cluster setting]({{ link_prefix }}cluster-settings.html#setting-sql-trace-txn-enable-threshold) to always capture contention events.
-
+- {% include {{ page.version.version }}/performance/sql-trace-txn-enable-threshold.md %}
{{site.data.alerts.end}}
{% if page.cloud != true -%}
diff --git a/_includes/v23.1/ui/sessions.md b/_includes/v23.1/ui/sessions.md
index 80577b00644..402e42d0ada 100644
--- a/_includes/v23.1/ui/sessions.md
+++ b/_includes/v23.1/ui/sessions.md
@@ -35,7 +35,7 @@ Actions | Options to cancel the active statement and cancel the session. These r
To view details of a session, click a **Session Start Time (UTC)** to display session details.
-## Session details
+## Session Details
If a session is idle, the **Transaction** and **Most Recent Statement** panels will display **No Active [Transaction | Statement]**.
diff --git a/advisories/a102375.md b/advisories/a102375.md
new file mode 100644
index 00000000000..96013618531
--- /dev/null
+++ b/advisories/a102375.md
@@ -0,0 +1,49 @@
+---
+title: Technical Advisory 102375
+advisory: A-102375
+summary: Some customers may experience spurious privilege errors when trying to run queries due to a bug in the query cache.
+toc: true
+affected_versions: v22.1.19 and v22.2.8
+advisory_date: 2023-05-11
+docs_area: releases
+---
+
+Publication date: {{ page.advisory_date | date: "%B %e, %Y" }}
+
+## Description
+
+In CockroachDB versions v22.1.19 and v22.2.8, some customers may experience spurious [privilege](../v22.2/security-reference/authorization.html#privileges) errors when trying to run queries due to a bug in the query cache. This can happen if two or more databases exist on the same cluster with tables that have the same name and at least one [foreign key reference](../v22.2/foreign-key.html). If identical queries are used to query the tables in the two different databases by users with different permissions, they may experience errors due to insufficient privileges.
+
+## Statement
+
+This is resolved in CockroachDB by PR [#102405](https://github.com/cockroachdb/cockroach/issues/102405), which ensures that privilege checks happen after staleness checks when attempting to use the query cache.
+
+The fix has been applied to the maintenance release of CockroachDB [v22.2.9](../releases/v22.2.html#v22-2-9).
+
+This fix will be applied to the maintenance release of CockroachDB v22.1.20.
+
+This public issue is tracked by [#102375](https://github.com/cockroachdb/cockroach/issues/102375).
+
+## Mitigation
+
+Users of CockroachDB v22.1.19 and v22.2.8 who experience spurious [privilege](../v22.2/security-reference/authorization.html#privileges) errors with the query cache enabled are encouraged to upgrade to v22.1.20, v22.2.9, or a later version.
+
+If an upgrade is not possible, the issue can be avoided by updating the SQL queries to qualify table names with the database name so there is no collision in the query cache. For example, `SELECT * FROM table_name` can be rewritten using [partially qualified](../v22.2/sql-name-resolution.html#lookup-with-partially-qualified-names) or [fully qualified](../v22.2/sql-name-resolution.html#lookup-with-fully-qualified-names) names as follows:
+
+- `SELECT * FROM database_name.table_name`
+- `SELECT * FROM database_name.schema_name.table_name`
+
+Another option, if an upgrade is not possible, is to disable the query cache with the following command:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING sql.query_cache.enabled = false;
+~~~
+
+However, disabling the query cache may degrade the performance of the cluster.
+
+## Impact
+
+Some customers running identical queries with different roles to access tables with the same name in different databases could experience spurious [privilege](../v22.2/security-reference/authorization.html#privileges) errors on CockroachDB v22.1.19 and v22.2.8 with the query cache enabled.
+
+Please reach out to the [support team](https://support.cockroachlabs.com) if more information or assistance is needed.
diff --git a/cockroachcloud/egress-perimeter-controls.md b/cockroachcloud/egress-perimeter-controls.md
index 43e312d58c3..25583488925 100644
--- a/cockroachcloud/egress-perimeter-controls.md
+++ b/cockroachcloud/egress-perimeter-controls.md
@@ -7,10 +7,6 @@ docs_area: security
cloud: true
---
-{{site.data.alerts.callout_info}}
-{% include_cached feature-phases/limited-access.md %}
-{{site.data.alerts.end}}
-
This page describes how Egress Perimeter Controls can enhance the security of {{ site.data.products.dedicated }} clusters, and gives an overview of how to manage a cluster's egress rules.
## Why use Egress Perimeter Controls
diff --git a/jekyll-algolia-dev/lib/jekyll/algolia/indexer.rb b/jekyll-algolia-dev/lib/jekyll/algolia/indexer.rb
index 500b4711a1b..b22c608e7cd 100644
--- a/jekyll-algolia-dev/lib/jekyll/algolia/indexer.rb
+++ b/jekyll-algolia-dev/lib/jekyll/algolia/indexer.rb
@@ -339,6 +339,12 @@ def self.update_synonyms
synonyms: ['schema conversion tool', 'sct']
}, false)
+ index.save_synonym('full text search', {
+ objectID: 'full text search',
+ type: 'synonym',
+ synonyms: ['full text search', 'fts']
+ }, false)
+
return
end
diff --git a/v22.2/cockroachdb-feature-availability.md b/v22.2/cockroachdb-feature-availability.md
index f4e1f2a6c13..b3bbaf00450 100644
--- a/v22.2/cockroachdb-feature-availability.md
+++ b/v22.2/cockroachdb-feature-availability.md
@@ -14,7 +14,7 @@ This page outlines _feature availability_, which is separate from Cockroach Labs
## Feature availability phases
-Phase | Definition | Accessibility
+Phase | Definition | Accessibility
----------------------------------------------+------------+-------------
Private preview | Feature is not production-ready and will not be publicly documented. | Invite-only
[Limited access](#features-in-limited-access) | Feature is production-ready but not available widely because of known limitations and/or because capabilities may change or be added based on feedback. | Opt-in Contact your Cockroach Labs account team.
@@ -31,18 +31,6 @@ General availability (GA) | Feature is production-ready and
{{ site.data.products.dedicated }} users can use the [Cloud API](../cockroachcloud/cloud-api.html) to configure [log export](../cockroachcloud/export-logs.html) to [AWS CloudWatch](https://aws.amazon.com/cloudwatch/) or [GCP Cloud Logging](https://cloud.google.com/logging). Once the export is configured, logs will flow from all nodes in all regions of your {{ site.data.products.dedicated }} cluster to your chosen cloud log sink. You can configure log export to redact sensitive log entries, limit log output by severity, and send log entries to specific log group targets by log channel, among others.
-### Customer-Managed Encryption Keys (CMEK) on {{ site.data.products.dedicated }}
-
-[Customer-Managed Encryption Keys (CMEK)](../cockroachcloud/cmek.html) allow you to protect data at rest in a {{ site.data.products.dedicated }} cluster using a cryptographic key that is entirely within your control, hosted in a supported key-management system (KMS) platform.
-
-### Egress perimeter controls for {{ site.data.products.dedicated }}
-
-[Egress Perimeter Controls](../cockroachcloud/egress-perimeter-controls.html) can enhance the security of {{ site.data.products.dedicated }} clusters by enabling cluster administrators to restrict egress to a list of specified external destinations. This adds a strong layer of protection against malicious or accidental data exfiltration.
-
-### Private {{ site.data.products.dedicated }} clusters
-
-Limiting access to a CockroachDB cluster's nodes over the public internet is an important security practice and is also a compliance requirement for many organizations. [{{ site.data.products.dedicated }} private clusters](../cockroachcloud/private-clusters.html) allow organizations to meet this objective. A private {{ site.data.products.dedicated }} cluster's nodes have no public IP addresses, and egress traffic moves over private subnets and through a highly-available NAT gateway that is unique to the cluster.
-
### Export Cloud Organization audit logs (Cloud API)
{{ site.data.products.db }} captures audit logs when many types of events occur, such as when a cluster is created or when a user is added to or removed from an organization. Any user in an organization with an admin-level service account can [export these audit logs](../cockroachcloud/cloud-org-audit-logs.html) using the [`auditlogevents` endpoint](../cockroachcloud/cloud-api.html#cloud-audit-logs) of the [Cloud API](../cockroachcloud/cloud-api.html).
@@ -144,7 +132,7 @@ CockroachDB supports [altering the column types](alter-table.html#alter-column-d
[Temporary tables](temporary-tables.html), [temporary views](views.html#temporary-views), and [temporary sequences](create-sequence.html#temporary-sequences) are in preview in CockroachDB. If you create too many temporary objects in a session, the performance of DDL operations will degrade. Performance limitations could persist long after creating the temporary objects. For more details, see [cockroachdb/cockroach#46260](https://github.com/cockroachdb/cockroach/issues/46260).
-To enable temporary objects, set the `experimental_enable_temp_tables` [session variable](show-vars.html) to `on`.
+To enable temporary objects, set the `experimental_enable_temp_tables` [session variable](show-vars.html) to `on`.
### Password authentication without TLS
@@ -187,7 +175,7 @@ Use a [webhook sink](changefeed-sinks.html#webhook-sink) to deliver changefeed m
### Change data capture transformations
-[Change data capture transformations](cdc-transformations.html) allow you to define the change data emitted to your sink when you create a changefeed. The expression syntax provides a way to select columns and apply filters to further restrict or transform the data in your [changefeed messages](changefeed-messages.html).
+[Change data capture transformations](cdc-transformations.html) allow you to define the change data emitted to your sink when you create a changefeed. The expression syntax provides a way to select columns and apply filters to further restrict or transform the data in your [changefeed messages](changefeed-messages.html).
### External connections
diff --git a/v23.1/alter-range.md b/v23.1/alter-range.md
index b64d6c23654..5f770cb3f07 100644
--- a/v23.1/alter-range.md
+++ b/v23.1/alter-range.md
@@ -115,7 +115,7 @@ SELECT store_id FROM crdb_internal.kv_store_status;
#### Find range ID and leaseholder information
-To use `ALTER RANGE ... RELOCATE`, you need to know how to find the range ID, leaseholder, and other information for a [table](show-ranges.html#show-ranges-for-a-table-primary-index), [index](show-ranges.html#show-ranges-for-an-index), or [database](show-ranges.html#show-ranges-for-a-database). You can find this information using the [`SHOW RANGES`](show-ranges.html) statement.
+To use `ALTER RANGE ... RELOCATE`, you need to know how to find the range ID, leaseholder, and other information for a [table](show-ranges.html#show-ranges-for-a-table), [index](show-ranges.html#show-ranges-for-an-index), or [database](show-ranges.html#show-ranges-for-a-database). You can find this information using the [`SHOW RANGES`](show-ranges.html) statement.
For example, to get all range IDs, leaseholder store IDs, and leaseholder localities for the [`movr.users`](movr.html) table, use the following query:
diff --git a/v23.1/architecture/storage-layer.md b/v23.1/architecture/storage-layer.md
index a4510039a66..5768dcf5104 100644
--- a/v23.1/architecture/storage-layer.md
+++ b/v23.1/architecture/storage-layer.md
@@ -154,8 +154,8 @@ CockroachDB regularly garbage collects MVCC values to reduce the size of data st
Garbage collection can only run on MVCC values which are not covered by a *protected timestamp*. The protected timestamp subsystem exists to ensure the safety of operations that rely on historical data, such as:
-- [Backups](../backup.html)
-- [Changefeeds](../change-data-capture-overview.html)
+- [Backups](../create-schedule-for-backup.html#protected-timestamps-and-scheduled-backups)
+- [Changefeeds](../changefeed-messages.html#garbage-collection-and-changefeeds)
Protected timestamps ensure the safety of historical data while also enabling shorter [GC TTLs](../configure-replication-zones.html#gc-ttlseconds). A shorter GC TTL means that fewer previous MVCC values are kept around. This can help lower query execution costs for workloads which update rows frequently throughout the day, since [the SQL layer](sql-layer.html) has to scan over previous MVCC values to find the current value of a row.
@@ -165,6 +165,8 @@ Protected timestamps work by creating *protection records*, which are stored in
Upon successful creation of a protection record, the MVCC values for the specified data at timestamps less than or equal to the protected timestamp will not be garbage collected. When the job that created the protection record finishes its work, it removes the record, allowing the garbage collector to run on the formerly protected values.
+For further detail on protected timestamps, see the Cockroach Labs blog post [Protected Timestamps: For a future with less garbage](https://www.cockroachlabs.com/blog/protected-timestamps-for-less-garbage/).
+
## Interactions with other layers
### Storage and replication layers
diff --git a/v23.1/architecture/transaction-layer.md b/v23.1/architecture/transaction-layer.md
index c997cdda27d..ae6a3885596 100644
--- a/v23.1/architecture/transaction-layer.md
+++ b/v23.1/architecture/transaction-layer.md
@@ -105,7 +105,7 @@ Whenever a write occurs, its timestamp is checked against the timestamp cache. I
### Closed timestamps
-Each CockroachDB range tracks a property called its _closed timestamp_, which means that no new writes can ever be introduced at or below that timestamp. The closed timestamp is advanced continuously on the leaseholder, and lags the current time by some target interval. As the closed timestamp is advanced, notifications are sent to each follower. If a range receives a write at a timestamp less than or equal to its closed timestamp, the write is forced to change its timestamp, which might result in a transaction retry error (see [read refreshing](#read-refreshing)).
+Each CockroachDB range tracks a property called its _closed timestamp_, which means that no new writes can ever be introduced at or below that timestamp. The closed timestamp is advanced continuously on the leaseholder, and lags the current time by some target interval. As the closed timestamp is advanced, notifications are sent to each follower. If a range receives a write at a timestamp less than or equal to its closed timestamp, the write is forced to change its timestamp, which might result in a [transaction retry error](../transaction-retry-error-reference.html) (see [read refreshing](#read-refreshing)).
In other words, a closed timestamp is a promise by the range's [leaseholder](replication-layer.html#leases) to its follower replicas that it will not accept writes below that timestamp. Generally speaking, the leaseholder continuously closes timestamps a few seconds in the past.
@@ -190,7 +190,7 @@ For more details about how the concurrency manager works with the latch manager
#### Concurrency manager
- The concurrency manager is a structure that sequences incoming requests and provides isolation between the transactions that issued those requests that intend to perform conflicting operations. During sequencing, conflicts are discovered and any found are resolved through a combination of passive queuing and active pushing. Once a request has been sequenced, it is free to evaluate without concerns of conflicting with other in-flight requests due to the isolation provided by the manager. This isolation is guaranteed for the lifetime of the request but terminates once the request completes.
+The concurrency manager is a structure that sequences incoming requests and provides isolation between the transactions that issued those requests that intend to perform conflicting operations. During sequencing, conflicts are discovered and any found are resolved through a combination of passive queuing and active pushing. Once a request has been sequenced, it is free to evaluate without concerns of conflicting with other in-flight requests due to the isolation provided by the manager. This isolation is guaranteed for the lifetime of the request but terminates once the request completes.
Each request in a transaction should be isolated from other requests, both during the request's lifetime and after the request has completed (assuming it acquired locks), but within the surrounding transaction's lifetime.
@@ -263,7 +263,7 @@ To make this simpler to understand, we'll call the first transaction `TxnA` and
CockroachDB proceeds through the following steps:
-1. If the transaction has an explicit priority set (i.e., `HIGH` or `LOW`), the transaction with the lower priority is aborted (in the write/write case) or has its timestamp pushed (in the write/read case).
+1. If the transaction has an explicit priority set (i.e., `HIGH` or `LOW`), the transaction with the lower priority is aborted (in the write/write case) or has its timestamp [pushed](#timestamp-cache) (in the write/read case).
1. If the encountered transaction is expired, it's `ABORTED` and conflict resolution succeeds. We consider a write intent expired if:
- It doesn't have a transaction record and its timestamp is outside of the transaction liveness threshold.
@@ -297,8 +297,9 @@ If there is a deadlock between transactions (i.e., they're each blocked by each
### Read refreshing
-Whenever a transaction's timestamp has been pushed, additional checks are required before allowing it to commit at the pushed timestamp: any values which the transaction previously read must be checked to verify that no writes have subsequently occurred between the original transaction timestamp and the pushed transaction timestamp. This check prevents serializability violation. The check is done by keeping track of all the reads using a dedicated `RefreshRequest`. If this succeeds, the transaction is allowed to commit (transactions perform this check at commit time if they've been pushed by a different transaction or by the [timestamp cache](#timestamp-cache), or they perform the check whenever they encounter a [`ReadWithinUncertaintyIntervalError`](../transaction-retry-error-reference.html#readwithinuncertaintyinterval) immediately, before continuing).
-If the refreshing is unsuccessful, then the transaction must be retried at the pushed timestamp.
+Whenever a transaction's timestamp has been pushed, additional checks are required before allowing it to commit at the pushed timestamp: any values which the transaction previously read must be checked to verify that no writes have subsequently occurred between the original transaction timestamp and the pushed transaction timestamp. This check prevents serializability violation.
+
+The check is done by keeping track of all the reads using a dedicated `RefreshRequest`. If this succeeds, the transaction is allowed to commit (transactions perform this check at commit time if they've been pushed by a different transaction or by the [timestamp cache](#timestamp-cache), or they perform the check whenever they encounter a [`ReadWithinUncertaintyIntervalError`](../transaction-retry-error-reference.html#readwithinuncertaintyintervalerror) immediately, before continuing). If the refreshing is unsuccessful (also known as *read invalidation*), then the transaction must be retried at the pushed timestamp.
### Transaction pipelining
@@ -398,7 +399,7 @@ Additionally, when other transactions encounter a transaction in `STAGING` state
## Non-blocking transactions
- CockroachDB supports low-latency, global reads of read-mostly data in [multi-region clusters](../multiregion-overview.html) using _non-blocking transactions_: an extension of the [standard read-write transaction protocol](#overview) that allows a writing transaction to perform [locking](#concurrency-control) in a manner such that contending reads by other transactions can avoid waiting on its locks.
+CockroachDB supports low-latency, global reads of read-mostly data in [multi-region clusters](../multiregion-overview.html) using _non-blocking transactions_: an extension of the [standard read-write transaction protocol](#overview) that allows a writing transaction to perform [locking](#concurrency-control) in a manner such that contending reads by other transactions can avoid waiting on its locks.
The non-blocking transaction protocol and replication scheme differ from standard read-write transactions as follows:
diff --git a/v23.1/backup-and-restore-monitoring.md b/v23.1/backup-and-restore-monitoring.md
index 2cd15f4ceb2..7cb18a26bc5 100644
--- a/v23.1/backup-and-restore-monitoring.md
+++ b/v23.1/backup-and-restore-monitoring.md
@@ -32,29 +32,37 @@ See the [Monitor CockroachDB with Prometheus](monitor-cockroachdb-with-prometheu
We recommend the following guidelines:
-- Use the `schedules_backup_last_completed_time` metric to monitor the specific backup job or jobs you would use to recover from a disaster.
-- Configure alerting on the `schedules_backup_last_completed_time` metric to watch for cases where the timestamp has not moved forward as expected.
+- Use the `schedules.BACKUP.last_completed_time` metric to monitor the specific backup job or jobs you would use to recover from a disaster.
+- Configure alerting on the `schedules.BACKUP.last_completed_time` metric to watch for cases where the timestamp has not moved forward as expected.
Metric | Description
-------+-------------
-`schedules_backup_succeeded` | The number of scheduled backup jobs that have succeeded.
-`schedules_backup_started` | The number of scheduled backup jobs that have started.
-`schedules_backup_last_completed_time` | The Unix timestamp of the most recently completed scheduled backup specified as maintaining this metric. **Note:** This metric only updates if the schedule was created with the [`updates_cluster_last_backup_time_metric` option](create-schedule-for-backup.html#schedule-options).
-`schedules_backup_failed` | The number of scheduled backup jobs that have failed. **Note:** A stuck scheduled job will not increment this metric.
-`schedules_round_reschedule_wait` | The number of schedules that were rescheduled due to a currently running job. A value greater than 0 indicates that a previous backup was still running when a new scheduled backup was supposed to start. This corresponds to the [`on_previous_running=wait`](create-schedule-for-backup.html#on-previous-running-option) schedule option.
-`schedules_round_reschedule_skip` | The number of schedules that were skipped due to a currently running job. A value greater than 0 indicates that a previous backup was still running when a new scheduled backup was supposed to start. This corresponds to the [`on_previous_running=skip`](create-schedule-for-backup.html#on-previous-running-option) schedule option.
-`jobs_backup_currently_running` | The number of backup jobs currently running in `Resume` or `OnFailOrCancel` state.
-`jobs_backup_fail_or_cancel_retry_error` | The number of backup jobs that failed with a retryable error on their failure or cancelation process.
-`jobs_backup_fail_or_cancel_completed` | The number of backup jobs that successfully completed their failure or cancelation process.
-`jobs_backup_fail_or_cancel_failed` | The number of backup jobs that failed with a non-retryable error on their failure or cancelation process.
-`jobs_backup_resume_failed` | The number of backup jobs that failed with a non-retryable error.
-`jobs_backup_resume_retry_error` | The number of backup jobs that failed with a retryable error.
-`jobs_restore_resume_retry_error` | The number of restore jobs that failed with a retryable error.
-`jobs_restore_resume_completed` | The number of restore jobs that successfully resumed to completion.
-`jobs_restore_resume_failed` | The number of restore jobs that failed with a non-retryable error.
-`jobs_restore_fail_or_cancel_failed` | The number of restore jobs that failed with a non-retriable error on their failure or cancelation process.
-`jobs_restore_fail_or_cancel_retry_error` | The number of restore jobs that failed with a retryable error on their failure or cancelation process.
-`jobs_restore_currently_running` | The number of restore jobs currently running in `Resume` or `OnFailOrCancel` state.
+`schedules.BACKUP.failed` | The number of scheduled backup jobs that have failed. **Note:** A stuck scheduled job will not increment this metric.
+`schedules.BACKUP.last_completed_time` | The Unix timestamp of the most recently completed scheduled backup specified as maintaining this metric. **Note:** This metric only updates if the schedule was created with the [`updates_cluster_last_backup_time_metric` option](create-schedule-for-backup.html#schedule-options).
+New in v23.1: `schedules.BACKUP.protected_age_sec` | The age of the oldest [protected timestamp record](create-schedule-for-backup.html#protected-timestamps-and-scheduled-backups) protected by backup schedules.
+New in v23.1: `schedules.BACKUP.protected_record_count` | The number of [protected timestamp records](create-schedule-for-backup.html#protected-timestamps-and-scheduled-backups) held by backup schedules.
+`schedules.BACKUP.started` | The number of scheduled backup jobs that have started.
+`schedules.BACKUP.succeeded` | The number of scheduled backup jobs that have succeeded.
+`schedules.round.reschedule_skip` | The number of schedules that were skipped due to a currently running job. A value greater than 0 indicates that a previous backup was still running when a new scheduled backup was supposed to start. This corresponds to the [`on_previous_running=skip`](create-schedule-for-backup.html#on-previous-running-option) schedule option.
+`schedules.round.reschedule_wait` | The number of schedules that were rescheduled due to a currently running job. A value greater than 0 indicates that a previous backup was still running when a new scheduled backup was supposed to start. This corresponds to the [`on_previous_running=wait`](create-schedule-for-backup.html#on-previous-running-option) schedule option.
+New in v23.1: `jobs.backup.currently_paused` | The number of backup jobs currently considered [paused](pause-job.html).
+`jobs.backup.currently_running` | The number of backup jobs currently running in `Resume` or `OnFailOrCancel` state.
+`jobs.backup.fail_or_cancel_retry_error` | The number of backup jobs that failed with a retryable error on their failure or cancelation process.
+`jobs.backup.fail_or_cancel_completed` | The number of backup jobs that successfully completed their failure or cancelation process.
+`jobs.backup.fail_or_cancel_failed` | The number of backup jobs that failed with a non-retryable error on their failure or cancelation process.
+New in v23.1: `jobs.backup.protected_age_sec` | The age of the oldest [protected timestamp record](create-schedule-for-backup.html#protected-timestamps-and-scheduled-backups) protected by backup jobs.
+New in v23.1: `jobs.backup.protected_record_count` | The number of [protected timestamp records](create-schedule-for-backup.html#protected-timestamps-and-scheduled-backups) held by backup jobs.
+`jobs.backup.resume_failed` | The number of backup jobs that failed with a non-retryable error.
+`jobs.backup.resume_retry_error` | The number of backup jobs that failed with a retryable error.
+New in v23.1: `jobs.restore.currently_paused` | The number of restore jobs currently considered [paused](pause-job.html).
+`jobs.restore.currently_running` | The number of restore jobs currently running in `Resume` or `OnFailOrCancel` state.
+`jobs.restore.fail_or_cancel_failed` | The number of restore jobs that failed with a non-retryable error on their failure or cancelation process.
+`jobs.restore.fail_or_cancel_retry_error` | The number of restore jobs that failed with a retryable error on their failure or cancelation process.
+New in v23.1: `jobs.restore.protected_age_sec` | The age of the oldest [protected timestamp record](architecture/storage-layer.html#protected-timestamps) protected by restore jobs.
+New in v23.1: `jobs.restore.protected_record_count` | The number of [protected timestamp records](architecture/storage-layer.html#protected-timestamps) held by restore jobs.
+`jobs.restore.resume_completed` | The number of restore jobs that successfully resumed to completion.
+`jobs.restore.resume_failed` | The number of restore jobs that failed with a non-retryable error.
+`jobs.restore.resume_retry_error` | The number of restore jobs that failed with a retryable error.
## Datadog integration
@@ -67,10 +75,10 @@ To use the Datadog integration with your **{{ site.data.products.dedicated }}**
Metric | Description
-------+-------------
-`schedules_backup_succeeded` | The number of scheduled backup jobs that have succeeded.
-`schedules_backup_started` | The number of scheduled backup jobs that have started.
-`schedules_backup_last_completed_time` | The Unix timestamp of the most recently completed backup by a schedule specified as maintaining this metric.
-`schedules_backup_failed` | The number of scheduled backup jobs that have failed.
+`schedules.BACKUP.succeeded` | The number of scheduled backup jobs that have succeeded.
+`schedules.BACKUP.started` | The number of scheduled backup jobs that have started.
+`schedules.BACKUP.last_completed_time` | The Unix timestamp of the most recently completed backup by a schedule specified as maintaining this metric.
+`schedules.BACKUP.failed` | The number of scheduled backup jobs that have failed.
## See also
diff --git a/v23.1/build-a-go-app-with-cockroachdb-upperdb.md b/v23.1/build-a-go-app-with-cockroachdb-upperdb.md
index da67a01d9b5..8dc1272ee01 100644
--- a/v23.1/build-a-go-app-with-cockroachdb-upperdb.md
+++ b/v23.1/build-a-go-app-with-cockroachdb-upperdb.md
@@ -48,7 +48,7 @@ The sample code shown below uses upper/db to map Go-specific objects to SQL oper
{% include {{ page.version.version }}/app/upperdb-basic-sample/main.go %}
~~~
-Note that the sample code also includes a function that simulates a transaction error (`crdbForceRetry()`). Upper/db's CockroachDB adapter [automatically retries transactions](transactions.html#client-side-intervention) when transaction errors are thrown. As a result, this function forces a transaction retry.
+Note that the sample code also includes a function that simulates a transaction error (`crdbForceRetry()`). Upper/db's CockroachDB adapter [automatically retries transactions](transaction-retry-error-reference.html#client-side-retry-handling) when transaction errors are thrown. As a result, this function forces a transaction retry.
To run the code, copy the sample above, or download it directly.
@@ -85,7 +85,7 @@ The sample code shown below uses upper/db to map Go-specific objects to SQL oper
{% include {{ page.version.version }}/app/insecure/upperdb-basic-sample/main.go %}
~~~
-Note that the sample code also includes a function that simulates a transaction error (`crdbForceRetry()`). Upper/db's CockroachDB adapter [automatically retries transactions](transactions.html#client-side-intervention) when transaction errors are thrown. As a result, this function forces a transaction retry.
+Note that the sample code also includes a function that simulates a transaction error (`crdbForceRetry()`). Upper/db's CockroachDB adapter [automatically retries transactions](transaction-retry-error-reference.html#client-side-retry-handling) when transaction errors are thrown. As a result, this function forces a transaction retry.
Copy the code or download it directly.
diff --git a/v23.1/bulk-delete-data.md b/v23.1/bulk-delete-data.md
index 815407e9f6e..e6260660fbd 100644
--- a/v23.1/bulk-delete-data.md
+++ b/v23.1/bulk-delete-data.md
@@ -84,7 +84,7 @@ If you cannot index the column that identifies the unwanted rows, we recommend d
1. Execute a [`SELECT` query](selection-queries.html) that returns the primary key values for the rows that you want to delete. When writing the `SELECT` query:
- Use a `WHERE` clause that filters on the column identifying the rows.
- - Add an [`AS OF SYSTEM TIME` clause](as-of-system-time.html) to the end of the selection subquery, or run the selection query in a separate, read-only transaction with [`SET TRANSACTION AS OF SYSTEM TIME`](as-of-system-time.html#use-as-of-system-time-in-transactions). This helps to reduce [transaction contention](transactions.html#transaction-contention).
+ - Add an [`AS OF SYSTEM TIME` clause](as-of-system-time.html) to the end of the selection subquery, or run the selection query in a separate, read-only transaction with [`SET TRANSACTION AS OF SYSTEM TIME`](as-of-system-time.html#use-as-of-system-time-in-transactions). This helps to reduce [transaction contention](performance-best-practices-overview.html#transaction-contention).
- Use a [`LIMIT`](limit-offset.html) clause to limit the number of rows queried to a subset of the rows that you want to delete. To determine the optimal `SELECT` batch size, try out different sizes (10,000 rows, 100,000 rows, 1,000,000 rows, etc.), and monitor the change in performance. Note that this `SELECT` batch size can be much larger than the batch size of rows that are deleted in the subsequent `DELETE` query.
- To ensure that rows are efficiently scanned in the subsequent `DELETE` query, include an [`ORDER BY`](order-by.html) clause on the primary key.
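
A sketch of such a selection query, assuming a hypothetical `events` table whose un-indexed `status` column identifies the unwanted rows:

~~~ sql
SELECT id
FROM events AS OF SYSTEM TIME '-5s'  -- historical read to reduce contention
WHERE status = 'expired'             -- column identifying the rows to delete
ORDER BY id                          -- primary key order for efficient scans later
LIMIT 100000;                        -- SELECT batch size
~~~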
diff --git a/v23.1/bulk-update-data.md b/v23.1/bulk-update-data.md
index 7e015b8510f..74abff27933 100644
--- a/v23.1/bulk-update-data.md
+++ b/v23.1/bulk-update-data.md
@@ -34,7 +34,7 @@ Before reading this page, do the following:
- Use a `WHERE` clause to filter on columns that identify the rows that you want to update. This clause should also filter out the rows that have been updated by previous iterations of the nested `UPDATE` loop:
- For optimal performance, the first condition of the filter should evaluate the last primary key value returned by the last `UPDATE` query that was executed. This narrows each `SELECT` query's scan to the fewest rows possible, and preserves the performance of the row updates over time.
- Another condition of the filter should evaluate column values persisted to the database that signal whether or not a row has been updated. This prevents rows from being updated more than once, in the event that the application or script crashes and needs to be restarted. If there is no way to distinguish between an updated row and a row that has not yet been updated, you might need to [add a new column to the table](alter-table.html#add-column) (e.g., `ALTER TABLE ... ADD COLUMN updated BOOL;`).
- - Add an [`AS OF SYSTEM TIME` clause](as-of-system-time.html) to the end of the selection subquery, or run the selection query in a separate, read-only transaction with [`SET TRANSACTION AS OF SYSTEM TIME`](as-of-system-time.html#use-as-of-system-time-in-transactions). This helps to reduce [transaction contention](transactions.html#transaction-contention).
+ - Add an [`AS OF SYSTEM TIME` clause](as-of-system-time.html) to the end of the selection subquery, or run the selection query in a separate, read-only transaction with [`SET TRANSACTION AS OF SYSTEM TIME`](as-of-system-time.html#use-as-of-system-time-in-transactions). This helps to reduce [transaction contention](performance-best-practices-overview.html#transaction-contention).
- Use a [`LIMIT`](limit-offset.html) clause to limit the number of rows queried to a subset of the rows that you want to update. To determine the optimal `SELECT` batch size, try out different sizes (10,000 rows, 20,000 rows, etc.), and monitor the change in performance. Note that this `SELECT` batch size can be much larger than the batch size of rows that are updated in the subsequent `UPDATE` query.
- To ensure that rows are efficiently scanned in the subsequent `UPDATE` query, include an [`ORDER BY`](order-by.html) clause on the primary key.
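
A sketch of the corresponding selection query for updates, assuming a hypothetical `accounts` table with a persisted `updated` flag, where `$1` is bound to the last primary key value returned by the previous `UPDATE`:

~~~ sql
SELECT id
FROM accounts AS OF SYSTEM TIME '-5s'
WHERE id > $1           -- continue after the last row updated in the previous iteration
  AND updated = false   -- skip rows already updated, in case of a restart
ORDER BY id
LIMIT 10000;
~~~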
diff --git a/v23.1/cdc-queries.md b/v23.1/cdc-queries.md
index de251a2c02a..e636b4df3ee 100644
--- a/v23.1/cdc-queries.md
+++ b/v23.1/cdc-queries.md
@@ -167,6 +167,10 @@ CREATE CHANGEFEED INTO sink AS SELECT * FROM table WHERE crdb_region = 'europe-w
For more detail on targeting `REGIONAL BY ROW` tables with changefeeds, see [Changefeeds in Multi-Region Deployments](changefeeds-in-multi-region-deployments.html).
+{{site.data.alerts.callout_success}}
+If you are running changefeeds from a [multi-region](multiregion-overview.html) cluster, you may want to define which nodes take part in running the changefeed job. You can use the [`execution_locality` option](changefeeds-in-multi-region-deployments.html#run-a-changefeed-job-by-locality) with key-value pairs to specify the [locality designations](cockroach-start.html#locality) nodes must meet.
+{{site.data.alerts.end}}
+
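
As a sketch of the option mentioned in the callout above (the exact syntax here is an assumption; refer to the linked page for the authoritative form), a changefeed could be restricted to nodes in one region like this:

~~~ sql
CREATE CHANGEFEED FOR TABLE movr.rides
  INTO 'kafka://broker.example.com:9092'
  WITH execution_locality = 'region=us-east1';
~~~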
### Stabilize the changefeed message schema
As changefeed messages emit from the database, message formats can vary as tables experience [schema changes](changefeed-messages.html#schema-changes). You can select columns with [typecasting](data-types.html#data-type-conversions-and-casts) to prevent message fields from changing during a changefeed's lifecycle:
diff --git a/v23.1/change-data-capture-overview.md b/v23.1/change-data-capture-overview.md
index 7865bb790e2..cfa0068af78 100644
--- a/v23.1/change-data-capture-overview.md
+++ b/v23.1/change-data-capture-overview.md
@@ -18,8 +18,8 @@ The main feature of CDC is the changefeed, which targets an allowlist of tables,
--------------------------------------------------|-----------------------------------------------------------------|
| Useful for prototyping or quick testing. | Recommended for production use. |
| Available in all products. | Available in {{ site.data.products.dedicated }} or with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html) in {{ site.data.products.core }} or {{ site.data.products.serverless }}. |
-| Streams indefinitely to the SQL client until underlying SQL connection is closed. | Maintains connection to configured sink ([Kafka](changefeed-sinks.html#kafka), [Google Cloud Pub/Sub](changefeed-sinks.html#google-cloud-pub-sub), [Amazon S3](changefeed-sinks.html#amazon-s3), [Google Cloud Storage](changefeed-sinks.html#google-cloud-storage), [Azure Storage](changefeed-sinks.html#azure-blob-storage), [HTTP](changefeed-sinks.html#http), [Webhook](changefeed-sinks.html#webhook-sink)). |
-| Create with [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html). | Create with [`CREATE CHANGEFEED`](create-changefeed.html).JSONB
column, see the JSON tutorial.{{site.data.alerts.end}}
@@ -60,7 +61,7 @@ This lets you search based on subcomponents.
### Creation
-You can use GIN indexes to improve the performance of queries using `JSONB` or `ARRAY` columns. You can create them:
+You can use GIN indexes to improve the performance of queries using [`JSONB`](jsonb.html), [`ARRAY`](array.html), [`TSVECTOR`](tsvector.html) (for [full-text searches](full-text-search.html)), or [`STRING`](string.html) (for [fuzzy searches using trigrams](trigram-indexes.html)) columns. You can create them:
- Using the PostgreSQL-compatible syntax [`CREATE INDEX ... USING GIN`](create-index.html):
@@ -68,18 +69,28 @@ You can use GIN indexes to improve the performance of queries using `JSONB` or `
CREATE INDEX {optional name} ON {table} USING GIN ({column});
~~~
- You can also specify the `jsonb_ops` or `array_ops` opclass (for `JSONB` and `ARRAY` columns, respectively) using the syntax:
+ Also specify an opclass when [creating a trigram index](trigram-indexes.html#creation):
~~~ sql
CREATE INDEX {optional name} ON {table} USING GIN ({column} {opclass});
~~~
+ {{site.data.alerts.callout_success}}
+ You can also use the preceding syntax to specify the `jsonb_ops` or `array_ops` opclass (for `JSONB` and `ARRAY` columns, respectively).
+ {{site.data.alerts.end}}
+
- While creating the table, using the syntax [`CREATE INVERTED INDEX`](create-table.html#create-a-table-with-secondary-and-gin-indexes):
~~~ sql
CREATE INVERTED INDEX {optional name} ON {table} ({column});
~~~
+ Also specify an opclass when [creating a trigram index](trigram-indexes.html#creation):
+
+ ~~~ sql
+ CREATE INVERTED INDEX {optional name} ON {table} ({column} {opclass});
+ ~~~
+
### Selection
If a query contains a filter against an indexed `JSONB` or `ARRAY` column that uses any of the [supported operators](#comparisons), the GIN index is added to the set of index candidates.
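
For instance, a containment filter on an indexed `JSONB` column can be served by the GIN index (a sketch, reusing the `users` table and `user_profile` column from the examples below):

~~~ sql
SELECT * FROM users WHERE user_profile @> '{"online": true}';
~~~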
@@ -193,7 +204,7 @@ CREATE TABLE users (
## Examples
-### Create a table with GIN index on a JSONB column
+### Create a table with GIN index on a `JSONB` column
In this example, let's create a table with a `JSONB` column and a GIN index:
@@ -269,7 +280,7 @@ Now, run a query that filters on the `JSONB` column:
(2 rows)
~~~
-### Add a GIN index to a table with an array column
+### Add a GIN index to a table with an `ARRAY` column
In this example, let's create a table with an `ARRAY` column first, and add the GIN index later:
@@ -335,7 +346,7 @@ Now, let’s add a GIN index to the table and run a query that filters on the `A
(2 rows)
~~~
-### Create a table with a partial GIN index on a JSONB column
+### Create a table with a partial GIN index on a `JSONB` column
In the same `users` table from [Create a table with GIN index on a JSONB column](#create-a-table-with-gin-index-on-a-jsonb-column), create a partial GIN index for online users.
@@ -369,10 +380,14 @@ SELECT * FROM users@idx_online_users WHERE user_profile->'online' = 'true' AND u
(1 row)
~~~
-### Create a trigram index on a STRING column
+### Create a trigram index on a `STRING` column
For an example showing how to create a trigram index on a [`STRING`](string.html) column, see [Trigram Indexes](trigram-indexes.html#examples).
+### Create a full-text index on a `TSVECTOR` column
+
+For an example showing how to create a full-text index on a [`TSVECTOR`](tsvector.html) column, see [Full-Text Search](full-text-search.html#examples).
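
A minimal sketch of that pattern, assuming a hypothetical `docs` table whose `TSVECTOR` column is populated by the application and indexed with GIN:

~~~ sql
CREATE TABLE docs (
    id INT PRIMARY KEY,
    body STRING,
    body_tsv TSVECTOR
);
CREATE INDEX docs_body_fts_idx ON docs USING GIN (body_tsv);

INSERT INTO docs
  VALUES (1, 'CockroachDB is a distributed SQL database',
          to_tsvector('english', 'CockroachDB is a distributed SQL database'));

-- Full-text match against the indexed column.
SELECT id FROM docs WHERE body_tsv @@ to_tsquery('english', 'database');
~~~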
+
### Inverted join examples
{% include {{ page.version.version }}/sql/inverted-joins.md %}
diff --git a/v23.1/jsonb.md b/v23.1/jsonb.md
index 80aeb78ba98..80a38d131d3 100644
--- a/v23.1/jsonb.md
+++ b/v23.1/jsonb.md
@@ -460,7 +460,7 @@ SELECT '100.50'::JSONB::DECIMAL;
(1 row)
~~~
-You use the [`parse_timestamp` function](functions-and-operators.html) to parse strings in `TIMESTAMP` format.
+You can use the [`parse_timestamp` function](functions-and-operators.html) to parse strings in `TIMESTAMP` format.
{% include_cached copy-clipboard.html %}
~~~ sql
diff --git a/v23.1/logging-use-cases.md b/v23.1/logging-use-cases.md
index 4272c75969f..d5974b8eb77 100644
--- a/v23.1/logging-use-cases.md
+++ b/v23.1/logging-use-cases.md
@@ -394,7 +394,7 @@ I210323 20:02:12.095253 59168 10@util/log/event_log.go:32 ⋮ [n1,client=‹[::1
- Preceding the `=` character is the `crdb-v2` event metadata. See the [reference documentation](log-formats.html#format-crdb-v2) for details on the fields.
- `ApplicationName` shows that the events originated from an application named `bank`. You can use this field to filter the logging output by application.
-- `ErrorText` shows that this query encountered a type of [transaction retry error](transaction-retry-error-reference.html#retry_write_too_old). For details on transaction retry errors and how to resolve them, see the [Transaction retry error reference](transaction-retry-error-reference.html).
+- `ErrorText` shows that this query encountered a [type of transaction retry error](transaction-retry-error-reference.html#retry_write_too_old). For details on transaction retry errors and how to resolve them, see the [Transaction Retry Error Reference](transaction-retry-error-reference.html#actions-to-take).
- `NumRetries` shows that the transaction was retried once before succeeding.
{{site.data.alerts.callout_info}}
diff --git a/v23.1/manage-a-backup-schedule.md b/v23.1/manage-a-backup-schedule.md
index e69197d0ca6..ecd59a16796 100644
--- a/v23.1/manage-a-backup-schedule.md
+++ b/v23.1/manage-a-backup-schedule.md
@@ -40,17 +40,19 @@ Further guidance on connecting to Amazon S3, Google Cloud Storage, Azure Storage
## Set up monitoring for the backup schedule
-We recommend that you [monitor your backup schedule with Prometheus](monitoring-and-alerting.html#prometheus-endpoint), and alert when there are anomalies such as backups that have failed or no backups succeeding over a certain amount of time— at which point, you can inspect schedules by running [`SHOW SCHEDULES`](show-schedules.html).
+We recommend that you [monitor your backup schedule with Prometheus](monitoring-and-alerting.html#prometheus-endpoint), and alert when there are anomalies such as backups that have failed or no backups succeeding over a certain amount of time—at which point, you can inspect schedules by running [`SHOW SCHEDULES`](show-schedules.html).
Metrics for scheduled backups fall into two categories:
- Backup schedule-specific metrics, aggregated across all schedules:
- - `schedules_BACKUP_started`: A counter for the total number of backups started by a schedule
- - `schedules_BACKUP_succeeded`: A counter for the number of backups started by a schedule that succeeded
- - `schedules_BACKUP_failed`: A counter for the number of backups started by a schedule that failed
+ - `schedules.BACKUP.started`: The total number of backups started by a schedule.
+ - `schedules.BACKUP.succeeded`: The number of backups started by a schedule that succeeded.
+ - `schedules.BACKUP.failed`: The number of backups started by a schedule that failed.
- When `schedules_BACKUP_failed` increments, run [`SHOW SCHEDULES`](show-schedules.html) to check which schedule is affected and to inspect the error in the `status` column.
+ When `schedules.BACKUP.failed` increments, run [`SHOW SCHEDULES`](show-schedules.html) to check which schedule is affected and to inspect the error in the `status` column.
+ - {% include_cached new-in.html version="v23.1" %} `schedules.BACKUP.protected_age_sec`: The age of the oldest [protected timestamp](architecture/storage-layer.html#protected-timestamps) record protected by backup schedules.
+ - {% include_cached new-in.html version="v23.1" %} `schedules.BACKUP.protected_record_count`: The number of [protected timestamp](architecture/storage-layer.html#protected-timestamps) records held by backup schedules.
- Scheduler-specific metrics:
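The following is the sketch referenced above for investigating a failing backup schedule; it uses only `SHOW SCHEDULES` and a placeholder schedule ID:

~~~ sql
-- List backup schedules; identify the schedule whose runs are failing
-- and inspect the error it reports.
SHOW SCHEDULES FOR BACKUP;

-- Optionally pause the affected schedule while you investigate.
-- 123456789 is a placeholder; use the schedule ID from the output above.
PAUSE SCHEDULE 123456789;
~~~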
diff --git a/v23.1/metrics.md b/v23.1/metrics.md
index a0763850111..cf7f85eb6de 100644
--- a/v23.1/metrics.md
+++ b/v23.1/metrics.md
@@ -5,7 +5,7 @@ toc: false
docs_area: reference.metrics
---
-As part of normal operation, CockroachDB continuously records metrics that track performance, latency, usage, and many other runtime indicators. These metrics are often useful in diagnosing problems, troubleshooting performance, or planning cluster infrastructure modifications. This page documents locations where metrics are exposed for analysis, and includes the full list of available metrics in CockroachDB.
+As part of normal operation, CockroachDB continuously records metrics that track performance, latency, usage, and many other runtime indicators. These metrics are often useful in diagnosing problems, troubleshooting performance, or planning cluster infrastructure modifications. This page documents locations where metrics are exposed for analysis.
## Available metrics
diff --git a/v23.1/monitor-and-debug-changefeeds.md b/v23.1/monitor-and-debug-changefeeds.md
index 5bd4de1facf..4a15388054c 100644
--- a/v23.1/monitor-and-debug-changefeeds.md
+++ b/v23.1/monitor-and-debug-changefeeds.md
@@ -20,7 +20,7 @@ The following define the categories of non-retryable errors:
- The changefeed cannot convert the data to the specified [output format](changefeed-messages.html). For example, there are [Avro](changefeed-messages.html#avro) types that changefeeds do not support, or a [CDC query](cdc-queries.html) is using an unsupported or malformed expression.
- The terminal error happens as part of established changefeed behavior. For example, you have specified the [`schema_change_policy=stop` option](create-changefeed.html#schema-policy) and a schema change happens.
-We recommend monitoring changefeeds with [Prometheus](monitoring-and-alerting.html#prometheus-endpoint) to avoid accumulation of garbage after a changefeed encounters an error. See [Garbage collection and changefeeds](changefeed-messages.html#garbage-collection-and-changefeeds) for more detail on how changefeeds interact with protected timestamps and garbage collection. In addition, see the [Recommended changefeed metrics to track](#recommended-changefeed-metrics-to-track) section for the essential metrics to track on a changefeed.
+We recommend monitoring changefeeds with [Prometheus](monitoring-and-alerting.html#prometheus-endpoint) to avoid accumulation of garbage after a changefeed encounters an error. See [Garbage collection and changefeeds](changefeed-messages.html#garbage-collection-and-changefeeds) for more detail on how changefeeds interact with [protected timestamps](architecture/storage-layer.html#protected-timestamps) and garbage collection. In addition, see the [Recommended changefeed metrics to track](#recommended-changefeed-metrics-to-track) section for the essential metrics to track on a changefeed.
## Monitor a changefeed
@@ -59,6 +59,17 @@ By default, changefeeds will retry errors with [some exceptions](#changefeed-ret
- `changefeed.error_retries`: The total number of retryable errors encountered by all changefeeds.
- `changefeed.failures`: The total number of changefeed jobs that have failed.
+#### Protected timestamp and garbage collection monitoring
+
+[Protected timestamps](architecture/storage-layer.html#protected-timestamps) protect changefeed data from garbage collection in particular scenarios, but if a changefeed lags too far behind, the accumulation of protected data can increase storage usage on the cluster. See [Garbage collection and changefeeds](changefeed-messages.html#garbage-collection-and-changefeeds) for detail on when changefeed data is protected from garbage collection.
+
+{% include_cached new-in.html version="v23.1" %} You can monitor changefeed jobs for [protected timestamp](architecture/storage-layer.html#protected-timestamps) usage. We recommend setting up monitoring for the following metrics:
+
+- `jobs.changefeed.protected_age_sec`: Tracks the age of the oldest [protected timestamp](architecture/storage-layer.html#protected-timestamps) record protected by changefeed jobs. We recommend alerting if `protected_age_sec` is greater than [`gc.ttlseconds`](configure-replication-zones.html#gc-ttlseconds). As `protected_age_sec` increases, garbage accumulation increases. Garbage collection will not progress on a table, database, or cluster while a protected timestamp record is present.
+- `jobs.changefeed.currently_paused`: Tracks the number of changefeed jobs currently considered [paused](pause-job.html). Since paused changefeed jobs can accumulate garbage, it is important to monitor the number of paused changefeeds.
+- `jobs.changefeed.expired_pts_records`: Tracks the number of expired [protected timestamp](architecture/storage-layer.html#protected-timestamps) records owned by changefeed jobs. You can monitor this metric in conjunction with the [`gc_protect_expires_after` option](create-changefeed.html#gc-protect-expire).
+- `jobs.changefeed.protected_record_count`: Tracks the number of [protected timestamp](architecture/storage-layer.html#protected-timestamps) records held by changefeed jobs.
+
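As a companion to these metrics, the following sketch shows one way to check the related state from SQL; the `movr` database name is a placeholder for your own database:

~~~ sql
-- Compare the zone's GC TTL (gc.ttlseconds) with the cluster's
-- jobs.changefeed.protected_age_sec metric.
SHOW ZONE CONFIGURATION FROM DATABASE movr;

-- Find paused changefeeds, which can hold protected timestamp records
-- and allow garbage to accumulate.
SELECT job_id, status FROM [SHOW CHANGEFEED JOBS] WHERE status = 'paused';
~~~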
### Using changefeed metrics labels
{{site.data.alerts.callout_info}}
@@ -160,7 +171,7 @@ I190312 18:56:53.537686 585 vendor/github.com/Shopify/sarama/client.go:170 [kaf
{% include_cached copy-clipboard.html %}
~~~ sql
-> SHOW CHANGEFEED JOBS;
+SHOW CHANGEFEED JOBS;
~~~
~~~
diff --git a/v23.1/node-shutdown.md b/v23.1/node-shutdown.md
index 3959aba829d..e1188d541e5 100644
--- a/v23.1/node-shutdown.md
+++ b/v23.1/node-shutdown.md
@@ -345,6 +345,8 @@ If the rebalancing stalls during decommissioning, replicas that have yet to move
Do **not** terminate the node process, delete the storage volume, or remove the VM before a `decommissioning` node has [changed its membership status](#status-change) to `decommissioned`. Prematurely terminating the process will prevent the node from rebalancing all of its range replicas onto other nodes gracefully, cause transient query errors in client applications, and leave the remaining ranges under-replicated and vulnerable to loss of [quorum](architecture/replication-layer.html#overview) if another node goes down.
{{site.data.alerts.end}}
+{% include {{page.version.version}}/prod-deployment/decommission-pre-flight-checks.md %}
+
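For example, one way to confirm from SQL that a node has reached the `decommissioned` membership status before terminating its process is to query the internal liveness view. This is a minimal sketch that relies on the `crdb_internal.gossip_liveness` catalog, which is intended as a debugging aid rather than for production automation:

~~~ sql
-- Check each node's membership status; wait until the decommissioning
-- node reports 'decommissioned' before terminating its process.
SELECT node_id, membership, draining
FROM crdb_internal.gossip_liveness
ORDER BY node_id;
~~~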
### Terminate the node process
@@ -577,6 +579,15 @@ You can use [`cockroach node drain`](cockroach-node.html) to drain a node separa
- Transaction retry errors with `SQLSTATE: 40001` and the retry reasons `RETRY_WRITE_TOO_OLD` or `RETRY_SERIALIZABLE`.
- Transactions in the `Waiting` status.
- Errors with `SQLSTATE: 40001` and a transaction retry error message.
- The `crdb_internal.transaction_contention_events` table indicates that your transactions have experienced contention.
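For instance, a minimal sketch for inspecting recent contention from SQL, using the internal `crdb_internal.transaction_contention_events` table (a debugging aid only):

~~~ sql
-- Show recent contention events, longest waits first.
SELECT collection_ts, blocking_txn_id, waiting_txn_id, contention_duration
FROM crdb_internal.transaction_contention_events
ORDER BY contention_duration DESC
LIMIT 10;
~~~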