[BACKPORT pg15-cherrypicks] all: Bulk port from master - 86
Summary:
 0c5102e [doc][yba] xCluster Replication update (#23417)
 Excluded: a9466df [#22325] YSQL, QueryDiagnostics: Adding a catalog view for queryDiagnostics
 b9597b3 [#23612] YSQL: Fix java unit test misuse of == for string comparison
 bb72624 [#23613] DocDB: Framework for different vector index coordinate types, SIFT 1B hnsw_tool support
 12032f3 [PLAT-12510] Add option to use UTC for cron expression backup schedule time calculation
 Excluded: 141703a [#22533] YSQL: fix setrefs for index scan
 1bb8c62 [#23543] docdb: Update tablegroup manager in RepartitionTable
 1e28b8a [#23518] Do not include full snapshot info for list snapshot schedules RPC.
 e98c383 [PLAT-15048] Fix auto-master failover local test
 f606132 [doc][yba] Backup clarification (#23611)
 e80d60f [PLAT-14973] Precheck for node agent install to verify that we have correct permissions to execute in the installer directory
 5230f5a [#23630] yugabyted: Modifying the APIs required for the new Migrate Schema Page.
 0a310d3 [PLAT-15042] Add default pitr retention period
 aa15c81 [PLAT-12435] Adding a precheck for libselinux bindings for system python3
 525672e [#23632] DocDB: Unify GetFlagInfos and remove duplicate code
 4ab5ca0 [#23601] YSQL: Fix TestPreparedStatements tests with connection manager enabled
 57a7690 [PLAT-12222][PLAT-15036][PLAT-14333] Add connection pooling support for create universe API
 3407682 [PLAT-10119]: Do not allow back-tick for DB password in YBA

Test Plan: Jenkins: rebase: pg15-cherrypicks

Reviewers: jason, tfoucher

Differential Revision: https://phorge.dev.yugabyte.com/D37578
yugabyte-ci authored and jaki committed Aug 28, 2024
1 parent b55716f commit 356538f
Showing 179 changed files with 4,151 additions and 1,552 deletions.
@@ -244,7 +244,7 @@ To do a PITR on a database:
![Primary unreachable](/images/yp/create-deployments/xcluster/deploy-xcluster-tran-unreachable.png)
-For more information on managing replication in YugabyteDB Anywhere, refer to [View, manage, and monitor replication](../../../../yugabyte-platform/create-deployments/async-replication-platform/#view-manage-and-monitor-replication).
+For more information on managing replication in YugabyteDB Anywhere, refer to [xCluster replication](../../../../yugabyte-platform/manage-deployments/xcluster-replication).
1. Resume the application traffic on the new Primary (B).
@@ -188,7 +188,7 @@ To set up unidirectional transactional replication using YugabyteDB Anywhere, do
This is because setting up replication requires backing up the Primary database and restoring the backup to the Standby database after cleaning up any pre-existing data on the Standby. Close any connections to the Standby database and retry the replication setup operation.
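
For example, you can close the open Standby connections from any node before retrying (a sketch; the node address and database name are placeholders):

```sh
# Terminate every other session connected to the Standby database so that the
# pre-replication restore can proceed.
ysqlsh -h <standby-node-address> -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '<standby-database>' AND pid <> pg_backend_pid();"
```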
-For more information on setting up replication in YugabyteDB Anywhere, refer to [Set up replication](../../../../yugabyte-platform/create-deployments/async-replication-platform/#set-up-replication).
+For more information on setting up replication in YugabyteDB Anywhere, refer to [xCluster replication](../../../../yugabyte-platform/manage-deployments/xcluster-replication/).
**Adding a database to an existing replication**
@@ -32,7 +32,7 @@ This ensures that the reads and writes are in the same region, with the expected

You can eliminate the possibility of data loss by setting up another cluster in a different region, say `us-east`, using [xCluster](../../../explore/going-beyond-sql/asynchronous-replication-ysql/#configure-bidirectional-replication).

-![Active-Active Multi-Master](/images/develop/global-apps/aa-multi-master-setup.png)
+![Active-Active Multi-Master](/images/architecture/replication/active-active-deployment-new.png)

The `us-east` cluster is independent of the primary cluster in `us-west` and the data is populated by **asynchronous replication**. This means that the read and write latencies of each cluster are not affected by the other, but at the same time, the data in each cluster is not immediately consistent with the other. You can use this pattern to reduce latencies for local users.
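
For a sense of what this involves outside of YugabyteDB Anywhere, the following sketch uses yb-admin to create a replication stream in each direction (universe UUIDs, master addresses, and table IDs are placeholders; the exact arguments depend on your deployment):

```sh
# Create a stream that replicates us-west -> us-east: run against the target
# (us-east) masters, passing the source universe UUID, the source master
# addresses, and the IDs of the tables to replicate.
yb-admin -master_addresses us-east-master1:7100,us-east-master2:7100,us-east-master3:7100 \
    setup_universe_replication <us-west-universe-uuid> \
    us-west-master1:7100,us-west-master2:7100,us-west-master3:7100 \
    <table-id-1>,<table-id-2>

# Repeat in the opposite direction to make the deployment bidirectional (multi-master).
yb-admin -master_addresses us-west-master1:7100,us-west-master2:7100,us-west-master3:7100 \
    setup_universe_replication <us-east-universe-uuid> \
    us-east-master1:7100,us-east-master2:7100,us-east-master3:7100 \
    <table-id-1>,<table-id-2>
```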

2 changes: 1 addition & 1 deletion docs/content/preview/releases/yba-releases/v2.18.md
@@ -842,7 +842,7 @@ For instructions on installing YugabyteDB Anywhere, refer to [Install YugabyteDB

### Highlights

-* Improvements to [xCluster replication](../../../yugabyte-platform/create-deployments/async-replication-platform/), including:
+* Improvements to [xCluster replication](../../../yugabyte-platform/manage-deployments/xcluster-replication/), including:
* Support for transactional atomicity (see [Transactional xCluster deployment](../../../deploy/multi-dc/async-replication/async-replication-transactional/))
* Automatic transfer of the source universe certificate when new nodes are added to the target universe
* Ability to delete and pause replication when the source universe goes down
@@ -35,7 +35,7 @@ The universe **Backups** page allows you to create new backups that start immedi

1. Navigate to the universe and select **Backups**, then click **Backup now** to open the dialog shown in the following illustration:

-![Backup](/images/yp/create-backup-new-3.png)
+![Backup](/images/yp/create-backup-ysql-2.20.png)

1. Select the API type for the backup.

@@ -45,9 +45,11 @@ The universe **Backups** page allows you to create new backups that start immedi

1. For YCQL backups, you can choose to back up all tables in the keyspace to which the database belongs or only certain tables. Click **Select a subset of tables** to display the **Select Tables** dialog, where you can select one or more tables to back up. Click **Confirm** when you are done.

-1. Specify the period of time during which the backup is to be retained. Note that there's an option to never delete the backup.
+1. For YSQL backups of universes with geo-partitioning, you can choose to back up the tablespaces. Select the **Backup tablespaces information** option.
+
+   If you don't choose to back up tablespaces, the tablespaces are not preserved and their data is backed up to the primary region.
-1. If you are using YBA version prior to 2.16 to manage universes with YugabyteDB version prior to 2.16, you can optionally specify the number of threads that should be available for the backup process.
+1. Specify the period of time during which the backup is to be retained. Note that there's an option to never delete the backup.

1. Click **Backup**.

@@ -142,7 +144,7 @@ s3://user_bucket

A backup set consists of a successful full backup, and (if incremental backups were taken) one or more consecutive successful incremental backups. The backup set can be used to restore a database at the point in time of the full and/or incremental backup, as long as the chain of good incremental backups is unbroken. Use the creation time to identify increments that occurred after a full backup.

-When YBA writes a backup, the last step after all parallel tasks complete is to write a "success" file to the backup folder. The presence of this file is verification of a good backup. Any full or incremental backup that does not include a success file should not be assumed to be good, and you should use an older backup for restore instead.
+When YBA writes a backup, the last step after all tasks complete is to write a "success" file to the backup folder. The presence of this file is verification of a good backup. Any full or incremental backup that does not include a success file should not be assumed to be good, and you should use an older backup for restore instead.

![Success file metadata](/images/yp/success-file-backup.png)
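
For example, you can spot-check a backup folder for the marker with the AWS CLI (the bucket, folder path, and exact marker filename below are illustrative):

```sh
# List the backup folder recursively and look for the "success" marker
# that YBA writes as the final step of a good backup.
aws s3 ls s3://user_bucket/<backup-folder>/ --recursive | grep -i success
```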

@@ -42,7 +42,7 @@ If there is more than one node, you should consider using a network file system

You can configure Amazon S3 as your backup target, as follows:

-1. Navigate to **Configs** > **Backup** > **Amazon S3**.
+1. Navigate to **Integrations** > **Backup** > **Amazon S3**.

2. Click **Create S3 Backup** to access the configuration form shown in the following illustration:

@@ -84,7 +84,7 @@ The following S3 IAM permissions are required:
You can configure Network File System (NFS) as your backup target, as follows:
-1. Navigate to **Configs > Backup > Network File System**.
+1. Navigate to **Integrations > Backup > Network File System**.
2. Click **Create NFS Backup** to access the configuration form shown in the following illustration:
@@ -100,7 +100,7 @@ You can configure Network File System (NFS) as your backup target, as follows:
You can configure Google Cloud Storage (GCS) as your backup target, as follows:
-1. Navigate to **Configs > Backup > Google Cloud Storage**.
+1. Navigate to **Integrations > Backup > Google Cloud Storage**.
1. Click **Create GCS Backup** to access the configuration form shown in the following illustration:
@@ -181,7 +181,7 @@ You can configure Azure as your backup target, as follows:

1. On your YugabyteDB Anywhere instance, provide the container URL and SAS token for creating a backup, as follows:

-   - Navigate to **Configs** > **Backup** > **Azure Storage**.
+   - Navigate to **Integrations** > **Backup** > **Azure Storage**.
- Click **Create AZ Backup** to access the configuration form shown in the following illustration:

![Azure Configuration](/images/yp/cloud-provider-configuration-backup-azure.png)
@@ -1,6 +1,6 @@
---
title: Configure disaster recovery for a YugabyteDB Anywhere universe
-headerTitle: xCluster Disaster recovery
+headerTitle: xCluster Disaster Recovery
linkTitle: Disaster recovery
description: Enable deployment using transactional (active-standby) replication between universes
headContent: Fail over to a replica universe in case of unplanned outages
@@ -15,7 +15,7 @@ type: indexpage
showRightNav: true
---

-Use xCluster disaster recovery (DR) to recover from an unplanned outage (failover) or to perform a planned switchover. Planned switchover is commonly used for business continuity and disaster recovery testing, and failback after a failover.
+Use xCluster Disaster Recovery (DR) to recover from an unplanned outage (failover) or to perform a planned switchover. Planned switchover is commonly used for business continuity and disaster recovery testing, and failback after a failover.

A DR configuration consists of the following:

@@ -80,7 +80,7 @@ When [upgrading universes](../../manage-deployments/upgrade-software-install/) i

Note that switchover operations can potentially fail if the DR primary and replica are at different versions.

-## xCluster DR vs xCluster replication
+## xCluster DR vs xCluster Replication

xCluster refers to all YugabyteDB deployments with two or more universes, and has two major flavors:

@@ -95,28 +95,28 @@ xCluster DR targets one specific and common xCluster deployment model: [active-a

- Unidirectional replication means that at any moment in time, replication traffic flows in one direction, and is configured (and enforced) to flow only in one direction.

-- Transactional SQL means that the application is using SQL (and not CQL), and write-ordering is guaranteed. Furthermore, transactions are guaranteed to be atomic.
+- Transactional SQL means that the application is using SQL (and not CQL), and write-ordering is guaranteed for reads on the target. Furthermore, transactions are guaranteed to be atomic.

xCluster DR adds higher-level orchestration workflows to this deployment to make the end-to-end setup, switchover, and failover of the DR primary to DR replica simple and turnkey. This orchestration includes the following:

- During setup, xCluster DR ensures that both universes have identical copies of the data (using backup and restore to synchronize), and configures the DR replica to be read-only.
- During switchover, xCluster DR waits for all remaining changes on the DR primary to be replicated to the DR replica before switching over.
-- During both switchover and failover, xCluster DR also promotes the DR replica from read only to read and write, and demotes (when possible) the original DR primary from read and write to read only.
+- During both switchover and failover, xCluster DR promotes the DR replica from read only to read and write; during switchover, xCluster DR demotes (when possible) the original DR primary from read and write to read only.

-For all deployment models _other than_ active-active single-master, unidirectional replication configured at any moment in time, for transactional YSQL, use xCluster replication directly instead of xCluster DR.
+For all deployment models _other than_ active-active single-master, unidirectional replication configured at any moment in time, for transactional YSQL, use xCluster Replication directly instead of xCluster DR.

-For example, use xCluster replication for the following:
+For example, use xCluster Replication for the following deployments:

-- Multi-master deployments, where you have two application instances, each one writing to a different universe.
-- Active-active single-master deployments in which a single master application can freely write (without coordinating with YugabyteDB for failover or switchover) to either universe, because both accept writes.
+- Multi-master (bidirectional), where you have two application instances, each one writing to a different universe.
+- Active-active single-master, in which a single master application can freely write (without coordinating with YugabyteDB for failover or switchover) to either universe, because both accept writes.
- Non-transactional SQL. That is, SQL without write-order guarantees and without transactional atomicity guarantees.
-- CQL
+- CQL.

-Note that a universe configured for xCluster DR cannot be used for xCluster replication, and vice versa. Although xCluster DR uses xCluster replication under the hood, xCluster DR replication is managed exclusively from the **xCluster Disaster Recovery** tab, and not on the **xCluster Replication** tab.
+Note that a universe configured for xCluster DR cannot be used for xCluster Replication, and vice versa. Although xCluster DR uses xCluster Replication under the hood, xCluster DR replication is managed exclusively from the **xCluster Disaster Recovery** tab, and not on the **xCluster Replication** tab.

-(As an alternative to xCluster DR, you can perform setup, failover, and switchover manually. Refer to [Set up transactional xCluster replication](../../../deploy/multi-dc/async-replication/async-transactional-setup/).)
+(As an alternative to xCluster DR, you can perform setup, failover, and switchover manually. Refer to [Set up transactional xCluster Replication](../../../deploy/multi-dc/async-replication/async-transactional-setup/).)

-For more information on xCluster replication in YugabyteDB, see the following:
+For more information on xCluster Replication in YugabyteDB, see the following:

-- [xCluster replication: overview and architecture](../../../architecture/docdb-replication/async-replication/)
-- [xCluster replication between universes in YugabyteDB](../../../deploy/multi-dc/async-replication/)
+- [xCluster Replication: overview and architecture](../../../architecture/docdb-replication/async-replication/)
+- [xCluster Replication between universes in YugabyteDB](../../../deploy/multi-dc/async-replication/)
@@ -53,8 +53,6 @@ You can restore YugabyteDB universe data from a backup as follows:

If you are restoring a backup to a universe with an existing database of the same name, you must rename the database.

-1. Optionally, specify the number of parallel threads that are allowed to run. This can be any number between `1` and `100`.

1. If you are restoring data from a universe that has tablespaces, select the **Restore tablespaces and data to their respective regions** option.

To restore tablespaces, the target universe must have a topology that matches the source.
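
As a quick pre-restore check (a sketch; the node address is a placeholder), you can list the tablespaces defined on the source:

```sh
# List tablespaces on the source universe; the target's topology must be able
# to host each of them for the restore to succeed.
ysqlsh -h <source-node-address> -c "SELECT spcname FROM pg_tablespace;"
```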
@@ -148,7 +146,7 @@ To perform an advanced restore, on the YugabyteDB Anywhere installation where yo

1. On the **Backups** tab of the universe to which you want to restore, click **Advanced** and choose **Advanced Restore** to display the **Advanced Restore** dialog.

-![Restore advanced](/images/yp/restore-advanced-ycql.png)
+![Restore advanced](/images/yp/restore-advanced-ycql-2.20.png)

1. Choose the type of API.

@@ -182,8 +180,6 @@ To perform an advanced restore, on the YugabyteDB Anywhere installation where yo
1. If the backup involved universes that had [encryption at rest enabled](../../security/enable-encryption-at-rest), then select the KMS configuration to use.
-1. If you are using YugabyteDB Anywhere version prior to 2.16 to manage universes with YugabyteDB version prior to 2.16, you can optionally specify the number of parallel threads that are allowed to run. This can be any number between 1 and 100.
1. If you chose to rename databases/keyspaces, click **Next**, then enter new names for the databases/keyspaces that you want to rename.
1. Click **Restore**.
@@ -31,7 +31,7 @@ Before scheduling a backup of your universe data, create a policy, as follows:

1. Click **Create Scheduled Backup Policy** to open the dialog shown in the following illustration:

-![Create Backup form](/images/yp/scheduled-backup-ycql-1.png)
+![Create Scheduled Backup](/images/yp/scheduled-backup-ysql.png)

1. Provide the backup policy name.

@@ -43,6 +43,10 @@ Before scheduling a backup of your universe data, create a policy, as follows:

1. For YCQL backups, you can choose to back up all tables in the keyspace to which the database belongs or only certain tables. If you choose **Select a subset of tables**, a **Select Tables** dialog opens allowing you to select one or more tables to back up. When finished, click **Confirm**.

+1. For YSQL backups of universes with geo-partitioning, you can choose to back up the tablespaces. Select the **Backup tablespaces information** option.
+
+   If you don't choose to back up tablespaces, the tablespaces are not preserved and their data is backed up to the primary region.

1. Specify the period of time during which the backup is to be retained. Note that there's an option to never delete the backup.

1. Specify the interval between backups or select **Use cron expression (UTC)**.
@@ -57,8 +61,6 @@ Before scheduling a backup of your universe data, create a policy, as follows:

You cannot modify any incremental backup-related property in the schedule; to overwrite any incremental backup property, you have to delete the existing schedule and create a new schedule if needed.

-1. If you are using YugabyteDB Anywhere version prior to 2.16 to manage universes with YugabyteDB version prior to 2.16, you can optionally specify the number of threads that should be available for the backup process.

1. Click **Create**.

Subsequent backups are created based on the value you specified for **Set backup intervals** or **Use cron expression**.
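
For reference, the cron expression uses the standard five-field format, evaluated in UTC. For example, the following (illustrative) expression schedules a daily backup at 02:00 UTC:

```sh
# minute  hour  day-of-month  month  day-of-week
0 2 * * *
```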
@@ -60,7 +60,7 @@ For more information on setting up an AWS service account and security groups, r

## Configure AWS

-Navigate to **Configs > Infrastructure > Amazon Web Services** to see a list of all currently configured AWS providers.
+Navigate to **Integrations > Infrastructure > Amazon Web Services** to see a list of all currently configured AWS providers.

### Create a provider

@@ -65,7 +65,7 @@ For more information on setting up an Azure account and resource groups, refer t

## Configure Azure

-Navigate to **Configs > Infrastructure > Microsoft Azure** to see a list of all currently configured Azure providers.
+Navigate to **Integrations > Infrastructure > Microsoft Azure** to see a list of all currently configured Azure providers.

### Create a provider

@@ -59,7 +59,7 @@ For more information on setting up a GCP service account, refer to [Cloud permis

## Configure GCP

-Navigate to **Configs > Infrastructure > Google Cloud Platform** to see a list of all currently configured GCP providers.
+Navigate to **Integrations > Infrastructure > Google Cloud Platform** to see a list of all currently configured GCP providers.

### Create a provider

@@ -54,7 +54,7 @@ Refer to [To deploy nodes](../../prepare/cloud-permissions/cloud-permissions-nod

## Configure Kubernetes

-Navigate to **Configs > Infrastructure > Managed Kubernetes Service** to see a list of all currently configured Kubernetes providers.
+Navigate to **Integrations > Infrastructure > Managed Kubernetes Service** to see a list of all currently configured Kubernetes providers.

### View and edit providers

@@ -14,7 +14,7 @@ type: docs

After creating the on-premises provider, you can add instances to its free pool of nodes.

-1. Navigate to **Configs > Infrastructure > On-Premises Datacenters**, and select the on-premises configuration you created.
+1. Navigate to **Integrations > Infrastructure > On-Premises Datacenters**, and select the on-premises configuration you created.
1. Select **Instances**.

This displays the configured instance types and instances for the selected provider.
@@ -16,7 +16,7 @@ Before you can deploy universes to private clouds using YugabyteDB Anywhere (YBA

With on-premises providers, VMs are _not_ auto-created by YBA; you must create a provider, manually create your VMs, and then add them to the provider's free pool of nodes.

-Navigate to **Configs > Infrastructure > On-Premises Datacenters** to see a list of all currently configured on-premises providers.
+Navigate to **Integrations > Infrastructure > On-Premises Datacenters** to see a list of all currently configured on-premises providers.

## Create a provider
