diff --git a/.github/vale-styles/Yugabyte/spelling-exceptions.txt b/.github/vale-styles/Yugabyte/spelling-exceptions.txt index e3f58836c149..769589ede120 100644 --- a/.github/vale-styles/Yugabyte/spelling-exceptions.txt +++ b/.github/vale-styles/Yugabyte/spelling-exceptions.txt @@ -59,6 +59,7 @@ backport backported backporting backports +backquote backtrace backtraced backtraces @@ -138,6 +139,7 @@ crosslinked crosslinking crosslinks Crossplane +crosstab CrowdIn CSV Cutover @@ -878,6 +880,8 @@ YBase ycqlsh YouTrack ysqlsh +ysql_dump +ysql_dumpall ytt Yubico Yugabyte diff --git a/docs/content/preview/admin/yb-ctl.md b/docs/content/preview/admin/yb-ctl.md index 11b6cd30ec03..af80f85368b5 100644 --- a/docs/content/preview/admin/yb-ctl.md +++ b/docs/content/preview/admin/yb-ctl.md @@ -157,7 +157,7 @@ For details and examples, see [Create a local cluster with custom flags](#create **Example** -To enable [YSQL authentication](../../secure/enable-authentication/ysql/), you can use the `--tserver_flags` flag to add the `yb-tserver` [`--ysql_enable-auth`](../yb-tserver/#ysql-enable-auth) flag to the `yb-ctl create | start | restart` commands. +To enable [YSQL authentication](../../secure/enable-authentication/ysql/), you can use the `--tserver_flags` flag to add the `yb-tserver` [`--ysql_enable_auth`](../yb-tserver/#ysql-enable-auth) flag to the `yb-ctl create | start | restart` commands. ```sh $./bin/yb-ctl create --tserver_flags "ysql_enable_auth=true" diff --git a/docs/content/preview/admin/yb-ts-cli.md b/docs/content/preview/admin/yb-ts-cli.md index 412a15210961..c9abf606b30d 100644 --- a/docs/content/preview/admin/yb-ts-cli.md +++ b/docs/content/preview/admin/yb-ts-cli.md @@ -106,7 +106,7 @@ yb-ts-cli [ --server_address=: ] compact_tablet ##### count_intents -Print the count of uncommitted intents (or [provisional records](../../../architecture/transactions/distributed-txns/#provisional-records)). Useful for debugging transactional workloads. 
+Print the count of uncommitted intents (or [provisional records](../../architecture/transactions/distributed-txns/#provisional-records)). Helpful for debugging transactional workloads. **Syntax** @@ -118,7 +118,7 @@ yb-ts-cli [ --server_address=: ] count_intents ##### current_hybrid_time -Prints the value of the current [hybrid time](../../../architecture/transactions/single-row-transactions/#hybrid-time-as-an-mvcc-timestamp). +Prints the value of the current [hybrid time](../../architecture/transactions/transactions-overview/#mvcc-using-hybrid-time). **Syntax** @@ -140,7 +140,7 @@ yb-ts-cli [ --server_address=: ] delete_tablet ": ] set_flag [ --force ] * *host*:*port*: The *host* and *port* of the tablet server. Default is `localhost:9100`. * `--force`: Flag to allow a change to a flag that is not explicitly marked as runtime-settable. Note that the change may be ignored on the server or may cause the server to crash, if unsafe values are provided. See [--force](#force). -* *flag*: The `yb-tserver` configuration flag (without the `--` prefix) to be set. See [`yb-tserver`](../../reference/configuration/yb-tserver/#configuration-flags) +* *flag*: The `yb-tserver` configuration flag (without the `--` prefix) to be set. See [`yb-tserver`](../../reference/configuration/yb-tserver/) * *value*: The value to be applied. {{< note title="Important" >}} The `set_flag` command changes the in-memory value of the specified flag, atomically, for a running server and can alter its behavior. **The change does NOT persist across restarts.** -In practice, there are some flags that are runtime safe to change (runtime-settable) and some that are not. For example, the bind address of the server cannot be changed at runtime, since the server binds just once at startup. While most of the flags are probably runtime-settable, you need to review the flags and note in the configuration pages which flags are not runtime-settable. 
(See GitHub issue [#3534](https://github.com/yugabyte/yugabyte-db/issues/3534)). +In practice, there are some flags that are runtime safe to change (runtime-settable) and some that are not. For example, the bind address of the server cannot be changed at runtime, because the server binds just once at startup. While most of the flags are probably runtime-settable, you need to review the flags and note in the configuration pages which flags are not runtime-settable. (See GitHub issue [#3534](https://github.com/yugabyte/yugabyte-db/issues/3534)). One typical operational flow is that you can use this to modify runtime flags in memory and then out of band also modify the configuration file that the server uses to start. This allows for flags to be changed on running servers, without executing a restart of the server. @@ -261,7 +261,7 @@ For an example, see [Return the status of a tablet server](#return-the-status-of ##### refresh_flags -Refresh flags that are loaded from the configuration file. Works on both YB-Master (port 9100) and YB-TServer (port 7100) process. No parameters needed. +Refresh flags that are loaded from the configuration file. Works on both the YB-Master (port 7100) and YB-TServer (port 9100) processes. No parameters needed. Each process needs to have the following command issued, for example, issuing the command on one YB-TServer won't update the flags on the other YB-TServers. @@ -301,7 +301,6 @@ To connect to a cluster with TLS enabled, you must include the `--certs_dir_name Default: `""` - ## Examples ### Return the status of a tablet server diff --git a/docs/content/preview/admin/ycqlsh.md b/docs/content/preview/admin/ycqlsh.md index 83ea5c4e8f05..8390892b8c39 100644 --- a/docs/content/preview/admin/ycqlsh.md +++ b/docs/content/preview/admin/ycqlsh.md @@ -68,7 +68,7 @@ ycqlsh [flags] [host [port]] Where -- `host` is the IP address of the host on which [YB-TServer](../../architecture/concepts/universe/#yb-tserver-process) is run. 
The default is local host at `127.0.0.1`. +- `host` is the IP address of the host on which [YB-TServer](../../architecture/concepts/universe/#component-services) is run. The default is local host at `127.0.0.1`. - `port` is the TCP port at which YB-TServer listens for YCQL connections. The default is `9042`. ### Example diff --git a/docs/content/preview/admin/ysql-dump.md b/docs/content/preview/admin/ysql-dump.md index a60ae17bcb53..17657a26ff68 100644 --- a/docs/content/preview/admin/ysql-dump.md +++ b/docs/content/preview/admin/ysql-dump.md @@ -18,7 +18,7 @@ ysql_dump is a utility for backing up a YugabyteDB database into a plain-text, S ysql_dump only dumps a single database. To backup global objects that are common to all databases in a cluster, such as roles, use [ysql_dumpall](../ysql-dumpall/). -Dumps are output in plain-text, SQL script files. Script dumps are plain-text files containing the SQL statements required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, import it using the [`ysqlsh \i`](../ysqlsh-meta-commands/#-i-filename-include-filename) meta-command. Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications, even on other SQL database products. +Dumps are output in plain-text, SQL script files. Script dumps are plain-text files containing the SQL statements required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, import it using the [`ysqlsh \i`](../ysqlsh-meta-commands/#i-filename-include-filename) meta-command. Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications, even on other SQL database products. While running ysql_dump, you should examine the output for any warnings (printed on standard error). @@ -120,7 +120,7 @@ Dump only the object definitions (schema), not data. 
This option is the inverse of [`-a|--data-only`](#a-data-only). -(Do not confuse this with the [`-n|--schema`](#n-schema-schema-schema) option, which uses the word “schema” in a different meaning.) +(Do not confuse this with the [`-n|--schema`](#n-schema-schema-schema) option, which uses the word "schema" in a different meaning.) To exclude table data for only a subset of tables in the database, see [`--exclude-table-data`](#exclude-table-data). @@ -142,9 +142,9 @@ When `-t|--table` is specified, ysql_dump makes no attempt to dump any other dat #### -T *table*, --exclude-table=*table* -Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for [`-t`](#t-table). [`-T|--exclude-table`](#T-table-exclude-table-table) can be given more than once to exclude tables matching any of several patterns. +Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for [`-t`](#t-table-table-table). `-T|--exclude-table` can be given more than once to exclude tables matching any of several patterns. -When both [`-t|--table`](#t-table-table-table) and `-T|--exclude-table` are given, the behavior is to dump just the tables that match at least one [`-t|--table`](#t-table-table-table) option but no `-T|--exclude-table` options. If `-T|--exclude-table` appears without `-t|--table`, then tables matching `-T|--exclude-table` are excluded from what is otherwise a normal dump. +When both `-t|--table` and `-T|--exclude-table` are given, the behavior is to dump just the tables that match at least one `-t|--table` option but no `-T|--exclude-table` options. If `-T|--exclude-table` appears without `-t|--table`, then tables matching `-T|--exclude-table` are excluded from what is otherwise a normal dump. 
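The `-t`/`-T` selection rules above can be sketched as a small filter. This is an illustrative Python model only: real ysql_dump patterns follow psql's pattern syntax, whereas this sketch substitutes simple shell-style globs, and the table names are made up.

```python
from fnmatch import fnmatchcase

def select_tables(tables, include=(), exclude=()):
    """Mimic ysql_dump's -t/-T selection (simplified): with -t patterns,
    dump only tables matching at least one -t and no -T; with -T alone,
    exclude matches from an otherwise normal dump."""
    selected = []
    for table in tables:
        wanted = (not include) or any(fnmatchcase(table, p) for p in include)
        dropped = any(fnmatchcase(table, p) for p in exclude)
        if wanted and not dropped:
            selected.append(table)
    return selected

tables = ["users", "orders", "orders_audit", "events"]
print(select_tables(tables, include=["orders*"], exclude=["*_audit"]))  # ['orders']
print(select_tables(tables, exclude=["events"]))  # ['users', 'orders', 'orders_audit']
```

As in the text, an include match is overridden by any exclude match, and excludes given without includes subtract from the full table set.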
#### -v, --verbose @@ -288,7 +288,7 @@ This option is never essential, as ysql_dump automatically prompts for a passwor #### --role=*rolename* -Specifies a role name to be used to create the dump. This option causes ysql_dump to issue a `SET ROLE ` statement after connecting to the database. It is useful when the authenticated user (specified by [`-U|--username`](#u-username)) lacks privileges needed by ysql_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy. +Specifies a role name to be used to create the dump. This option causes ysql_dump to issue a `SET ROLE ` statement after connecting to the database. It is useful when the authenticated user (specified by [`-U|--username`](#u-username-username-username)) lacks privileges needed by ysql_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy. ## Environment diff --git a/docs/content/preview/admin/ysql-dumpall.md b/docs/content/preview/admin/ysql-dumpall.md index bbc526aab02a..d08bed24881d 100644 --- a/docs/content/preview/admin/ysql-dumpall.md +++ b/docs/content/preview/admin/ysql-dumpall.md @@ -14,11 +14,11 @@ type: docs ## Overview -ysql_dumpall is a utility for writing out (“dumping”) all YugabyteDB databases of a cluster into one plain-text, SQL script file. The script file contains SQL statements that can be used as input to `ysqlsh` to restore the databases. It does this by calling [ysql_dump](../ysql-dump/) for each database in the YugabyteDB cluster. ysql_dumpall also dumps global objects that are common to all databases, such as database roles. (ysql_dump does not export roles.) 
+ysql_dumpall is a utility for writing out ("dumping") all YugabyteDB databases of a cluster into one plain-text, SQL script file. The script file contains SQL statements that can be used as input to `ysqlsh` to restore the databases. It does this by calling [ysql_dump](../ysql-dump/) for each database in the YugabyteDB cluster. ysql_dumpall also dumps global objects that are common to all databases, such as database roles. (ysql_dump does not export roles.) Because ysql_dumpall reads tables from all databases, you will most likely have to connect as a database superuser in order to produce a complete dump. Also, you will need superuser privileges to execute the saved script in order to be allowed to add roles and create databases. -The SQL script will be written to the standard output. Use the [`-f|--file`](#f-file-filename) option or shell operators to redirect it into a file. +The SQL script will be written to the standard output. Use the [`-f|--file`](#f-filename-file-filename) option or shell operators to redirect it into a file. ysql_dumpall needs to connect multiple times (once per database) to the YugabyteDB cluster. If you use password authentication, it will ask for a password each time. It is convenient to have a `~/.pgpass` file in such cases. @@ -173,7 +173,7 @@ The following command line options control the database connection parameters. Specifies parameters used to connect to the server, as a connection string. -The option is called `-d|--dbname` for consistency with other client applications, but because ysql_dumpall needs to connect to many databases, the database name in the connection string will be ignored. Use the [`-l|--database`](#l-database-database) option to specify the name of the database used for the initial connection, which will dump global objects and discover what other databases should be dumped. 
+The option is called `-d|--dbname` for consistency with other client applications, but because ysql_dumpall needs to connect to many databases, the database name in the connection string will be ignored. Use the [`-l|--database`](#l-dbname-database-database) option to specify the name of the database used for the initial connection, which will dump global objects and discover what other databases should be dumped. #### -h *host*, --host *host* @@ -193,7 +193,7 @@ The username to connect as. #### -w, --no-password -Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `~/.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password. +Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `~/.pgpass` file, the connection attempt will fail. This option can be helpful in batch jobs and scripts where no user is present to enter a password. #### -W, --password @@ -209,7 +209,7 @@ For each database to be dumped, a password prompt will occur. To avoid having to #### --role=*rolename* -Specifies a role name to be used to create the dump. This option causes ysql_dumpall to issue a `SET ROLE ` statement after connecting to the database. It is helpful when the authenticated user (specified by [`-U|--username`](#u-username-username)) lacks privileges needed by ysql_dumpall, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy. +Specifies a role name to be used to create the dump. This option causes ysql_dumpall to issue a `SET ROLE ` statement after connecting to the database. 
It is helpful when the authenticated user (specified by [`-U|--username`](#u-username-username-username)) lacks privileges needed by ysql_dumpall, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy. ## Environment diff --git a/docs/content/preview/admin/ysqlsh-meta-commands.md b/docs/content/preview/admin/ysqlsh-meta-commands.md index aa9ebd505c71..e59ea5a8f406 100644 --- a/docs/content/preview/admin/ysqlsh-meta-commands.md +++ b/docs/content/preview/admin/ysqlsh-meta-commands.md @@ -355,7 +355,7 @@ Unlike most other meta-commands, the entire remainder of the line is always take ##### \f [ string ] -Sets the field separator for unaligned query output. The default is the vertical bar (`|`). It is equivalent to [\pset fieldsep](../ysqlsh-pset-options/#pset-option-value). +Sets the field separator for unaligned query output. The default is the vertical bar (`|`). It is equivalent to [\pset fieldsep](../ysqlsh-pset-options/#fieldsep). ##### \g [ filename ], \g [ |command ] @@ -553,7 +553,7 @@ Sets options affecting the output of query result tables. *option* indicates whi `\pset` without any arguments displays the current status of all printing options. -The *options* are defined in [pset options](../ysqlsh-pset-options/#pset-options). +The *options* are defined in [pset options](../ysqlsh-pset-options/). For examples using `\pset`, see [ysqlsh meta-command examples](../ysqlsh-meta-examples/). 
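The unaligned output format and `fieldsep` behavior mentioned above can be illustrated with a short sketch. The function and sample rows below are hypothetical; real ysqlsh output also involves `recordsep`, optional column headers, and a footer.

```python
def unaligned(rows, fieldsep="|", recordsep="\n"):
    """Render rows the way ysqlsh's unaligned output format does
    (simplified): values joined by fieldsep, rows joined by recordsep."""
    return recordsep.join(fieldsep.join(str(v) for v in row) for row in rows)

rows = [("id", "name"), (1, "alice"), (2, "bob")]
print(unaligned(rows))                 # default separator, as set by \f or \pset fieldsep
print(unaligned(rows, fieldsep=","))   # after \pset fieldsep ','
```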
diff --git a/docs/content/preview/admin/ysqlsh-meta-examples.md b/docs/content/preview/admin/ysqlsh-meta-examples.md index dec093037b43..c8b0d673a3c2 100644 --- a/docs/content/preview/admin/ysqlsh-meta-examples.md +++ b/docs/content/preview/admin/ysqlsh-meta-examples.md @@ -140,7 +140,7 @@ SELECT t1.first as "A", t2.first+100 AS "B", t1.first*(t2.first+100) as "AxB", ## pset -You can display tables in different ways by using the [`\pset`](../ysqlsh-pset-options/#pset-option-value) command: +You can display tables in different ways by using the [`\pset`](../ysqlsh-pset-options/) command: ```sql \pset border 2 diff --git a/docs/content/preview/admin/ysqlsh.md b/docs/content/preview/admin/ysqlsh.md index 3447fd873310..34701b0d336f 100644 --- a/docs/content/preview/admin/ysqlsh.md +++ b/docs/content/preview/admin/ysqlsh.md @@ -259,7 +259,7 @@ Turn on HTML tabular output. This is equivalent to [\pset format html](../ysqlsh ##### -l, --list -List all available databases, then exit. Other non-connection options are ignored. This is similar to the meta-command [`\list`](../ysqlsh-meta-commands/#l-list-pattern). +List all available databases, then exit. Other non-connection options are ignored. This is similar to the meta-command [`\list`](../ysqlsh-meta-commands/#l-list-pattern-patterns). When this option is used, ysqlsh connects to the database `yugabyte`, unless a different database is named on the command line (flag `-d` or non-option argument, possibly using a service entry, but not using an environment variable). 
diff --git a/docs/content/preview/architecture/core-functions/read-path.md b/docs/content/preview/architecture/core-functions/read-path.md index 5f808451184e..692f61f07670 100644 --- a/docs/content/preview/architecture/core-functions/read-path.md +++ b/docs/content/preview/architecture/core-functions/read-path.md @@ -11,19 +11,19 @@ menu: type: docs --- -The read I/O path can be illustrated by an example of a single key read that involves indentifying a tablet leader which then performs a read operation. +The read I/O path can be illustrated by an example of a single key read that involves identifying a tablet leader which then performs a read operation. ## Tablet leader identification The user-issued read request interacts with the YQL query layer via a port with the appropriate API (either YSQL or YCQL). This user request is translated by the YQL layer into an internal key, allowing the YQL layer to find the tablet and the YB-TServers hosting it. The YQL layer performs this by making an RPC call to the YB-Master. The response is cached for future uses. Next, the YQL layer issues the read to the YB-TServer that hosts the leader tablet peer. The read is handled by the leader of the Raft group of the tablet owning the internal key. The leader of the tablet Raft group which handles the read request reads from its DocDB and returns the result to the user. -As described in [Write I/O path](../write-path/#step-1-identify-tablet-leader), the YugabyteDB smart client can route the application requests directly to the correct YB-TServer, avoiding any extra network hops or master lookups. +As described in [Write I/O path](../write-path/), the YugabyteDB smart client can route the application requests directly to the correct YB-TServer, avoiding any extra network hops or master lookups. ## Read operation performed by tablet leader Suppose there is a requirement to read the value where the primary key column `K` has a value `k` from table `T1`. 
The table `T1` has a key column `K` and a value column `V`. The following diagram depicts the read flow: -![read_path_io](/images/architecture/read_path_io.png) +![Read path](/images/architecture/read_path_io.png) The default is strongly-consistent read. diff --git a/docs/content/preview/architecture/docdb-replication/async-replication.md b/docs/content/preview/architecture/docdb-replication/async-replication.md index 4d19c615adf3..9bc18a6c59ae 100644 --- a/docs/content/preview/architecture/docdb-replication/async-replication.md +++ b/docs/content/preview/architecture/docdb-replication/async-replication.md @@ -16,7 +16,7 @@ type: docs ## Synchronous versus asynchronous replication -YugabyteDB's [synchronous replication](../replication/) can be used to tolerate losing entire data centers or regions. It replicates data within a single universe spread across multiple (three or more) data centers so that the loss of one data center does not impact availability, durability, or strong consistency courtesy of the Raft consensus algorithm. +YugabyteDB's [synchronous replication](../replication/) can be used to tolerate losing entire data centers or regions. It replicates data in a single universe spread across multiple (three or more) data centers so that the loss of one data center does not impact availability, durability, or strong consistency courtesy of the Raft consensus algorithm. However, synchronous replication has two important drawbacks when used this way: @@ -75,7 +75,7 @@ Note that these inconsistencies are limited to the tables/rows being written to This mode is an extension of the previous one. In order to restore consistency, we additionally disallow writes on the target universe and cause reads to read as of a time far enough in the past (typically 250 ms) that all the relevant data from the source universe has already been replicated. 
-In particular, we pick the time to read as of, _T_, so that all the writes from all the source transactions that will commit at or before time _T_ have been replicated to the target universe. Put another way, we read as of a time far enough in the past that there cannot be new incoming source commits at or before that time. This restores consistent reads and ensures source universe transaction results become visible atomically. Note that we do *not* wait for any current in flight source-universe transactions. +In particular, we pick the time to read as of, _T_, so that all the writes from all the source transactions that will commit at or before time _T_ have been replicated to the target universe. Put another way, we read as of a time far enough in the past that there cannot be new incoming source commits at or before that time. This restores consistent reads and ensures source universe transaction results become visible atomically. Note that we do _not_ wait for any current in flight source-universe transactions. In order to know when to read as of, we maintain an analog of safe time called _xCluster safe time_, which is the latest time it is currently safe to read as of with xCluster transactional replication in order to guarantee consistency and atomicity. xCluster safe time advances as replication proceeds but lags behind real-time by the current replication lag. This means, for example, if we write at 2 PM in the source universe and read at 2:01 PM in the target universe and replication lag is say five minutes then the read will read as of 1:56 PM and will not see the write. We won't be able to see the write until 2:06 PM in the target universe assuming the replication lag remains at five minutes. 
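The xCluster safe time described above can be sketched as a minimum over per-tablet replication progress: reads must lag the slowest stream so that no source commit at or before the chosen time can still arrive. The dictionary shape and times below are hypothetical stand-ins for what the target universe actually tracks.

```python
def xcluster_safe_time(tablet_safe_times):
    """Latest time it is currently safe to read as of on the target
    universe: the minimum of the per-tablet 'replicated through' times
    (an illustrative sketch, not the actual implementation)."""
    return min(tablet_safe_times.values())

# Hypothetical per-tablet hybrid times (made-up integer values):
streams = {"tablet-1": 1_000_250, "tablet-2": 1_000_100, "tablet-3": 1_000_400}
print(xcluster_safe_time(streams))  # 1000100
```

The value advances as every stream makes progress, so it trails real time by the current replication lag, matching the 2 PM / 1:56 PM example above.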
@@ -116,7 +116,7 @@ Tablet splitting generates a Raft log entry, which is replicated to the target s These are straightforward: when one of these transaction commits, a single Raft log entry is produced containing all of that transaction's writes and its commit time. This entry in turn is used to generate part of a batch of changes when the poller requests changes. -Upon receiving the changes, the poller examines each write to see what key it writes to in order to determine which target tablet covers that part of the table. The poller then forwards the writes to the appropriate tablets. The commit times of the writes are preserved and the writes are marked as _external_, which prevents them from being further replicated by xCluster, whether onwards to an additional cluster or back to the cluster they came from in bidirectional cases. +Upon receiving the changes, the poller examines each write to see what key it writes to in order to determine which target tablet covers that part of the table. The poller then forwards the writes to the appropriate tablets. The commit times of the writes are preserved and the writes are marked as _external_, which prevents them from being further replicated by xCluster, whether onward to an additional cluster or back to the cluster they came from in bidirectional cases. ### Distributed transactions @@ -136,18 +136,17 @@ xCluster safe time is computed for each database by the target-universe master l A source tablet server sends such information when it determines that no active transaction involving that tablet can commit before _T_ and that all transactions involving that tablet that committed before _T_ have application Raft entries that have been previously sent as changes. It also periodically (currently 250 ms) checks for committed transactions that are missing apply Raft entries and generates such entries for them; this helps xCluster safe time advance faster. 
- ## Schema differences xCluster replication does not support replicating between two copies of a table with different schemas. For example, you cannot replicate a table to a version of that table missing a column or with a column having a different type. More subtly, this restriction extends to hidden schema metadata like the assignment of column IDs to columns. Just because two tables show the same schema in YSQL does not mean their schemas are actually identical. Because of this, in practice the target table schema needs to be copied from that of the source table; see [replication bootstrapping](#replication-bootstrapping) for how this is done. -Because of this restriction, xCluster does not need to do a deep translation of row contents (e.g., dropping columns or translating column IDs inside of keys and values) as rows are replicated between universes. Avoiding deep translation simplifies the code and reduces the cost of replication. +Because of this restriction, xCluster does not need to do a deep translation of row contents (for example, dropping columns or translating column IDs inside of keys and values) as rows are replicated between universes. Avoiding deep translation simplifies the code and reduces the cost of replication. ### Supporting schema changes -Today, this is a manual process where the exact same schema change must be manually made on first one side then the other. Replication of the given table automatically pauses while schema differences are detected and resumes once the schemas are the same again. +Currently, this is a manual process where the exact same schema change must be made manually, first on one side and then on the other. Replication of the given table automatically pauses while schema differences are detected and resumes after the schemas are the same again.
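The pause-and-resume behavior can be sketched as a schema comparison that must include hidden metadata such as column-ID assignments, not just the YSQL-visible definition. The schema representation and column IDs below are hypothetical.

```python
def replication_active(source_schema, target_schema):
    """Replication of a table proceeds only while the source and target
    schemas are identical, including hidden metadata (sketch)."""
    return source_schema == target_schema

# Same YSQL-visible shape, but a different column-ID assignment:
source = {"cols": [("id", "int", 1), ("v", "text", 2)]}   # (name, type, column ID)
target = {"cols": [("id", "int", 1), ("v", "text", 3)]}
print(replication_active(source, source))  # True
print(replication_active(source, target))  # False: replication pauses
```

This is why two tables that look identical in YSQL may still not be replicable copies of each other.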
Ongoing work, [#11537](https://github.com/yugabyte/yugabyte-db/issues/11537), will make this automatic: schema changes made on the source universe will automatically be replicated to the target universe and made, allowing replication to continue running without operator intervention. @@ -227,7 +226,7 @@ Because of this applications using active-active should avoid `UNIQUE` indexes a In the future, it may be possible to detect such unsafe constraints and issue a warning, potentially by default. This is tracked in [#11539](https://github.com/yugabyte/yugabyte-db/issues/11539). -Note that if you attempt to insert the same row on both universes at the same time to a table that does not have a primary key then you will end up with two rows with the same data. This is the expected PostgresSQL behavior — tables without primary keys can have multiple rows with the same data. +Note that if you attempt to insert the same row on both universes at the same time to a table that does not have a primary key then you will end up with two rows with the same data. This is the expected PostgreSQL behavior — tables without primary keys can have multiple rows with the same data. ### Non-transactional–mode consistency issues @@ -251,7 +250,7 @@ When the source universe is lost, an explicit decision must be made to switch ov ### Bootstrapping replication -- Currently, it is your responsibility to ensure that a target universe has sufficiently recent updates so that replication can safely resume (for instructions, refer to [Bootstrap a target universe](../../../deploy/multi-dc/async-replication/#bootstrap-a-target-universe)). In the future, bootstrapping the target universe will be automated, which is tracked in [#11538](https://github.com/yugabyte/yugabyte-db/issues/11538). 
+- Currently, it is your responsibility to ensure that a target universe has sufficiently recent updates so that replication can safely resume (for instructions, refer to [Bootstrap a target universe](../../../deploy/multi-dc/async-replication/async-deployment/#bootstrap-a-target-universe)). In the future, bootstrapping the target universe will be automated, which is tracked in [#11538](https://github.com/yugabyte/yugabyte-db/issues/11538). - Bootstrap currently relies on the underlying backup and restore (BAR) mechanism of YugabyteDB. This means it also inherits all of the limitations of BAR. For YSQL, currently the scope of BAR is at a database level, while the scope of replication is at table level. This implies that when you bootstrap a target universe, you automatically bring any tables from the source database to the target database, even the ones that you might not plan to actually configure replication on. This is tracked in [#11536](https://github.com/yugabyte/yugabyte-db/issues/11536). ### DDL changes diff --git a/docs/content/preview/architecture/docdb/persistence.md b/docs/content/preview/architecture/docdb/persistence.md index 37dd744c9f8e..91aec4d25dbb 100644 --- a/docs/content/preview/architecture/docdb/persistence.md +++ b/docs/content/preview/architecture/docdb/persistence.md @@ -19,7 +19,7 @@ Once data is replicated using Raft across a majority of the YugabyteDB tablet-pe This storage layer is a persistent key-to-object (or to-document) store. 
The following diagram depicts the storage model where not every element is always present: -![cql_row_encoding](/images/architecture/cql_row_encoding.png) +![Storage model](/images/architecture/cql_row_encoding.png) ### DocDB key diff --git a/docs/content/preview/architecture/transactions/concurrency-control.md b/docs/content/preview/architecture/transactions/concurrency-control.md index c36514c01897..5700ea1983a0 100644 --- a/docs/content/preview/architecture/transactions/concurrency-control.md +++ b/docs/content/preview/architecture/transactions/concurrency-control.md @@ -300,7 +300,7 @@ commit; ### Best-effort internal retries for first statement in a transaction -Note that we see the error message `All transparent retries exhausted` in the preceding example because if the transaction T1, when executing the first statement, finds another concurrent conflicting transaction with equal or higher priority, then T1 will perform a few retries with exponential backoff before giving up in anticipation that the other transaction will be done in some time. The number of retries are configurable by the `yb_max_query_layer_retries` session variable and the exponential backoff parameters are the same as the ones described in [Performance tuning](../read-committed/#performance-tuning). +Note that we see the error message `All transparent retries exhausted` in the preceding example because if the transaction T1, when executing the first statement, finds another concurrent conflicting transaction with equal or higher priority, then T1 will perform a few retries with exponential backoff before giving up in anticipation that the other transaction will be done in some time. The number of retries is configurable by the `yb_max_query_layer_retries` YSQL configuration parameter and the exponential backoff parameters are the same as the ones described in [Performance tuning](../read-committed/#performance-tuning). 
Each retry will use a newer snapshot of the database in anticipation that the conflicts might not occur. This is done because if the read time of the new snapshot is higher than the commit time of the earlier conflicting transaction T2, the conflicts with T2 would essentially be voided as T1 and T2 would no longer be "concurrent". @@ -1223,19 +1223,20 @@ commit; ### Versioning and upgrades -When turning `enable_wait_queues` on or off, or during a rolling restart, where during an update the flag could be on on nodes with a more recent version, if some nodes have wait-on-conflict behavior enabled and some don’t, you will experience mixed (but still correct) behavior. +When turning `enable_wait_queues` on or off, or during a rolling restart where the flag could be enabled on nodes with a more recent version, if some nodes have wait-on-conflict behavior enabled and some don't, you will experience mixed (but still correct) behavior. A mix of both fail-on-conflict and wait-on-conflict traffic results in the following additional YSQL-specific semantics: - If a transaction using fail-on-conflict encounters transactions that have conflicting writes - - - If there is even a single conflicting transaction that uses wait-on-conflict, the transaction aborts. - - Otherwise, YugabyteDB uses the regular [fail-on-conflict semantics](#fail-on-conflict), which is to abort the lower priority transaction. + - If there is even a single conflicting transaction that uses wait-on-conflict, the transaction aborts. + - Otherwise, YugabyteDB uses the regular [fail-on-conflict semantics](#fail-on-conflict), which is to abort the lower priority transaction. - If a transaction using wait-on-conflict encounters transactions that have conflicting writes, it waits for all conflicting transactions to end (including any using fail-on-conflict semantics).
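The retry knob mentioned in the concurrency-control section above is session-scoped and can be adjusted per connection; a minimal sketch (the value `60` is an illustrative assumption, not a recommendation):

```sql
-- Allow more best-effort internal retries for the first statement
-- in a transaction before giving up with a serialization error.
SET yb_max_query_layer_retries = 60;
SHOW yb_max_query_layer_retries;
```

Because the parameter is set per session, it affects only transactions issued on the current connection.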
### Fairness When multiple requests are waiting on the same resource in the wait queue, and that resource becomes available, YugabyteDB generally uses the following process to decide in which order those waiting requests should get access to the contentious resource: -1. Sort all waiting requests based on the _transaction start time_, with requests from the oldest transactions first + +1. Sort all waiting requests based on the _transaction start time_, with requests from the oldest transactions first. 2. Resume requests in order: 1. Re-run conflict resolution and acquire locks on the requested resource. 2. If the resource is no longer available because another waiting request acquired it, re-enter the wait queue. diff --git a/docs/content/preview/architecture/transactions/distributed-txns.md b/docs/content/preview/architecture/transactions/distributed-txns.md index 06242b1ed5be..43332d804c1b 100644 --- a/docs/content/preview/architecture/transactions/distributed-txns.md +++ b/docs/content/preview/architecture/transactions/distributed-txns.md @@ -39,7 +39,7 @@ There are three types of RocksDB key-value pairs corresponding to provisional re DocumentKey, SubKey1, ..., SubKeyN, LockType, ProvisionalRecordHybridTime -> TxnId, Value ``` -The `DocumentKey`, `SubKey1`, ..., `SubKey` components exactly match those in DocDB's [encoding](../../docdb/persistence/#mapping-docdb-documents-to-rocksdb) of paths to a particular subdocument (for example, a row, a column, or an element in a collection-type column) to RocksDB keys. +The `DocumentKey`, `SubKey1`, ..., `SubKeyN` components exactly match those in DocDB's [encoding](../../docdb/persistence/#encoding-documents) of paths to a particular subdocument (for example, a row, a column, or an element in a collection-type column) to RocksDB keys. Each of these primary provisional records also acts as a persistent revocable lock.
There are some similarities as well as differences when compared to [blocking in-memory locks](../isolation-levels/) maintained by every tablet's lock manager. These persistent locks can be of any of the same types as for in-memory leader-only locks (SI write, serializable write and read, and a separate strong and weak classification for handling nested document changes). However, unlike the leader-side in-memory locks, the locks represented by provisional records can be revoked by another conflicting transaction. The conflict resolution subsystem makes sure that for any two conflicting transactions, at least one of them is aborted. diff --git a/docs/content/preview/benchmark/ycsb-jdbc.md b/docs/content/preview/benchmark/ycsb-jdbc.md index d16439afb1ea..62ecb58d9bdc 100644 --- a/docs/content/preview/benchmark/ycsb-jdbc.md +++ b/docs/content/preview/benchmark/ycsb-jdbc.md @@ -202,4 +202,4 @@ When run on a 3-node cluster of `c5.4xlarge` AWS instances (16 cores, 32GB of RA | Workload E | 16,642 | 15ms scan | Not applicable | | Workload F | 29,500 | 2ms | 15ms read-modify-write | -For an additional example, refer to [Example: YCSB workload with automatic tablet splitting example](../../architecture/docdb-sharding/tablet-splitting/#example-ycsb-workload-with-automatic-tablet-splitting). +For an additional example, refer to [YCSB workload with automatic tablet splitting example](../../architecture/docdb-sharding/tablet-splitting/#ycsb-workload-with-automatic-tablet-splitting-example).
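The conflict-resolution guarantee described earlier (for any two conflicting transactions, at least one is aborted) can be observed from two concurrent ysqlsh sessions; a sketch only, where the table `t` and its row are assumptions for illustration:

```sql
-- Session 1:
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE t SET v = v + 1 WHERE k = 1;  -- writes a provisional record (persistent lock) on k = 1

-- Session 2, concurrently:
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE t SET v = v + 2 WHERE k = 1;  -- conflicts with session 1's provisional record;
                                     -- conflict resolution aborts one of the two transactions
```

Which of the two transactions is aborted depends on the concurrency-control policy in effect (fail-on-conflict priorities versus wait-on-conflict queues).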
There are two types of docs issues - infrastructure enhancements, and adding or modifying content. You can [follow the steps outlined here](docs/) to get set up and make a contribution. +YugabyteDB documentation uses the Hugo framework. There are two types of docs issues: infrastructure enhancements, and adding or modifying content. You can [follow the steps outlined here](docs/) to get set up and make a contribution. ## Find an issue diff --git a/docs/content/preview/contribute/docs/all-page-elements.md b/docs/content/preview/contribute/docs/all-page-elements.md index eed347afcb5e..fd4c6bf5d239 100644 --- a/docs/content/preview/contribute/docs/all-page-elements.md +++ b/docs/content/preview/contribute/docs/all-page-elements.md @@ -91,7 +91,7 @@ To build and run the application, do the following: The application needs to establish a connection to the YugabyteDB cluster. To do this: - - Set the following configuration parameters: + - Set the following connection parameters: - **host** - the host name of your YugabyteDB cluster. To obtain a YugabyteDB Managed cluster host name, sign in to YugabyteDB Managed, select your cluster on the **Clusters** page, and click **Settings**. The host is displayed under **Connection Parameters**. - **port** - the port number that will be used by the JDBC driver (the default YugabyteDB YSQL port is 5433).
If the referenced syntax rule is included in the same [_syntax diagram set_](#syntax-diagram-set), then the name of the syntax rule in the referring [_syntax diagram_](#syntax-diagram) becomes a link to the syntax rule in that same syntax diagram set. Otherwise the generated link target of the referring rule is in the [_grammar diagrams file_](#grammar-diagrams-file). The way that this link is spelled depends on the location, in the [_ysql directory_](#ysql-directory) tree, of the `.md` file that includes the generated syntax diagram. +Suppose that a syntax rule includes a reference to another syntax rule. If the referenced syntax rule is included in the same [_syntax diagram set_](#syntax-diagram-set), then the name of the syntax rule in the referring [_syntax diagram_](#syntax-diagram) becomes a link to the syntax rule in that same syntax diagram set. Otherwise the generated link target of the referring rule is in the [_grammar diagrams file_](#grammar-diagrams-file). The way that this link is spelled depends on the location of the `.md` file that includes the generated syntax diagram. In the case you have multiple syntax diagram sets on the same page and would like to cross-reference each other on the same page, specify the local rules that need to be cross referenced as comma separated values in the `localrefs` argument of the `ebnf` shortcode. For example, diff --git a/docs/content/preview/deploy/checklist.md b/docs/content/preview/deploy/checklist.md index ac00c60cd868..b151d12654d2 100644 --- a/docs/content/preview/deploy/checklist.md +++ b/docs/content/preview/deploy/checklist.md @@ -170,7 +170,7 @@ YugabyteDB can run on a number of public clouds. - Use the N2 high-CPU instance family. As a second choice, the N2 standard instance family can be used. - Recommended instance types are `n2-highcpu-16` and `n2-highcpu-32`. 
-- [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) are the preferred storage option, as they provide improved performance over attached disks, but the data is not replicated and can be lost if the node fails. This option is ideal for databases such as YugabyteDB that manage their own replication and can guarantee high availability (HA). For more details on these tradeoffs, refer to [Local vs remote SSDs](../../deploy/kubernetes/best-practices/#local-vs-remote-ssds). +- [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) are the preferred storage option, as they provide improved performance over attached disks, but the data is not replicated and can be lost if the node fails. This option is ideal for databases such as YugabyteDB that manage their own replication and can guarantee high availability (HA). For more details on these tradeoffs, refer to [Local vs remote SSDs](../../deploy/kubernetes/best-practices/#local-versus-remote-ssds). - Each local SSD is 375 GB in size, but you can attach up to eight local SSD devices for 3 TB of total local SSD storage space per instance. - As a second choice, [remote persistent SSDs](https://cloud.google.com/compute/docs/disks/#pdspecs) perform well. Make sure the size of these SSDs are at least 250GB in size, larger if more IOPS are needed: - The number of IOPS scale automatically in proportion to the size of the disk. @@ -179,6 +179,6 @@ YugabyteDB can run on a number of public clouds. ### Azure - Use v5 options with 16 vCPU in the Storage Optimized (preferred) or General Purpose VM types. For a busy YSQL instance, use 32 vCPU. -- For an application that cannot tolerate P99 spikes, local SSDs (Storage Optimized instances) are the preferred option. For more details on the tradeoffs, refer to [Local vs remote SSDs](../../deploy/kubernetes/best-practices/#local-vs-remote-ssds). 
+- For an application that cannot tolerate P99 spikes, local SSDs (Storage Optimized instances) are the preferred option. For more details on the tradeoffs, refer to [Local vs remote SSDs](../../deploy/kubernetes/best-practices/#local-versus-remote-ssds). - If local SSDs are not available, use ultra disks to eliminate expected latency on Azure premium disks. Refer to the Azure [disk recommendations](https://azure.microsoft.com/en-us/blog/azure-ultra-disk-storage-microsoft-s-service-for-your-most-i-o-demanding-workloads/) and Azure documentation on [disk types](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types) for databases. - Turn on Accelerated Networking, and use VNet peering for multiple VPCs and connectivity to object stores. diff --git a/docs/content/preview/deploy/manual-deployment/start-tservers.md b/docs/content/preview/deploy/manual-deployment/start-tservers.md index ac31962edb65..d1c0e8c12c31 100644 --- a/docs/content/preview/deploy/manual-deployment/start-tservers.md +++ b/docs/content/preview/deploy/manual-deployment/start-tservers.md @@ -101,7 +101,7 @@ Verify by running the following: $ curl -s http://:7000/cluster-config ``` -Confirm that the output looks similar to the following, with `min_num_replicas` set to `1` for each AZ: +Confirm that the output looks similar to the following, with `min_num_replicas` set to 1 for each AZ: ```output.json replication_info { diff --git a/docs/content/preview/deploy/multi-dc/3dc-deployment.md b/docs/content/preview/deploy/multi-dc/3dc-deployment.md index 61942fc11dd7..a5b6c6bfbed0 100644 --- a/docs/content/preview/deploy/multi-dc/3dc-deployment.md +++ b/docs/content/preview/deploy/multi-dc/3dc-deployment.md @@ -84,7 +84,7 @@ $ ./bin/yb-tserver \ >& /home/centos/disk1/yb-tserver.out & ``` -Note that all of the master addresses have to be provided using the [`--tserver_master_addrs`](../../../reference/configuration/yb-master/#tserver-master-addrs) flag. 
Replace the [`--rpc_bind_addresses`](../../../reference/configuration/yb-tserver/#rpc-bind-addresses) value with the private IP address of the host as well as the set the `placement_cloud`,`placement_region`, and `placement_zone` values appropriately. +Note that all of the master addresses have to be provided using the [`--tserver_master_addrs`](../../../reference/configuration/yb-tserver/#tserver-master-addrs) flag. Replace the [`--rpc_bind_addresses`](../../../reference/configuration/yb-tserver/#rpc-bind-addresses) value with the private IP address of the host, and set the `placement_cloud`, `placement_region`, and `placement_zone` values appropriately. As with the YB-Masters, set the [`--leader_failure_max_missed_heartbeat_periods`](../../../reference/configuration/yb-tserver/#leader-failure-max-missed-heartbeat-periods) flag to `10` to account for higher RPC latencies. @@ -109,7 +109,7 @@ Verify by running the following: $ curl -s http://:7000/cluster-config ``` -Confirm that the output looks similar to the following with [`--min_num_replicas`](../../../reference/configuration/yb-tserver/#min-num-replicas) set to `1` for each AZ: +Confirm that the output looks similar to the following with `min_num_replicas` set to 1 for each AZ: ```yaml replication_info {
-```json +```yaml replication_info { live_replicas { num_replicas: 3 diff --git a/docs/content/preview/develop/best-practices-ycql.md b/docs/content/preview/develop/best-practices-ycql.md index e8cb7441e7ff..e0204b9bda4e 100644 --- a/docs/content/preview/develop/best-practices-ycql.md +++ b/docs/content/preview/develop/best-practices-ycql.md @@ -90,7 +90,7 @@ Collections are designed for storing small sets of values that are not expected ## Collections with many elements -Each element inside a collection ends up as a [separate key value](../../architecture/docdb/persistence#ycql-collection-type-example) in DocDB adding per-element overhead. +Each element inside a collection ends up as a [separate key value](../../architecture/docdb/persistence/#collection-type-examples-for-ycql) in DocDB adding per-element overhead. If your collections are immutable, or you update the whole collection in full, consider using the `JSONB` data type. An alternative would also be to use ProtoBuf or FlatBuffers and store the serialized data in a `BLOB` column. diff --git a/docs/content/preview/develop/best-practices-ysql.md b/docs/content/preview/develop/best-practices-ysql.md index b9aa24d5eab0..89cb5cd011e1 100644 --- a/docs/content/preview/develop/best-practices-ysql.md +++ b/docs/content/preview/develop/best-practices-ysql.md @@ -255,7 +255,7 @@ For consistent latency or performance, it is recommended to size columns in the [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. {{}} -Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate#truncate). 
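The `JSONB` alternative suggested above can be sketched in YCQL as follows (the keyspace, table, and column names are illustrative assumptions):

```sql
ycqlsh> CREATE KEYSPACE IF NOT EXISTS store;
ycqlsh> CREATE TABLE store.users (id INT PRIMARY KEY, attrs JSONB);
ycqlsh> INSERT INTO store.users (id, attrs) VALUES (1, '{"tags":["a","b"],"score":10}');
```

The whole document is written and rewritten as a single value, avoiding the per-element key overhead that collection types incur in DocDB.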
+Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/). {{}} ## Number of tables and indexes diff --git a/docs/content/preview/develop/build-global-apps/_index.md b/docs/content/preview/develop/build-global-apps/_index.md index 084d9b1653f6..5b34ce5bacec 100644 --- a/docs/content/preview/develop/build-global-apps/_index.md +++ b/docs/content/preview/develop/build-global-apps/_index.md @@ -68,7 +68,7 @@ Depending on whether the application should read the latest data or stale data, ## Pick the right pattern -Use the following matrix to choose a [design pattern](#design-patterns-explained), based on the architectures described in the preceding section. +Use the following matrix to choose a [design pattern](#design-patterns), based on the architectures described in the preceding section. | Pattern Type | Follow the Application | Geo-Local Dataset | | ---------------------------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | diff --git a/docs/content/preview/develop/build-global-apps/active-active-single-master.md b/docs/content/preview/develop/build-global-apps/active-active-single-master.md index 6e2b8f492fc3..c9cfa5de7783 100644 --- a/docs/content/preview/develop/build-global-apps/active-active-single-master.md +++ b/docs/content/preview/develop/build-global-apps/active-active-single-master.md @@ -46,7 +46,7 @@ Writes still have to go to the primary cluster in `us-west`. 
## Transactional consistency -You can preserve and guarantee transactional atomicity and global ordering when propagating change data from one universe to another by adding the `transactional` flag when setting up the [xCluster replication](../../../deploy/multi-dc/async-replication-transactional/#set-up-unidirectional-transactional-replication). This is the default behavior. +You can preserve and guarantee transactional atomicity and global ordering when propagating change data from one universe to another by adding the `transactional` flag when setting up the [xCluster replication](../../../deploy/multi-dc/async-replication/async-transactional-setup/). This is the default behavior. You can relax the transactional atomicity guarantee for lower replication lag. diff --git a/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md b/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md index e5b184415f3a..e0269783fda3 100644 --- a/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md +++ b/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md @@ -143,12 +143,12 @@ For an example application and try it out yourself, see [Designing a Retry Mecha ## Tuning for high performance -All applications need to be tuned to get the best performance. YugabyteDB supports various constructs and [multiple settings](../transactions-performance-ysql/#session-level-settings) that can be adopted and tuned to your needs. Adopting the correct constructs in the right scenarios can immensely improve the performance of your application. Some examples are: +All applications need to be tuned to get the best performance. YugabyteDB supports various constructs and [multiple settings](../transactions-performance-ysql/) that can be adopted and tuned to your needs. Adopting the correct constructs in the right scenarios can immensely improve the performance of your application. 
Some examples are: - Convert a multi-statement transaction affecting a single row into a [fast-path](../transactions-performance-ysql/#fast-single-row-transactions) transaction. - [Avoid long waits](../transactions-performance-ysql/#avoid-long-waits) with the right timeouts. - [Minimize conflict errors](../transactions-performance-ysql/#minimize-conflict-errors) with `ON CONFLICT` clause. -- [Uninterrupted long scans](../transactions-performance-ysql/#long-scans-and-batch-jobs) +- [Uninterrupted long scans](../transactions-performance-ysql/#large-scans-and-batch-jobs) - [Minimize round trips](../transactions-performance-ysql/#stored-procedures-minimize-round-trips) with stored procedures. {{}} @@ -181,7 +181,7 @@ These settings impact all transactions in the current session only. ##### default_transaction_read_only -Turn this setting `ON/TRUE/1` to make all the transactions in the current session read-only. This is helpful when you want to run reports or set up [follower reads](../transactions-performance-ysql/#read-from-followers). +Turn this setting `ON/TRUE/1` to make all the transactions in the current session read-only. This is helpful when you want to run reports or set up [follower reads](../transactions-global-apps/#read-from-followers). ```plpgsql SET default_transaction_read_only = TRUE; diff --git a/docs/content/preview/develop/learn/transactions/transactions-performance-ysql.md b/docs/content/preview/develop/learn/transactions/transactions-performance-ysql.md index 2b97f2c4c2a9..abcfddc51e23 100644 --- a/docs/content/preview/develop/learn/transactions/transactions-performance-ysql.md +++ b/docs/content/preview/develop/learn/transactions/transactions-performance-ysql.md @@ -63,6 +63,18 @@ INSERT INTO txndemo VALUES (1,10) Now, the server automatically updates the row when it fails to insert. Again, this results in one less round trip between the application and the server. 
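The upsert just described can be sketched with an `ON CONFLICT` clause as follows (the column names `k` and `v` for `txndemo` are assumptions about its schema):

```sql
INSERT INTO txndemo (k, v) VALUES (1, 10)
    ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;
```

When the insert conflicts with an existing row, the server updates that row in place, sparing the application a separate round trip for the update.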
+## Avoid long waits + +In the [READ COMMITTED](../../../../architecture/transactions/read-committed/) isolation level, clients do not need to retry or handle serialization errors. During conflicts, the server retries indefinitely based on the [retry options](../../../../architecture/transactions/read-committed/#performance-tuning) and [Wait-On-Conflict](../../../../architecture/transactions/concurrency-control/#wait-on-conflict) policy. + +To avoid getting stuck in a wait loop because of starvation, you should set a reasonable statement timeout, similar to the following: + +```plpgsql +SET statement_timeout = '10s'; +``` + +This ensures that the transaction is not blocked for more than 10 seconds. + ## Handle idle applications When an application takes a long time between two statements in a transaction or just hangs, it could be holding the locks on the [provisional records](../../../../architecture/transactions/distributed-txns/#provisional-records) during that period. It would hit a timeout if the `idle_in_transaction_session_timeout` is set accordingly. After that timeout is reached, the connection is disconnected and the client would have to reconnect. The typical error message would be: diff --git a/docs/content/preview/develop/multi-cloud/multicloud-migration.md b/docs/content/preview/develop/multi-cloud/multicloud-migration.md index 93a8a69d5726..adaeecd30b0a 100644 --- a/docs/content/preview/develop/multi-cloud/multicloud-migration.md +++ b/docs/content/preview/develop/multi-cloud/multicloud-migration.md @@ -58,7 +58,7 @@ When finished, note down the universe-uuids of the `source` and `target` univers After the GCP universe has been set up, you need to populate the data from your AWS universe. This is typically referred to as **Bootstrapping**. {{}} -For detailed instructions, see [Bootstrap a target universe](../../../deploy/multi-dc/async-replication/#bootstrap-a-target-universe).
+For detailed instructions, see [Bootstrap a target universe](../../../deploy/multi-dc/async-replication/async-deployment/#bootstrap-a-target-universe). {{}} The basic flow of bootstrapping is as follows: @@ -79,7 +79,7 @@ This ensures that the current data in your AWS universe is correctly copied over After your data has been pre-populated from the AWS universe to the GCP universe, you need to set up the replication stream from the AWS to the GCP universe. {{}} -For detailed instructions on how to set up replication, see [Set up unidirectional replication](../../../deploy/multi-dc/async-replication/#set-up-unidirectional-replication). +For detailed instructions on how to set up replication, see [Set up unidirectional replication](../../../deploy/multi-dc/async-replication/async-deployment/#set-up-unidirectional-replication). {{}} A simple way to set up replication is as follows: @@ -99,7 +99,7 @@ Any data changes to the AWS universe are automatically applied to the GCP univer After the new universe has caught up with the data from the old universe, you can switch over to the new universe. {{}} -For detailed instructions on how to do planned switchover, see [Planned switchover](../../../deploy/multi-dc/async-replication-transactional/#switchover-planned-failover). +For detailed instructions on how to do planned switchover, see [Planned switchover](../../../deploy/multi-dc/async-replication/async-transactional-switchover/#switchover-planned-failover). 
{{}} The basic flow of switchover is as follows: diff --git a/docs/content/preview/develop/realworld-apps/ecommerce-app.md b/docs/content/preview/develop/realworld-apps/ecommerce-app.md index afd2eca454f9..8df679bd52dd 100644 --- a/docs/content/preview/develop/realworld-apps/ecommerce-app.md +++ b/docs/content/preview/develop/realworld-apps/ecommerce-app.md @@ -73,7 +73,7 @@ The sections below describe the architecture / data model for the various featur ### Product catalog management -The inventory of products is modeled as a table using the Cassandra-compatible YCQL API. Each product has a unique `id` which is an integer in this example. The product `id` is the [primary key partition column](../../learn/data-modeling/#partition-key-columns-required). This ensures that all the data for one product (identified by its product `id`) is colocated in the database. +The inventory of products is modeled as a table using the Cassandra-compatible YCQL API. Each product has a unique `id` which is an integer in this example. The product `id` is the [primary key partition column](../../learn/data-modeling-ycql/#partition-key-columns-required). This ensures that all the data for one product (identified by its product `id`) is colocated in the database. 
```sql ycqlsh> DESCRIBE TABLE yugastore.products; diff --git a/docs/content/preview/yugabyte-platform/create-deployments/create-universe-multi-zone-kubernetes.md b/docs/content/preview/yugabyte-platform/create-deployments/create-universe-multi-zone-kubernetes.md index 6dd4a79adc75..598968b07827 100644 --- a/docs/content/preview/yugabyte-platform/create-deployments/create-universe-multi-zone-kubernetes.md +++ b/docs/content/preview/yugabyte-platform/create-deployments/create-universe-multi-zone-kubernetes.md @@ -33,7 +33,7 @@ YugabyteDB Anywhere allows you to create a universe in one geographic region acr Before you start creating a universe, ensure that you performed steps described in [Configure the Kubernetes cloud provider](/preview/yugabyte-platform/configure-yugabyte-platform/set-up-cloud-provider/kubernetes/). The following illustration shows the **Managed Kubernetes Service configs** list that you should be able to see if you use YugabyteDB Anywhere to navigate to **Configs > Cloud Provider Configuration > Infrastructure > Managed Kubernetes Service**: -![img](/images/yb-platform/kubernetes-config1.png) +![Managed Kubernetes Service](/images/yb-platform/kubernetes-config1.png) Note that the cloud provider example used in this document has a cluster-level admin access. @@ -53,9 +53,9 @@ Complete the rest of the **Cloud Configuration** section as follows: - Provide the value in the **Pods** field. This value should be equal to or greater than the replication factor. The default value is 3. When this value is supplied, the pods (also known as nodes) are automatically placed across all the availability zones to guarantee the maximum availability. -- In the **Replication Factor** field, define the replication factor, as per the following illustration:
+- In the **Replication Factor** field, define the replication factor, as per the following illustration: - ![img](/images/yb-platform/kubernetes-config55.png) + ![Replication Factor field](/images/yb-platform/kubernetes-config55.png) ### Configure instance @@ -103,7 +103,7 @@ Optionally, use the **Helm Overrides** section, as follows: - Click **Add Kubernetes Overrides** to open the **Kubernetes Overrides** dialog shown in the following illustration: - ![img](/images/yb-platform/kubernetes-config66.png) + ![Kubernetes Overrides](/images/yb-platform/kubernetes-config66.png) - Using the YAML format, which is sensitive to spacing and indentation, specify the universe-level overrides for YB-Master and YB-TServer, as per the following example: @@ -134,7 +134,7 @@ The final step is to click **Create** and wait for the YugabyteDB cluster to app The following illustration shows the universe in its pending state: -![img](/images/yb-platform/kubernetes-config10.png) +![Pending universe](/images/yb-platform/kubernetes-config10.png) ## Examine the universe and connect to nodes @@ -142,13 +142,13 @@ The universe view consists of several tabs that provide different information ab The following illustration shows the **Overview** tab of a newly-created universe: -![img](/images/yb-platform/kubernetes-config11.png) +![Overview](/images/yb-platform/kubernetes-config11.png) If you have defined Helm overrides for your universe, you can modify them at any time through **Overview** by clicking **Actions > Edit Kubernetes Overrides**. 
The following illustration shows the **Nodes** tab that allows you to see a list of nodes with their addresses: -![img](/images/yb-platform/kubernetes-config12.png) +![Nodes](/images/yb-platform/kubernetes-config12.png) You can create a connection to a node as follows: diff --git a/docs/content/stable/develop/learn/transactions/acid-transactions-ysql.md b/docs/content/stable/develop/learn/transactions/acid-transactions-ysql.md index 0ed944ca156d..7e53468c0e4c 100644 --- a/docs/content/stable/develop/learn/transactions/acid-transactions-ysql.md +++ b/docs/content/stable/develop/learn/transactions/acid-transactions-ysql.md @@ -145,7 +145,7 @@ All applications need to be tuned to get the best performance. YugabyteDB suppor - Convert a multi-statement transaction affecting a single row into a [fast-path](../transactions-performance-ysql/#fast-single-row-transactions) transaction. - [Avoid long waits](../transactions-performance-ysql/#avoid-long-waits) with the right timeouts. - [Minimize conflict errors](../transactions-performance-ysql/#minimize-conflict-errors) with `ON CONFLICT` clause. -- [Uninterrupted long scans](../transactions-performance-ysql/#long-scans-and-batch-jobs) +- [Uninterrupted long scans](../transactions-performance-ysql/#large-scans-and-batch-jobs) - [Minimize round trips](../transactions-performance-ysql/#stored-procedures-minimize-round-trips) with stored procedures. {{}} @@ -178,7 +178,7 @@ These settings impact all transactions in the current session only. ##### default_transaction_read_only -Turn this setting `ON/TRUE/1` to make all the transactions in the current session read-only. This is helpful when you want to run reports or set up [follower reads](../transactions-performance-ysql/#read-from-followers). +Turn this setting `ON/TRUE/1` to make all the transactions in the current session read-only. This is helpful when you want to run reports or set up [follower reads](../transactions-global-apps/#read-from-followers). 
```plpgsql SET default_transaction_read_only = TRUE;