[docs] Fix a few typos (#11808)
tverona1 authored Mar 21, 2022
1 parent fe4e945 commit b878c88
Showing 16 changed files with 20 additions and 20 deletions.
2 changes: 1 addition & 1 deletion docs/content/latest/architecture/concepts/yb-master.md
@@ -26,7 +26,7 @@ Note that the YB-Master is highly available as it forms a Raft group with its pe

### Coordination of universe-wide admin operations

- Examples of such operations are user-issued `CREATE TABLE`, `ALTER TABLE`, and `DROP TABLE` requests, as well as a creating a backup of a table. The YB-Master performs these operations with a guarantee that the operation is propagated to all tablets irrespective of the state of the YB-TServers hosting these tablets. This is essential because a YB-TServer failure while one of these universe-wide operations is in progress cannot affect the outcome of the operation by failing to apply it on some tablets.
+ Examples of such operations are user-issued `CREATE TABLE`, `ALTER TABLE`, and `DROP TABLE` requests, as well as creating a backup of a table. The YB-Master performs these operations with a guarantee that the operation is propagated to all tablets irrespective of the state of the YB-TServers hosting these tablets. This is essential because a YB-TServer failure while one of these universe-wide operations is in progress cannot affect the outcome of the operation by failing to apply it on some tablets.

### Storage of system metadata

2 changes: 1 addition & 1 deletion docs/content/latest/architecture/concepts/yb-tserver.md
@@ -22,7 +22,7 @@ Below is a pictorial illustration of this in the case of a 4-node YugabyteDB uni

![tserver_overview](/images/architecture/tserver_overview.png)

- The tablet-peers corresponding to each tablet hosted on different YB-TServers form a Raft group and replicate data between each other. The system shown above comprises of 16 independent Raft groups. The details of this replication are covered in a previous section on replication.
+ The tablet-peers corresponding to each tablet hosted on different YB-TServers form a Raft group and replicate data between each other. The system shown above comprises of 16 independent Raft groups. The details of this replication are covered in another section on replication.

Within each YB-TServer, there is a lot of cross-tablet intelligence built in to maximize resource efficiency. Below are just some of the ways the YB-TServer coordinates operations across tablets hosted by it:

@@ -113,14 +113,14 @@ Since 2DC replication is done asynchronously and by replicating the WAL (and the

#### Kubernetes
- Technically replication can be setup with kubernetes deployed universes. However, the source and sink must be able to communicate by directly referencing the pods in the other universe. In practice, this either means that the two universes must be part of the same kubernetes cluster, or that two kubernetes clusters must have DNS and routing properly setup amongst themselves.
- - Being able to have two YugabyteDB clusters, each in their own standalone kubernetes cluster, talking to eachother via a LoadBalancer, is not yet supported [#2422](https://github.com/yugabyte/yugabyte-db/issues/2422).
+ - Being able to have two YugabyteDB clusters, each in their own standalone kubernetes cluster, talking to each other via a LoadBalancer, is not yet supported [#2422](https://github.com/yugabyte/yugabyte-db/issues/2422).

### Cross-feature interactions

#### Supported
- TLS is supported for both client and internal RPC traffic. Universes can also be configured with different certificates.
- RPC compression is supported. Note, both clusters must be on a version that supports compression, before a compression algorithm is turned on.
- - Encryption at rest is supported. Note, the clusters can technically use different KMS configurations. However, for bootstrapping a sink cluster, we rely on the backup/restore flow. As such, we inherit a limiation from that, which requires that the universe being restored has at least access to the same KMS as the one in which the backup was taken. This means both the source and the sink must have access to the same KMS configurations.
+ - Encryption at rest is supported. Note, the clusters can technically use different KMS configurations. However, for bootstrapping a sink cluster, we rely on the backup/restore flow. As such, we inherit a limitation from that, which requires that the universe being restored has at least access to the same KMS as the one in which the backup was taken. This means both the source and the sink must have access to the same KMS configurations.
- YSQL colocation is supported.
- YSQL geo-partitioning is supported. Note, you must configure replication on all new partitions manually, as we do not replicate DDL changes automatically.

@@ -106,7 +106,7 @@ TxnId, HybridTime -> primary provisional record key

This mapping allows us to find all provisional RocksDB records belonging to a particular
transaction. This is used when cleaning up committed or aborted transactions. Note that
- because multiple RocksDB key-value pairs belonging to primary provisional records can we written
+ because multiple RocksDB key-value pairs belonging to primary provisional records can be written
for the same transaction with the same hybrid timestamp, we need to use an increasing counter
(which we call a *write ID*) at the end of the encoded representation of hybrid time in order to
obtain unique RocksDB keys for this reverse index. This write ID is shown as `.0`, `.1`, etc. in
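To make the write-ID mechanism in the hunk above concrete, here is a minimal, illustrative sketch in plain Python. It is not YugabyteDB's actual DocDB key encoding; the `ReverseIndex` class, method names, and sample keys are all made up. It only shows how appending an increasing counter to a `(transaction ID, hybrid time)` prefix keeps reverse-index keys unique when one transaction writes several provisional records at the same hybrid time.

```python
# Illustrative sketch only: models the reverse index described above,
# not YugabyteDB's real DocDB key layout.
from collections import defaultdict

class ReverseIndex:
    def __init__(self):
        self.entries = {}                       # encoded key -> primary provisional record key
        self._next_write_id = defaultdict(int)  # (txn_id, hybrid_time) -> next counter value

    def add(self, txn_id, hybrid_time, primary_record_key):
        # Several records can share (txn_id, hybrid_time); the write ID keeps the keys unique.
        write_id = self._next_write_id[(txn_id, hybrid_time)]
        self._next_write_id[(txn_id, hybrid_time)] += 1
        key = (txn_id, hybrid_time, write_id)   # rendered as HT.0, HT.1, ... in the docs
        self.entries[key] = primary_record_key
        return key

    def records_for_transaction(self, txn_id):
        # Used when cleaning up committed or aborted transactions.
        return [v for (t, _, _), v in self.entries.items() if t == txn_id]

idx = ReverseIndex()
idx.add("txn-1", 100, "row1/col1")
idx.add("txn-1", 100, "row1/col2")   # same hybrid time, different write ID
print(idx.records_for_transaction("txn-1"))
```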
@@ -21,7 +21,7 @@ This section explains how explicit locking works in YugabyteDB. The transactions
The two primary mechanisms to achieve concurrency control are *optimistic* and *pessimistic*. Concurrency control in YugabyteDB can accommodate both of these depending on the scenario.


- DocDB exposes the ability to write [provisional records]() which is exercised by the query layer. Provisional records are used to order persist locks on rows in order to detect conflicts. Provisional records have a *priority* assosciated with them, which is a number. When two transactions conflict, the transaction with the lower priority is aborted.
+ DocDB exposes the ability to write [provisional records]() which is exercised by the query layer. Provisional records are used to order persist locks on rows in order to detect conflicts. Provisional records have a *priority* associated with them, which is a number. When two transactions conflict, the transaction with the lower priority is aborted.
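The priority rule in the paragraph above can be illustrated with a tiny sketch in plain Python. This is not YugabyteDB's transaction code; the `Txn` class, `resolve_conflict` helper, and the random priority range are made-up placeholders that only demonstrate "lower priority loses" when two transactions conflict.

```python
# Illustrative sketch only: the conflict rule described above,
# not YugabyteDB's actual implementation.
import random
from dataclasses import dataclass, field

@dataclass
class Txn:
    txn_id: str
    # Each transaction carries a numeric priority; the range here is arbitrary.
    priority: int = field(default_factory=lambda: random.randint(0, 2**32))
    aborted: bool = False

def resolve_conflict(a: Txn, b: Txn) -> Txn:
    """Abort the lower-priority transaction and return the survivor."""
    loser, winner = sorted([a, b], key=lambda t: t.priority)
    loser.aborted = True
    return winner

t1, t2 = Txn("t1"), Txn("t2")
survivor = resolve_conflict(t1, t2)
print(survivor.txn_id, t1.aborted, t2.aborted)
```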

### Optimistic concurrency control

@@ -140,7 +140,7 @@ maximum of:
* Last committed Raft entry's hybrid time
* One of:
* If there are uncommitted entries in the Raft log: the minimum ofthe first uncommitted entry's
- hybrid time - ε (where ε is the smallest possibledifference in hybrid time)
+ hybrid time - ε (where ε is the smallest possible difference in hybrid time)
and **replicated_ht_lease_exp**.
* If there are no uncommitted entries in the Raft log: the minimum of the current hybrid time and **replicated_ht_lease_exp**.

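The rule in the list above can be written out as a small function. The following is a simplified sketch in plain Python, treating hybrid times as plain integers and ε as 1; the `safe_time` name and its parameters are placeholders, not the actual YugabyteDB code.

```python
# Simplified sketch of the safe-time rule described above; hybrid times are
# modeled as integers and epsilon as 1. Not the actual YugabyteDB code.
from typing import Optional

def safe_time(last_committed_ht: int,
              first_uncommitted_ht: Optional[int],
              current_ht: int,
              replicated_ht_lease_exp: int,
              epsilon: int = 1) -> int:
    if first_uncommitted_ht is not None:
        # Uncommitted entries exist: stay just below the first uncommitted entry,
        # and never past the replicated lease expiration.
        candidate = min(first_uncommitted_ht - epsilon, replicated_ht_lease_exp)
    else:
        # No uncommitted entries: bounded by "now" and the replicated lease expiration.
        candidate = min(current_ht, replicated_ht_lease_exp)
    return max(last_committed_ht, candidate)

# Example: committed up to 100, first uncommitted entry at 105, lease valid to 110.
print(safe_time(last_committed_ht=100, first_uncommitted_ht=105,
                current_ht=108, replicated_ht_lease_exp=110))  # -> 104
```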
@@ -158,10 +158,10 @@ Create and populate a table, get a timestamp to which you'll restore, and then w
(5 rows)
```
- 1. Restore the snapshot schedule to the timestamp you obtained before you deleted the data, at a terminal prompt:
+ 1. Restore the snapshot schedule to the timestamp you obtained before you added the data, at a terminal prompt:
```sh
- $ ./bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418801439626
+ $ ./bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418817729963
```
```output
@@ -159,7 +159,7 @@ Create and populate a table, look at a timestamp to which you'll restore, and th
1. Restore the snapshot schedule to the timestamp you obtained before you added the data, at a terminal prompt.

```sh
- $ bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1617670679185100
+ $ bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418817729963
```

```output
2 changes: 1 addition & 1 deletion docs/content/stable/architecture/concepts/yb-master.md
@@ -24,7 +24,7 @@ Note that the YB-Master is highly available as it forms a Raft group with its pe

### Coordination of universe-wide admin operations

- Examples of such operations are user-issued `CREATE TABLE`, `ALTER TABLE`, and `DROP TABLE` requests, as well as a creating a backup of a table. The YB-Master performs these operations with a guarantee that the operation is propagated to all tablets irrespective of the state of the YB-TServers hosting these tablets. This is essential because a YB-TServer failure while one of these universe-wide operations is in progress cannot affect the outcome of the operation by failing to apply it on some tablets.
+ Examples of such operations are user-issued `CREATE TABLE`, `ALTER TABLE`, and `DROP TABLE` requests, as well as creating a backup of a table. The YB-Master performs these operations with a guarantee that the operation is propagated to all tablets irrespective of the state of the YB-TServers hosting these tablets. This is essential because a YB-TServer failure while one of these universe-wide operations is in progress cannot affect the outcome of the operation by failing to apply it on some tablets.

### Storage of system metadata

2 changes: 1 addition & 1 deletion docs/content/stable/architecture/concepts/yb-tserver.md
@@ -20,7 +20,7 @@ Below is a pictorial illustration of this in the case of a 4-node YugabyteDB uni

![tserver_overview](/images/architecture/tserver_overview.png)

- The tablet-peers corresponding to each tablet hosted on different YB-TServers form a Raft group and replicate data between each other. The system shown above comprises of 16 independent Raft groups. The details of this replication are covered in a previous section on replication.
+ The tablet-peers corresponding to each tablet hosted on different YB-TServers form a Raft group and replicate data between each other. The system shown above comprises of 16 independent Raft groups. The details of this replication are covered in another section on replication.

Within each YB-TServer, there is a lot of cross-tablet intelligence built in to maximize resource efficiency. Below are just some of the ways the YB-TServer coordinates operations across tablets hosted by it:

@@ -110,14 +110,14 @@ Since 2DC replication is done asynchronously and by replicating the WAL (and the

#### Kubernetes
- Technically replication can be setup with kubernetes deployed universes. However, the source and sink must be able to communicate by directly referencing the pods in the other universe. In practice, this either means that the two universes must be part of the same kubernetes cluster, or that two kubernetes clusters must have DNS and routing properly setup amongst themselves.
- - Being able to have two YugabyteDB clusters, each in their own standalone kubernetes cluster, talking to eachother via a LoadBalancer, is not yet supported [#2422](https://github.com/yugabyte/yugabyte-db/issues/2422).
+ - Being able to have two YugabyteDB clusters, each in their own standalone kubernetes cluster, talking to each other via a LoadBalancer, is not yet supported [#2422](https://github.com/yugabyte/yugabyte-db/issues/2422).

### Cross-feature interactions

#### Supported
- TLS is supported for both client and internal RPC traffic. Universes can also be configured with different certificates.
- RPC compression is supported. Note, both clusters must be on a version that supports compression, before a compression algorithm is turned on.
- - Encryption at rest is supported. Note, the clusters can technically use different KMS configurations. However, for bootstrapping a sink cluster, we rely on the backup/restore flow. As such, we inherit a limiation from that, which requires that the universe being restored has at least access to the same KMS as the one in which the backup was taken. This means both the source and the sink must have access to the same KMS configurations.
+ - Encryption at rest is supported. Note, the clusters can technically use different KMS configurations. However, for bootstrapping a sink cluster, we rely on the backup/restore flow. As such, we inherit a limitation from that, which requires that the universe being restored has at least access to the same KMS as the one in which the backup was taken. This means both the source and the sink must have access to the same KMS configurations.
- YSQL colocation is supported.
- YSQL geo-partitioning is supported. Note, you must configure replication on all new partitions manually, as we do not replicate DDL changes automatically.

@@ -103,7 +103,7 @@ TxnId, HybridTime -> primary provisional record key

This mapping allows us to find all provisional RocksDB records belonging to a particular
transaction. This is used when cleaning up committed or aborted transactions. Note that
- because multiple RocksDB key-value pairs belonging to primary provisional records can we written
+ because multiple RocksDB key-value pairs belonging to primary provisional records can be written
for the same transaction with the same hybrid timestamp, we need to use an increasing counter
(which we call a *write ID*) at the end of the encoded representation of hybrid time in order to
obtain unique RocksDB keys for this reverse index. This write ID is shown as `.0`, `.1`, etc. in
@@ -138,7 +138,7 @@ maximum of:
* Last committed Raft entry's hybrid time
* One of:
* If there are uncommitted entries in the Raft log: the minimum ofthe first uncommitted entry's
- hybrid time - ε (where ε is the smallest possibledifference in hybrid time)
+ hybrid time - ε (where ε is the smallest possible difference in hybrid time)
and **replicated_ht_lease_exp**.
* If there are no uncommitted entries in the Raft log: the minimum of the current hybrid time and **replicated_ht_lease_exp**.

@@ -154,10 +154,10 @@ Create and populate a table, get a timestamp to which you'll restore, and then w
(5 rows)
```
- 1. Restore the snapshot schedule to the timestamp you obtained before you deleted the data, at a terminal prompt:
+ 1. Restore the snapshot schedule to the timestamp you obtained before you added the data, at a terminal prompt:
```sh
- $ ./bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418801439626
+ $ ./bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418817729963
```
```output
@@ -156,7 +156,7 @@ Create and populate a table, look at a timestamp to which you'll restore, and th
1. Restore the snapshot schedule to the timestamp you obtained before you added the data, at a terminal prompt.

```sh
- $ bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1617670679185100
+ $ bin/yb-admin restore_snapshot_schedule 0e4ceb83-fe3d-43da-83c3-013a8ef592ca 1620418817729963
```

```output
@@ -22,7 +22,7 @@ This section explains how explicit locking works in YugabyteDB. The transactions
The two primary mechanisms to achieve concurrency control are *optimistic* and *pessimistic*. Concurrency control in YugabyteDB can accomodate both of these depending on the scenario.


- DocDB exposes the ability to write [provisional records]() which is exercised by the query layer. Provisional records are used to order persist locks on rows in order to detect conflicts. Provisional records have a *priority* assosciated with them, which is a number. When two transactions conflict, the transaction with the lower priority is aborted.
+ DocDB exposes the ability to write [provisional records]() which is exercised by the query layer. Provisional records are used to order persist locks on rows in order to detect conflicts. Provisional records have a *priority* associated with them, which is a number. When two transactions conflict, the transaction with the lower priority is aborted.

### Optimistic concurrency control

