diff --git a/docs/content/preview/faq/comparisons/amazon-aurora.md b/docs/content/preview/faq/comparisons/amazon-aurora.md
index 444c854091a1..9437cac23d4a 100644
--- a/docs/content/preview/faq/comparisons/amazon-aurora.md
+++ b/docs/content/preview/faq/comparisons/amazon-aurora.md
@@ -13,7 +13,7 @@ menu:
 type: docs
 ---

-Generally available since 2015, Amazon Aurora is built on a proprietary distributed storage engine that automatically replicates 6 copies of data across 3 availability zones for high availability. From an API standpoint, Aurora is wire compatible with both PostgreSQL and MySQL. As described in ["Amazon Aurora under the hood: quorums and correlated failure"](https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-quorum-and-correlated-failure/), Aurora uses a quorum write approach based on 6 replicas. This allows for significantly better availability and durability than traditional master-slave replication.
+Generally available since 2015, Amazon Aurora is built on a proprietary distributed storage engine that automatically replicates 6 copies of data across 3 availability zones for high availability. From an API standpoint, Aurora is wire compatible with both PostgreSQL and MySQL. As described in ["Amazon Aurora under the hood: quorums and correlated failure"](https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-quorum-and-correlated-failure/), Aurora uses a quorum write approach based on 6 replicas. This allows for significantly better availability and durability than traditional leader-follower replication.

 ## Horizontal write scalability
diff --git a/docs/content/preview/faq/comparisons/postgresql.md b/docs/content/preview/faq/comparisons/postgresql.md
index 40d3b6ecdf73..58df87557f7f 100644
--- a/docs/content/preview/faq/comparisons/postgresql.md
+++ b/docs/content/preview/faq/comparisons/postgresql.md
@@ -21,7 +21,7 @@ There is a concept of "partitioned tables" in PostgreSQL that can make sharding

 ## Continuous availability

-The most common replication mechanism in PostgreSQL is that of asynchronous replication. Two completely independent database instances are deployed in a master-slave configuration in such a way that the slave instances periodically receive committed data from the master instance. The slave instance does not participate in the original writes to the master, thus making the latency of write operations low from an application client standpoint. However, the true cost is loss of availability (until manual failover to slave) as well as inability to serve recently committed data when the master instance fails (given the data lag on the slave). The less common mechanism of synchronous replication involves committing to two independent instances simultaneously. It is less common because of the complete loss of availability when one of the instances fail. Thus, irrespective of the replication mechanism used, it is impossible to guarantee always-on, strongly-consistent reads in PostgreSQL.
+The most common replication mechanism in PostgreSQL is that of asynchronous replication. Two completely independent database instances are deployed in a leader-follower configuration in such a way that the follower instances periodically receive committed data from the leader instance. The follower instance does not participate in the original writes to the leader, thus making the latency of write operations low from an application client standpoint. However, the true cost is loss of availability (until manual failover to the follower) as well as inability to serve recently committed data when the leader instance fails (given the data lag on the follower). The less common mechanism of synchronous replication involves committing to two independent instances simultaneously. It is less common because of the complete loss of availability when one of the instances fails. Thus, irrespective of the replication mechanism used, it is impossible to guarantee always-on, strongly-consistent reads in PostgreSQL.

 YugabyteDB is designed to solve the high availability need that monolithic databases such as PostgreSQL were never designed for. This inherently means committing the updates at 1 more independent failure domain than compared to PostgreSQL. There is no overall "leader" node in YugabyteDB that is responsible for handing updates for all the data in the database. There are multiple shards and those shards are distributed among the multiple nodes in the cluster. Each node has some shard leaders and some shard followers. Serving writes is the responsibility of a shard leader which then uses Raft replication protocol to commit the write to at least 1 more follower replica before acknowledging the write as successful back to the application client. When a node fails, some shard leaders will be lost but the remaining two follower replicas (on still available nodes) will elect a new leader automatically in a few seconds. Note that the replica that had the latest data gets the priority in such an election. This leads to extremely low write unavailability and essentially a self-healing system with auto-failover characteristics.
diff --git a/docs/content/preview/faq/comparisons/vitess.md b/docs/content/preview/faq/comparisons/vitess.md
index 4960e9f90638..d01cffd7a76c 100644
--- a/docs/content/preview/faq/comparisons/vitess.md
+++ b/docs/content/preview/faq/comparisons/vitess.md
@@ -21,4 +21,4 @@ While Vitess presents a single logical SQL database to clients, it does not supp

 ## Lack of continuous availability

-Vitess does not make any enhancements to the asynchronous master-slave replication architecture of MySQL. For every shard in the Vitess cluster, another slave instance has to be created and replication has to be maintained. The end result is that Vitess cannot guarantee continuous availability during failures. Spanner-inspired distributed SQL databases like YugabyteDB solve this replication problem at the core using Raft distributed consensus at a per-shard level for both data replication and leader election.
+Vitess does not make any enhancements to the asynchronous leader-follower replication architecture of MySQL. For every shard in the Vitess cluster, another follower instance has to be created and replication has to be maintained. The end result is that Vitess cannot guarantee continuous availability during failures. Spanner-inspired distributed SQL databases like YugabyteDB solve this replication problem at the core using Raft distributed consensus at a per-shard level for both data replication and leader election.
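The comparisons changed above all come down to quorum arithmetic: per the Aurora post cited in the first hunk, Aurora acknowledges a write once 4 of its 6 replicas accept it and reads from 3, so every read quorum overlaps every write quorum; a 3-replica Raft group such as a YugabyteDB shard commits on 2 of 3 acknowledgments and stays writable after a single node failure; an asynchronous leader-follower pair acknowledges after a single copy and offers neither guarantee. The sketch below is purely illustrative Python, not code from Aurora, PostgreSQL, Vitess, or YugabyteDB, and simply works through those numbers.

```python
# Quorum arithmetic only; an illustration, not code from any of these databases.
# A read is guaranteed to see the latest durable write when every read quorum
# overlaps every write quorum; the system stays writable while enough replicas
# survive to form a write quorum.

def reads_overlap_writes(replicas: int, write_quorum: int, read_quorum: int) -> bool:
    """True if any read quorum must intersect any write quorum."""
    return read_quorum + write_quorum > replicas

def writable_after_failures(replicas: int, write_quorum: int, failures: int) -> bool:
    """True if a write quorum can still be formed after losing `failures` replicas."""
    return replicas - failures >= write_quorum

# Aurora-style 6-replica quorum (4-of-6 writes, 3-of-6 reads, per the cited post):
assert reads_overlap_writes(6, 4, 3)        # reads overlap the latest durable write
assert writable_after_failures(6, 4, 2)     # still writable after losing a full availability zone

# Raft group of 3 replicas (one YugabyteDB shard): majority quorum of 2 for both.
assert reads_overlap_writes(3, 2, 2)
assert writable_after_failures(3, 2, 1)     # one node can fail; the remaining two elect a leader

# Asynchronous leader-follower pair: the write is acknowledged by the leader alone.
assert not reads_overlap_writes(2, 1, 1)    # a follower read can miss recently committed data
```

The same overlap inequality (read quorum + write quorum greater than the replica count) is what lets the Raft-replicated shard described in the postgresql.md hunk keep serving strongly consistent reads after an automatic leader election.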
diff --git a/docs/content/v2.14/architecture/docdb-replication/_index.md b/docs/content/v2.14/architecture/docdb-replication/_index.md
index c0c50a308710..607099943eb4 100644
--- a/docs/content/v2.14/architecture/docdb-replication/_index.md
+++ b/docs/content/v2.14/architecture/docdb-replication/_index.md
@@ -23,7 +23,7 @@ This section describes how replication works in DocDB. The data in a DocDB table
 There are other advanced replication features in YugabyteDB. These include two forms of asynchronous replication of data:

-* **xCluster replication** Data is asynchronously replicated between different YugabyteDB clusters - both unidirectional replication (master-slave) or bidirectional replication across two clusters.
+* **xCluster replication** Data is asynchronously replicated between different YugabyteDB clusters - either unidirectional replication (leader-follower) or bidirectional replication across two clusters.
 * **Read replicas** The in-cluster asynchronous replicas are called read replicas.
diff --git a/docs/content/v2.18/architecture/docdb-replication/_index.md b/docs/content/v2.18/architecture/docdb-replication/_index.md
index 127dd9bf691a..bf8a183b7b65 100644
--- a/docs/content/v2.18/architecture/docdb-replication/_index.md
+++ b/docs/content/v2.18/architecture/docdb-replication/_index.md
@@ -17,7 +17,7 @@ This section describes how replication works in DocDB. The data in a DocDB table
 YugabyteDB also provides other advanced replication features. These include two forms of asynchronous replication of data:

-* **xCluster** Data is asynchronously replicated between different YugabyteDB universes - both unidirectional replication (master-slave) or bidirectional replication across two universes.
+* **xCluster** Data is asynchronously replicated between different YugabyteDB universes - either unidirectional replication (leader-follower) or bidirectional replication across two universes.
 * **Read replicas** The in-universe asynchronous replicas are called read replicas.

 The YugabyteDB synchronous replication architecture is inspired by Google Spanner.
diff --git a/docs/content/v2.20/architecture/docdb-replication/_index.md b/docs/content/v2.20/architecture/docdb-replication/_index.md
index 93fb25750a98..27421a8b1a1e 100644
--- a/docs/content/v2.20/architecture/docdb-replication/_index.md
+++ b/docs/content/v2.20/architecture/docdb-replication/_index.md
@@ -17,7 +17,7 @@ This section describes how replication works in DocDB. The data in a DocDB table
 YugabyteDB also provides other advanced replication features. These include two forms of asynchronous replication of data:

-* **xCluster** - Data is asynchronously replicated between different YugabyteDB universes - both unidirectional replication (master-slave) or bidirectional replication across two universes.
+* **xCluster** - Data is asynchronously replicated between different YugabyteDB universes - either unidirectional replication (leader-follower) or bidirectional replication across two universes.
 * **Read replicas** - The in-universe asynchronous replicas are called read replicas.

 The YugabyteDB synchronous replication architecture is inspired by Google Spanner.
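The xCluster bullets changed in the last three hunks describe asynchronous replication between universes: a write is acknowledged as soon as the local universe commits it and is shipped to the other universe afterwards, so the receiving side can lag. The toy sketch below is illustrative Python only; the class and method names are invented for this example and are not YugabyteDB APIs. It shows the difference between the unidirectional and bidirectional setups named in those bullets.

```python
# Toy model of asynchronous cross-universe replication: a write commits locally
# first and is shipped to the peer later, so the peer can lag behind the source.
# Invented names for illustration only; this is not YugabyteDB code.
from dataclasses import dataclass, field

@dataclass
class Universe:
    name: str
    committed: list = field(default_factory=list)          # rows committed locally
    applied_from_peer: list = field(default_factory=list)  # rows received from the peer

    def write(self, row: str) -> None:
        # Acknowledged to the client as soon as the local commit succeeds.
        self.committed.append(row)

    def replicate_to(self, peer: "Universe", batch: int = 1) -> None:
        # Ship the next `batch` locally committed rows to the peer (the async stream).
        already_shipped = len(peer.applied_from_peer)
        peer.applied_from_peer.extend(self.committed[already_shipped:already_shipped + batch])

# Unidirectional (leader-follower across universes): A -> B only.
a, b = Universe("A"), Universe("B")
a.write("row1")
a.write("row2")
a.replicate_to(b)                  # only row1 has been shipped so far
print(b.applied_from_peer)         # ['row1'] -- row2 is committed on A but not yet visible on B

# Bidirectional: both universes accept writes and each streams to the other.
b.write("row3")
b.replicate_to(a)
print(a.applied_from_peer)         # ['row3']
a.replicate_to(b)                  # ship A's remaining backlog
print(b.applied_from_peer)         # ['row1', 'row2']
```

In the unidirectional case only one universe takes writes, which is what the leader-follower wording in those bullets refers to; in the bidirectional case both universes take writes and each streams its commits to the other, with the same lag window in both directions.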