From 78dacf8d37b78b63f69d3a3f8ecad77c0bccbab1 Mon Sep 17 00:00:00 2001
From: taroface
Date: Tue, 22 Oct 2019 19:12:35 -0400
Subject: [PATCH 1/6] refresh AWS deployment doc + relevant includes

tweaks to AWS deployment docs
---
 .../prod-deployment/insecure-scale-cluster.md | 8 +-
 .../prod-deployment/insecure-test-cluster.md | 15 +-
 .../insecure-test-load-balancing.md | 2 +-
 .../secure-generate-certificates.md | 153 +++++++++++-------
 .../prod-deployment/secure-scale-cluster.md | 5 +-
 .../prod-deployment/secure-test-cluster.md | 15 +-
 .../secure-test-load-balancing.md | 2 +-
 .../prod-deployment/synchronize-clocks.md | 7 +-
 .../prod-deployment/insecure-scale-cluster.md | 8 +-
 .../prod-deployment/insecure-test-cluster.md | 59 +++----
 .../insecure-test-load-balancing.md | 2 +-
 .../secure-generate-certificates.md | 55 +++++--
 .../prod-deployment/secure-scale-cluster.md | 5 +-
 .../prod-deployment/secure-test-cluster.md | 15 +-
 .../secure-test-load-balancing.md | 2 +-
 .../prod-deployment/synchronize-clocks.md | 7 +-
 v19.1/deploy-cockroachdb-on-aws-insecure.md | 79 ++++++---
 v19.1/deploy-cockroachdb-on-aws.md | 79 ++++++---
 v19.2/deploy-cockroachdb-on-aws-insecure.md | 81 +++++++---
 v19.2/deploy-cockroachdb-on-aws.md | 79 ++++++---
 20 files changed, 418 insertions(+), 260 deletions(-)

diff --git a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md
index d36f5188b3c..1f9dbb98453 100644
--- a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md
+++ b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md
@@ -29,16 +29,16 @@ For each additional node you want to add to the cluster, complete the following

     If you get a permissions error, prefix the command with `sudo`.

-4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
+4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier).

     {% include copy-clipboard.html %}
     ~~~ shell
-    $ cockroach start --insecure \
+    $ cockroach start \
+    --insecure \
     --advertise-addr=<node4 address> \
-    --locality=<key-value pairs> \
+    --join=<node1 address>,<node2 address>,<node3 address> \
     --cache=.25 \
     --max-sql-memory=.25 \
-    --join=<node1 address>,<node2 address>,<node3 address> \
     --background
     ~~~

diff --git a/_includes/v19.1/prod-deployment/insecure-test-cluster.md b/_includes/v19.1/prod-deployment/insecure-test-cluster.md
index 307b8f999b9..a4e32f2efb3 100644
--- a/_includes/v19.1/prod-deployment/insecure-test-cluster.md
+++ b/_includes/v19.1/prod-deployment/insecure-test-cluster.md
@@ -6,7 +6,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo

     {% include copy-clipboard.html %}
     ~~~ shell
-    $ cockroach sql --insecure --host=<address of any node>
+    $ cockroach sql --insecure --host=<address of any node>
     ~~~

 2. Create an `insecurenodetest` database:

     {% include copy-clipboard.html %}
     ~~~ sql
     > CREATE DATABASE insecurenodetest;
     ~~~

-3. Use `\q` or `ctrl-d` to exit the SQL shell.
-
-4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure --host=<address of a different node>
- ~~~ - -5. View the cluster's databases, which will include `insecurenodetest`: +3. View the cluster's databases, which will include `insecurenodetest`: {% include copy-clipboard.html %} ~~~ sql @@ -45,4 +36,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo (5 rows) ~~~ -6. Use `\q` to exit the SQL shell. +4. Use `\q` to exit the SQL shell. diff --git a/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md b/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md index e4369b54410..9e594e0a864 100644 --- a/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md +++ b/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=disable" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.1/prod-deployment/secure-generate-certificates.md b/_includes/v19.1/prod-deployment/secure-generate-certificates.md index abbd4a331eb..19e6b084f2e 100644 --- a/_includes/v19.1/prod-deployment/secure-generate-certificates.md +++ b/_includes/v19.1/prod-deployment/secure-generate-certificates.md @@ -27,44 +27,61 @@ Locally, you'll need to [create the following certificates and keys](create-secu 3. Create the CA certificate and key: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-ca \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 5. Upload the CA certificate and node certificate and key to the first node: + {% if page.title contains "AWS" %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh -i /path/.pem @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp -i /path/.pem \ + certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 6. Delete the local copy of the node certificate and key: @@ -78,48 +95,64 @@ Locally, you'll need to [create the following certificates and keys](create-secu 7. 
Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 8. Upload the CA certificate and node certificate and key to the second node: + {% if page.title contains "AWS" %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh -i /path/.pem @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp -i /path/.pem \ + certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 9. Repeat steps 6 - 8 for each additional node. 10. Create a client certificate and key for the `root` user: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-client \ + root \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: diff --git a/_includes/v19.1/prod-deployment/secure-scale-cluster.md b/_includes/v19.1/prod-deployment/secure-scale-cluster.md index 31b04229f77..ceaebb5df43 100644 --- a/_includes/v19.1/prod-deployment/secure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-scale-cluster.md @@ -29,17 +29,16 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: +4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ --certs-dir=certs \ --advertise-addr= \ - --locality= \ + --join=,, \ --cache=.25 \ --max-sql-memory=.25 \ - --join=,, \ --background ~~~ diff --git a/_includes/v19.1/prod-deployment/secure-test-cluster.md b/_includes/v19.1/prod-deployment/secure-test-cluster.md index ba8b3370bb1..00a35dbbdcc 100644 --- a/_includes/v19.1/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-test-cluster.md @@ -6,7 +6,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo {% include copy-clipboard.html %} ~~~ shell - $ cockroach sql --certs-dir=certs --host=
+    $ cockroach sql --certs-dir=certs --host=<address of any node>
     ~~~

 2. Create a `securenodetest` database:

     {% include copy-clipboard.html %}
     ~~~ sql
     > CREATE DATABASE securenodetest;
     ~~~

-3. Use `\q` to exit the SQL shell.
-
-4. Launch the built-in SQL client against a different node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --certs-dir=certs --host=<address of a different node>
- ~~~ - -5. View the cluster's databases, which will include `securenodetest`: +3. View the cluster's databases, which will include `securenodetest`: {% include copy-clipboard.html %} ~~~ sql @@ -45,4 +36,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo (5 rows) ~~~ -6. Use `\q` to exit the SQL shell. +4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/_includes/v19.1/prod-deployment/secure-test-load-balancing.md b/_includes/v19.1/prod-deployment/secure-test-load-balancing.md index 528669db93e..17eed4194a0 100644 --- a/_includes/v19.1/prod-deployment/secure-test-load-balancing.md +++ b/_includes/v19.1/prod-deployment/secure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.1/prod-deployment/synchronize-clocks.md b/_includes/v19.1/prod-deployment/synchronize-clocks.md index d796023946c..f7b2344041d 100644 --- a/_includes/v19.1/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.1/prod-deployment/synchronize-clocks.md @@ -77,15 +77,16 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: -- [Configure each GCE instances to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). +- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). - If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). {% elsif page.title contains "AWS" %} Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second. -- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). -- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second. 
+- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
+    - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
+    - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.

 {% elsif page.title contains "Azure" %}

diff --git a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md
index d36f5188b3c..1f9dbb98453 100644
--- a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md
+++ b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md
@@ -29,16 +29,16 @@ For each additional node you want to add to the cluster, complete the following

     If you get a permissions error, prefix the command with `sudo`.

-4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
+4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier).

     {% include copy-clipboard.html %}
     ~~~ shell
-    $ cockroach start --insecure \
+    $ cockroach start \
+    --insecure \
     --advertise-addr=<node4 address> \
-    --locality=<key-value pairs> \
+    --join=<node1 address>,<node2 address>,<node3 address> \
     --cache=.25 \
     --max-sql-memory=.25 \
-    --join=<node1 address>,<node2 address>,<node3 address> \
     --background
     ~~~

diff --git a/_includes/v19.2/prod-deployment/insecure-test-cluster.md b/_includes/v19.2/prod-deployment/insecure-test-cluster.md
index 307b8f999b9..553c45ad2bf 100644
--- a/_includes/v19.2/prod-deployment/insecure-test-cluster.md
+++ b/_includes/v19.2/prod-deployment/insecure-test-cluster.md
@@ -5,44 +5,35 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
 1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node:

     {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure --host=<address of any node>
-    ~~~
+    ~~~ shell
+    $ cockroach sql --insecure --host=<address of any node>
+    ~~~

 2. Create an `insecurenodetest` database:

     {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE insecurenodetest;
-    ~~~
+    ~~~ sql
+    > CREATE DATABASE insecurenodetest;
+    ~~~

-3. Use `\q` or `ctrl-d` to exit the SQL shell.
-
-4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure --host=<address of a different node>
- ~~~ - -5. View the cluster's databases, which will include `insecurenodetest`: +3. View the cluster's databases, which will include `insecurenodetest`: {% include copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | insecurenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -6. Use `\q` to exit the SQL shell. + ~~~ sql + > SHOW DATABASES; + ~~~ + + ~~~ + +--------------------+ + | Database | + +--------------------+ + | crdb_internal | + | information_schema | + | insecurenodetest | + | pg_catalog | + | system | + +--------------------+ + (5 rows) + ~~~ + +4. Use `\q` to exit the SQL shell. diff --git a/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md b/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md index e4369b54410..9e594e0a864 100644 --- a/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md +++ b/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=disable" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.2/prod-deployment/secure-generate-certificates.md b/_includes/v19.2/prod-deployment/secure-generate-certificates.md index abbd4a331eb..7b2730a807f 100644 --- a/_includes/v19.2/prod-deployment/secure-generate-certificates.md +++ b/_includes/v19.2/prod-deployment/secure-generate-certificates.md @@ -53,6 +53,22 @@ Locally, you'll need to [create the following certificates and keys](create-secu 5. Upload the CA certificate and node certificate and key to the first node: + {% if page.title contains "AWS" %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh -i /path/.pem @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp -i /path/.pem \ + certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} {% include copy-clipboard.html %} ~~~ shell $ ssh @ "mkdir certs" @@ -65,6 +81,7 @@ Locally, you'll need to [create the following certificates and keys](create-secu certs/node.key \ @:~/certs ~~~ + {% endif %} 6. Delete the local copy of the node certificate and key: @@ -95,19 +112,35 @@ Locally, you'll need to [create the following certificates and keys](create-secu 8. Upload the CA certificate and node certificate and key to the second node: + {% if page.title contains "AWS" %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh -i /path/.pem @ "mkdir certs" + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ scp -i /path/.pem \ + certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 9. 
Repeat steps 6 - 8 for each additional node. diff --git a/_includes/v19.2/prod-deployment/secure-scale-cluster.md b/_includes/v19.2/prod-deployment/secure-scale-cluster.md index 31b04229f77..ceaebb5df43 100644 --- a/_includes/v19.2/prod-deployment/secure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-scale-cluster.md @@ -29,17 +29,16 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: +4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ --certs-dir=certs \ --advertise-addr= \ - --locality= \ + --join=,, \ --cache=.25 \ --max-sql-memory=.25 \ - --join=,, \ --background ~~~ diff --git a/_includes/v19.2/prod-deployment/secure-test-cluster.md b/_includes/v19.2/prod-deployment/secure-test-cluster.md index ba8b3370bb1..00a35dbbdcc 100644 --- a/_includes/v19.2/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-test-cluster.md @@ -6,7 +6,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo {% include copy-clipboard.html %} ~~~ shell - $ cockroach sql --certs-dir=certs --host=
+    $ cockroach sql --certs-dir=certs --host=<address of any node>
     ~~~

 2. Create a `securenodetest` database:

     {% include copy-clipboard.html %}
     ~~~ sql
     > CREATE DATABASE securenodetest;
     ~~~

-3. Use `\q` to exit the SQL shell.
-
-4. Launch the built-in SQL client against a different node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --certs-dir=certs --host=<address of a different node>
-    ~~~
-
-5. View the cluster's databases, which will include `securenodetest`:
+3. View the cluster's databases, which will include `securenodetest`:

     {% include copy-clipboard.html %}
     ~~~ sql
@@ -45,4 +36,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
     (5 rows)
     ~~~

-6. Use `\q` to exit the SQL shell.
+4. Use `\q` to exit the SQL shell.
\ No newline at end of file
diff --git a/_includes/v19.2/prod-deployment/secure-test-load-balancing.md b/_includes/v19.2/prod-deployment/secure-test-load-balancing.md
index 528669db93e..17eed4194a0 100644
--- a/_includes/v19.2/prod-deployment/secure-test-load-balancing.md
+++ b/_includes/v19.2/prod-deployment/secure-test-load-balancing.md
@@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several
     --init \
     --duration=20m \
     --tolerate-errors \
-    "postgresql://root@<address of load balancer>:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
     ~~~

 This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.

diff --git a/_includes/v19.2/prod-deployment/synchronize-clocks.md b/_includes/v19.2/prod-deployment/synchronize-clocks.md
index d796023946c..2fa32b0bb1d 100644
--- a/_includes/v19.2/prod-deployment/synchronize-clocks.md
+++ b/_includes/v19.2/prod-deployment/synchronize-clocks.md
@@ -84,9 +84,10 @@ Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), wh

 Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.

-- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
-- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second.
-
+- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
+    - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
+    - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
+
 {% elsif page.title contains "Azure" %}

 [`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
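
As a sketch of the chrony guidance added above: after following Amazon's instructions, a minimal `/etc/chrony.conf` on an EC2 instance might look like the following (the directives other than the `server` line are common chrony defaults, shown only for context, not requirements from these docs):

~~~
# Use the Amazon Time Sync Service link-local endpoint; leave any other
# `server` or `pool` lines commented out.
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4

# Typical housekeeping settings from a stock chrony configuration.
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
~~~

After restarting `chronyd`, `chronyc sources -v` should show `169.254.169.123` marked with `*`, indicating it is the selected time source.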
diff --git a/v19.1/deploy-cockroachdb-on-aws-insecure.md b/v19.1/deploy-cockroachdb-on-aws-insecure.md index 369bda7b97f..d0077cda482 100644 --- a/v19.1/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.1/deploy-cockroachdb-on-aws-insecure.md @@ -23,16 +23,36 @@ This page shows you how to manually deploy an insecure multi-node CockroachDB cl {% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-8-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. + + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. 
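+
+If you prefer the command line to the console, each of the rules below can also be created with the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html). For example, a sketch of the inter-node rule (the group ID is a placeholder; substitute your own):
+
+{% include copy-clipboard.html %}
+~~~ shell
+# Allow TCP traffic on 26257 from other members of the same security group.
+# sg-07ab277a is a placeholder; substitute your security group's ID.
+$ aws ec2 authorize-security-group-ingress \
+    --group-id sg-07ab277a \
+    --protocol tcp \
+    --port 26257 \
+    --source-group sg-07ab277a
+~~~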
#### Inter-node and load balancer-node communication @@ -41,16 +61,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -61,17 +72,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-8-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI + + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). +You can set your network IP by selecting "My IP" in the Source field. -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +#### Load balancer-health check communication -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The CIDR of your VPC (e.g., 10.12.0.0/16) -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). +To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks @@ -87,10 +110,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. 
Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. +1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to: + - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. + - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. + - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. +2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -112,10 +139,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 9. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 10. Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. + {% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} ## Step 11. Use the cluster diff --git a/v19.1/deploy-cockroachdb-on-aws.md b/v19.1/deploy-cockroachdb-on-aws.md index 4bd2410c8cd..e3a566583d9 100644 --- a/v19.1/deploy-cockroachdb-on-aws.md +++ b/v19.1/deploy-cockroachdb-on-aws.md @@ -24,16 +24,38 @@ If you are only testing CockroachDB, or you are not concerned with protecting ne {% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. 
If you plan to [run our sample workload](#step-9-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. + + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +- When creating the instance, you will download a private key file used to securely connect to your instances. Decide where to place this file, and note the file path for later commands. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. #### Inter-node and load balancer-node communication @@ -42,16 +64,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -62,17 +75,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-9-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. 
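+
+If you have the [AWS CLI](https://aws.amazon.com/cli/) configured, you can also look up the private IP from the command line (the instance ID is a placeholder; substitute your own):
+
+{% include copy-clipboard.html %}
+~~~ shell
+# i-0abcd1234example is a placeholder; substitute your workload instance's ID.
+$ aws ec2 describe-instances \
+    --instance-ids i-0abcd1234example \
+    --query "Reservations[].Instances[].PrivateIpAddress" \
+    --output text
+~~~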
-[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI -- Run at least 3 nodes to ensure survivability. + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +You can set your network IP by selecting "My IP" in the Source field. -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. +#### Load balancer-health check communication -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The CIDR of your VPC (e.g., 10.12.0.0/16) + +To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks @@ -88,10 +113,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. +1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to: + - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. + - Register your instances with the target group you created, specifying port **26257**. 
You can add and remove instances later. +2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -117,10 +146,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 10. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 11. Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. Then [generate and upload a certificate and key](#step-5-generate-certificates) for the new node. + {% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} ## Step 12. Use the database diff --git a/v19.2/deploy-cockroachdb-on-aws-insecure.md b/v19.2/deploy-cockroachdb-on-aws-insecure.md index 21f279b1fb6..7763623494a 100644 --- a/v19.2/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.2/deploy-cockroachdb-on-aws-insecure.md @@ -23,16 +23,36 @@ This page shows you how to manually deploy an insecure multi-node CockroachDB cl {% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-8-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. 
+ + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. #### Inter-node and load balancer-node communication @@ -41,16 +61,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -61,17 +72,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-8-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI + + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). +You can set your network IP by selecting "My IP" in the Source field. -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +#### Load balancer-health check communication -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. 
+ Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The CIDR of your VPC (e.g., 10.12.0.0/16) -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). +To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks @@ -87,10 +110,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. +1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to: + - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. + - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. + - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. +2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -112,10 +139,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 9. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 10. 
Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. + {% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} ## Step 11. Use the cluster @@ -123,7 +154,7 @@ AWS offers fully-managed load balancing to distribute traffic between instances. Now that your deployment is working, you can: 1. [Implement your data model](sql-statements.html). -2. [Create users](create-user.html) and [grant them privileges](grant.html). +2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). 3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the AWS load balancer, not to a CockroachDB node. ## See also diff --git a/v19.2/deploy-cockroachdb-on-aws.md b/v19.2/deploy-cockroachdb-on-aws.md index 4bd2410c8cd..e3a566583d9 100644 --- a/v19.2/deploy-cockroachdb-on-aws.md +++ b/v19.2/deploy-cockroachdb-on-aws.md @@ -24,16 +24,38 @@ If you are only testing CockroachDB, or you are not concerned with protecting ne {% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-9-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. + + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +- When creating the instance, you will download a private key file used to securely connect to your instances. Decide where to place this file, and note the file path for later commands. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. 
Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. #### Inter-node and load balancer-node communication @@ -42,16 +64,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -62,17 +75,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-9-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI -- Run at least 3 nodes to ensure survivability. + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +You can set your network IP by selecting "My IP" in the Source field. -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. +#### Load balancer-health check communication -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The CIDR of your VPC (e.g., 10.12.0.0/16) + +To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. 
Synchronize clocks
@@ -88,10 +113,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to
 
 AWS offers fully-managed load balancing to distribute traffic between instances.
 
-1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to:
-    - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes.
+1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to:
+    - Select a **Network Load Balancer** (not an Application Load Balancer, which is what the linked AWS instructions describe) and use the ports we specify below.
+    - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 console.
+    - Set the load balancer port to **26257**.
+    - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances.
     - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests.
-2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster.
+    - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later.
+2. To test load balancing and connect your application to the cluster, you will need the load balancer's provisioned internal (private) **IP address**. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name.
 
 {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}}
 
@@ -117,10 +146,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances.
 
 ## Step 10. Monitor the cluster
 
+In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab.
 
+{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %}
 
 ## Step 11. Scale the cluster
 
+Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. Then [generate and upload a certificate and key](#step-5-generate-certificates) for the new node.
+
 {% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %}
 
 ## Step 12. 
Use the database From 01092c62f425cb2ef65cd0736f9899b33e75e0a2 Mon Sep 17 00:00:00 2001 From: taroface Date: Wed, 23 Oct 2019 14:35:39 -0400 Subject: [PATCH 2/6] fix 1 clock sync typo --- _includes/v19.2/prod-deployment/synchronize-clocks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_includes/v19.2/prod-deployment/synchronize-clocks.md b/_includes/v19.2/prod-deployment/synchronize-clocks.md index 2fa32b0bb1d..6a4dce3508b 100644 --- a/_includes/v19.2/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.2/prod-deployment/synchronize-clocks.md @@ -77,7 +77,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: -- [Configure each GCE instances to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). +- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). - If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). {% elsif page.title contains "AWS" %} From 452d68d2344e816fcb57fb1765c856f241b7a0ef Mon Sep 17 00:00:00 2001 From: taroface Date: Mon, 28 Oct 2019 14:04:50 -0400 Subject: [PATCH 3/6] update AWS deployment and clock sync docs --- .../v19.1/faq/clock-synchronization-effects.md | 8 ++++---- .../prod-deployment/insecure-scale-cluster.md | 1 - .../prod-deployment/insecure-test-cluster.md | 8 +++++--- .../v19.1/prod-deployment/secure-test-cluster.md | 8 +++++--- .../v19.1/prod-deployment/synchronize-clocks.md | 13 +++++++++++-- .../v19.2/faq/clock-synchronization-effects.md | 8 ++++---- .../prod-deployment/insecure-scale-cluster.md | 1 - .../prod-deployment/insecure-test-cluster.md | 8 +++++--- .../v19.2/prod-deployment/secure-test-cluster.md | 8 +++++--- .../v19.2/prod-deployment/synchronize-clocks.md | 15 ++++++++++++--- 10 files changed, 51 insertions(+), 27 deletions(-) diff --git a/_includes/v19.1/faq/clock-synchronization-effects.md b/_includes/v19.1/faq/clock-synchronization-effects.md index 4e7ef72b4ab..db8495e764d 100644 --- a/_includes/v19.1/faq/clock-synchronization-effects.md +++ b/_includes/v19.1/faq/clock-synchronization-effects.md @@ -4,16 +4,16 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim ### Considerations -There are important considerations when setting up clock synchronization: +When setting up clock synchronization: - We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. {{site.data.alerts.callout_info}} - Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead. 
+ Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP. {{site.data.alerts.end}} -- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster. +- In a hybrid cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. +- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. ### Tutorials diff --git a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md index 1f9dbb98453..453b7127810 100644 --- a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md @@ -34,7 +34,6 @@ For each additional node you want to add to the cluster, complete the following {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ - --certs-dir=certs \ --advertise-addr= \ --join=,, \ --cache=.25 \ diff --git a/_includes/v19.1/prod-deployment/insecure-test-cluster.md b/_includes/v19.1/prod-deployment/insecure-test-cluster.md index a4e32f2efb3..4008fd5b0de 100644 --- a/_includes/v19.1/prod-deployment/insecure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-test-cluster.md @@ -1,8 +1,10 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. -1. 
On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.1/prod-deployment/secure-test-cluster.md b/_includes/v19.1/prod-deployment/secure-test-cluster.md index 00a35dbbdcc..16f74db7e33 100644 --- a/_includes/v19.1/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-test-cluster.md @@ -1,8 +1,10 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.1/prod-deployment/synchronize-clocks.md b/_includes/v19.1/prod-deployment/synchronize-clocks.md index f7b2344041d..1599d9ae346 100644 --- a/_includes/v19.1/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.1/prod-deployment/synchronize-clocks.md @@ -73,12 +73,16 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod 7. Repeat these steps for each machine where a CockroachDB node will run. -{% elsif page.title contains "Google" %} +{% elsif page.title contains "Google" %} Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). +- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, all AWS machines should use the internal [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service), and all other non-GCE machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). + +{{site.data.alerts.callout_info}} +The Google and Amazon services handle "smearing" the leap second in compatible ways. 
+{{site.data.alerts.end}} {% elsif page.title contains "AWS" %} @@ -87,6 +91,11 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2 - [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. +- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, all GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and all other non-AWS machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). + +{{site.data.alerts.callout_info}} +The Google and Amazon services handle "smearing" the leap second in compatible ways. +{{site.data.alerts.end}} {% elsif page.title contains "Azure" %} diff --git a/_includes/v19.2/faq/clock-synchronization-effects.md b/_includes/v19.2/faq/clock-synchronization-effects.md index 4e7ef72b4ab..db8495e764d 100644 --- a/_includes/v19.2/faq/clock-synchronization-effects.md +++ b/_includes/v19.2/faq/clock-synchronization-effects.md @@ -4,16 +4,16 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim ### Considerations -There are important considerations when setting up clock synchronization: +When setting up clock synchronization: - We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. {{site.data.alerts.callout_info}} - Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead. + Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP. {{site.data.alerts.end}} -- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster. 
+- In a hybrid cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. +- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. ### Tutorials diff --git a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md index 1f9dbb98453..453b7127810 100644 --- a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md @@ -34,7 +34,6 @@ For each additional node you want to add to the cluster, complete the following {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ - --certs-dir=certs \ --advertise-addr= \ --join=,, \ --cache=.25 \ diff --git a/_includes/v19.2/prod-deployment/insecure-test-cluster.md b/_includes/v19.2/prod-deployment/insecure-test-cluster.md index 553c45ad2bf..b4255df5777 100644 --- a/_includes/v19.2/prod-deployment/insecure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-test-cluster.md @@ -1,8 +1,10 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.2/prod-deployment/secure-test-cluster.md b/_includes/v19.2/prod-deployment/secure-test-cluster.md index 00a35dbbdcc..16f74db7e33 100644 --- a/_includes/v19.2/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-test-cluster.md @@ -1,8 +1,10 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. 
+CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.2/prod-deployment/synchronize-clocks.md b/_includes/v19.2/prod-deployment/synchronize-clocks.md index 6a4dce3508b..1599d9ae346 100644 --- a/_includes/v19.2/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.2/prod-deployment/synchronize-clocks.md @@ -73,12 +73,16 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod 7. Repeat these steps for each machine where a CockroachDB node will run. -{% elsif page.title contains "Google" %} +{% elsif page.title contains "Google" %} Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). +- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, all AWS machines should use the internal [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service), and all other non-GCE machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). + +{{site.data.alerts.callout_info}} +The Google and Amazon services handle "smearing" the leap second in compatible ways. +{{site.data.alerts.end}} {% elsif page.title contains "AWS" %} @@ -87,7 +91,12 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2 - [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. 
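+
+    For a quick spot check, the lookup can be piped through `grep` (a sketch, assuming `chrony` is the active sync daemon; the command prints nothing if the Amazon time server is not among the configured sources):
+
+    {% include copy-clipboard.html %}
+    ~~~ shell
+    $ chronyc sources -v | grep '169.254.169.123'
+    ~~~
+
+    As above, confirm that the printed line contains `* 169.254.169.123`, marking it as the preferred time server.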
- +- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, all GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and all other non-AWS machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). + +{{site.data.alerts.callout_info}} +The Google and Amazon services handle "smearing" the leap second in compatible ways. +{{site.data.alerts.end}} + {% elsif page.title contains "Azure" %} [`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems. From 1b6512796dbaffea1dd2cf78194a44b3c16f9c17 Mon Sep 17 00:00:00 2001 From: taroface Date: Mon, 28 Oct 2019 17:54:30 -0400 Subject: [PATCH 4/6] update AWS and clock sync instructions --- _includes/v19.1/faq/clock-synchronization-effects.md | 7 +------ _includes/v19.1/prod-deployment/insecure-scale-cluster.md | 1 + _includes/v19.2/faq/clock-synchronization-effects.md | 7 +------ _includes/v19.2/prod-deployment/insecure-scale-cluster.md | 1 + 4 files changed, 4 insertions(+), 12 deletions(-) diff --git a/_includes/v19.1/faq/clock-synchronization-effects.md b/_includes/v19.1/faq/clock-synchronization-effects.md index db8495e764d..f803052038c 100644 --- a/_includes/v19.1/faq/clock-synchronization-effects.md +++ b/_includes/v19.1/faq/clock-synchronization-effects.md @@ -7,12 +7,7 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim When setting up clock synchronization: - We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. - - {{site.data.alerts.callout_info}} - Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP. - {{site.data.alerts.end}} - -- In a hybrid cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. +- In a hybrid GCE/AWS cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). All other machines should use [Google Public NTP](https://developers.google.com/time/). 
The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. - If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. diff --git a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md index 453b7127810..4075903c353 100644 --- a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md @@ -34,6 +34,7 @@ For each additional node you want to add to the cluster, complete the following {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ + --insecure \ --advertise-addr= \ --join=,, \ --cache=.25 \ diff --git a/_includes/v19.2/faq/clock-synchronization-effects.md b/_includes/v19.2/faq/clock-synchronization-effects.md index db8495e764d..f803052038c 100644 --- a/_includes/v19.2/faq/clock-synchronization-effects.md +++ b/_includes/v19.2/faq/clock-synchronization-effects.md @@ -7,12 +7,7 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim When setting up clock synchronization: - We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. - - {{site.data.alerts.callout_info}} - Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP. - {{site.data.alerts.end}} - -- In a hybrid cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. +- In a hybrid GCE/AWS cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). All other machines should use [Google Public NTP](https://developers.google.com/time/). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. - If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. 
In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. diff --git a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md index 453b7127810..4075903c353 100644 --- a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md @@ -34,6 +34,7 @@ For each additional node you want to add to the cluster, complete the following {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ + --insecure \ --advertise-addr= \ --join=,, \ --cache=.25 \ From 2ab851ea103522d728d6bc4caf883e37a0acd32b Mon Sep 17 00:00:00 2001 From: taroface Date: Mon, 28 Oct 2019 20:04:57 -0400 Subject: [PATCH 5/6] update AWS and clock sync with Ben's comments --- .../faq/clock-synchronization-effects.md | 6 +- .../prod-deployment/insecure-scale-cluster.md | 2 +- .../prod-deployment/insecure-test-cluster.md | 2 +- .../secure-generate-certificates.md | 17 +-- .../prod-deployment/secure-scale-cluster.md | 2 +- .../prod-deployment/secure-test-cluster.md | 2 +- .../prod-deployment/synchronize-clocks.md | 16 +-- .../faq/clock-synchronization-effects.md | 6 +- .../prod-deployment/insecure-scale-cluster.md | 2 +- .../prod-deployment/insecure-test-cluster.md | 2 +- .../secure-generate-certificates.md | 117 +++++++++--------- .../prod-deployment/secure-scale-cluster.md | 2 +- .../prod-deployment/secure-test-cluster.md | 2 +- .../prod-deployment/synchronize-clocks.md | 16 +-- v19.1/deploy-cockroachdb-on-aws-insecure.md | 6 +- v19.1/deploy-cockroachdb-on-aws.md | 6 +- v19.2/deploy-cockroachdb-on-aws-insecure.md | 6 +- v19.2/deploy-cockroachdb-on-aws.md | 6 +- 18 files changed, 104 insertions(+), 114 deletions(-) diff --git a/_includes/v19.1/faq/clock-synchronization-effects.md b/_includes/v19.1/faq/clock-synchronization-effects.md index f803052038c..d0849e7d3a9 100644 --- a/_includes/v19.1/faq/clock-synchronization-effects.md +++ b/_includes/v19.1/faq/clock-synchronization-effects.md @@ -6,9 +6,9 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim When setting up clock synchronization: -- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. -- In a hybrid GCE/AWS cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). All other machines should use [Google Public NTP](https://developers.google.com/time/). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. 
-- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). +- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. +- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. ### Tutorials diff --git a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md index 4075903c353..665a8bc6f91 100644 --- a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md @@ -29,7 +29,7 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier). +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). 
{% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.1/prod-deployment/insecure-test-cluster.md b/_includes/v19.1/prod-deployment/insecure-test-cluster.md index 4008fd5b0de..faece96fa42 100644 --- a/_includes/v19.1/prod-deployment/insecure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-test-cluster.md @@ -1,6 +1,6 @@ CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: diff --git a/_includes/v19.1/prod-deployment/secure-generate-certificates.md b/_includes/v19.1/prod-deployment/secure-generate-certificates.md index 19e6b084f2e..41d3ae28beb 100644 --- a/_includes/v19.1/prod-deployment/secure-generate-certificates.md +++ b/_includes/v19.1/prod-deployment/secure-generate-certificates.md @@ -56,13 +56,17 @@ Locally, you'll need to [create the following certificates and keys](create-secu {% if page.title contains "AWS" %} {% include copy-clipboard.html %} ~~~ shell - $ ssh -i /path/.pem @ "mkdir certs" + $ ssh-add /path/.pem ~~~ {% include copy-clipboard.html %} ~~~ shell - $ scp -i /path/.pem \ - certs/ca.crt \ + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ certs/node.crt \ certs/node.key \ @:~/certs @@ -115,13 +119,12 @@ Locally, you'll need to [create the following certificates and keys](create-secu {% if page.title contains "AWS" %} {% include copy-clipboard.html %} ~~~ shell - $ ssh -i /path/.pem @ "mkdir certs" + $ ssh @ "mkdir certs" ~~~ {% include copy-clipboard.html %} ~~~ shell - $ scp -i /path/.pem \ - certs/ca.crt \ + $ scp certs/ca.crt \ certs/node.crt \ certs/node.key \ @:~/certs @@ -173,4 +176,4 @@ Locally, you'll need to [create the following certificates and keys](create-secu {{site.data.alerts.callout_info}} On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). -{{site.data.alerts.end}} +{{site.data.alerts.end}} \ No newline at end of file diff --git a/_includes/v19.1/prod-deployment/secure-scale-cluster.md b/_includes/v19.1/prod-deployment/secure-scale-cluster.md index ceaebb5df43..f7d1fcd51f3 100644 --- a/_includes/v19.1/prod-deployment/secure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-scale-cluster.md @@ -29,7 +29,7 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier). +4. 
Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.1/prod-deployment/secure-test-cluster.md b/_includes/v19.1/prod-deployment/secure-test-cluster.md index 16f74db7e33..25af5af0414 100644 --- a/_includes/v19.1/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-test-cluster.md @@ -1,6 +1,6 @@ CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: diff --git a/_includes/v19.1/prod-deployment/synchronize-clocks.md b/_includes/v19.1/prod-deployment/synchronize-clocks.md index 1599d9ae346..775985fc6d4 100644 --- a/_includes/v19.1/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.1/prod-deployment/synchronize-clocks.md @@ -60,7 +60,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}} 6. Verify that the machine is using a Google NTP server: @@ -78,11 +78,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, all AWS machines should use the internal [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service), and all other non-GCE machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). - -{{site.data.alerts.callout_info}} -The Google and Amazon services handle "smearing" the leap second in compatible ways. -{{site.data.alerts.end}} +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. 
See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "AWS" %} @@ -91,11 +87,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2 - [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. -- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, all GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and all other non-AWS machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). - -{{site.data.alerts.callout_info}} -The Google and Amazon services handle "smearing" the leap second in compatible ways. -{{site.data.alerts.end}} +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "Azure" %} @@ -167,7 +159,7 @@ The Google and Amazon services handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}} 7. Verify that the machine is using a Google NTP server: diff --git a/_includes/v19.2/faq/clock-synchronization-effects.md b/_includes/v19.2/faq/clock-synchronization-effects.md index f803052038c..d0849e7d3a9 100644 --- a/_includes/v19.2/faq/clock-synchronization-effects.md +++ b/_includes/v19.2/faq/clock-synchronization-effects.md @@ -6,9 +6,9 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim When setting up clock synchronization: -- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. -- In a hybrid GCE/AWS cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). All other machines should use [Google Public NTP](https://developers.google.com/time/). 
The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways. -- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). +- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. +- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. ### Tutorials diff --git a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md index 4075903c353..665a8bc6f91 100644 --- a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md @@ -29,7 +29,7 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier). +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). 
{% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.2/prod-deployment/insecure-test-cluster.md b/_includes/v19.2/prod-deployment/insecure-test-cluster.md index b4255df5777..83c31569efc 100644 --- a/_includes/v19.2/prod-deployment/insecure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-test-cluster.md @@ -1,6 +1,6 @@ CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: diff --git a/_includes/v19.2/prod-deployment/secure-generate-certificates.md b/_includes/v19.2/prod-deployment/secure-generate-certificates.md index 7b2730a807f..41d3ae28beb 100644 --- a/_includes/v19.2/prod-deployment/secure-generate-certificates.md +++ b/_includes/v19.2/prod-deployment/secure-generate-certificates.md @@ -27,42 +27,46 @@ Locally, you'll need to [create the following certificates and keys](create-secu 3. Create the CA certificate and key: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-ca \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 5. Upload the CA certificate and node certificate and key to the first node: {% if page.title contains "AWS" %} {% include copy-clipboard.html %} ~~~ shell - $ ssh -i /path/.pem @ "mkdir certs" + $ ssh-add /path/.pem + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" ~~~ {% include copy-clipboard.html %} ~~~ shell - $ scp -i /path/.pem \ - certs/ca.crt \ + $ scp certs/ca.crt \ certs/node.crt \ certs/node.key \ @:~/certs @@ -70,17 +74,17 @@ Locally, you'll need to [create the following certificates and keys](create-secu {% else %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ {% endif %} 6. Delete the local copy of the node certificate and key: @@ -95,33 +99,32 @@ Locally, you'll need to [create the following certificates and keys](create-secu 7. 
Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 8. Upload the CA certificate and node certificate and key to the second node: {% if page.title contains "AWS" %} {% include copy-clipboard.html %} ~~~ shell - $ ssh -i /path/.pem @ "mkdir certs" + $ ssh @ "mkdir certs" ~~~ {% include copy-clipboard.html %} ~~~ shell - $ scp -i /path/.pem \ - certs/ca.crt \ + $ scp certs/ca.crt \ certs/node.crt \ certs/node.key \ @:~/certs @@ -147,12 +150,12 @@ Locally, you'll need to [create the following certificates and keys](create-secu 10. Create a client certificate and key for the `root` user: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-client \ + root \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: @@ -173,4 +176,4 @@ Locally, you'll need to [create the following certificates and keys](create-secu {{site.data.alerts.callout_info}} On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). -{{site.data.alerts.end}} +{{site.data.alerts.end}} \ No newline at end of file diff --git a/_includes/v19.2/prod-deployment/secure-scale-cluster.md b/_includes/v19.2/prod-deployment/secure-scale-cluster.md index ceaebb5df43..f7d1fcd51f3 100644 --- a/_includes/v19.2/prod-deployment/secure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-scale-cluster.md @@ -29,7 +29,7 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command, pointing `--advertise-addr` to the new node and `--join` to the three existing nodes (also include `--locality` if you set it earlier). +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell diff --git a/_includes/v19.2/prod-deployment/secure-test-cluster.md b/_includes/v19.2/prod-deployment/secure-test-cluster.md index 16f74db7e33..25af5af0414 100644 --- a/_includes/v19.2/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-test-cluster.md @@ -1,6 +1,6 @@ CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. 
-When using a load balancer, you must issue commands directly to the load balancer, which then routes traffic to the nodes. +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: diff --git a/_includes/v19.2/prod-deployment/synchronize-clocks.md b/_includes/v19.2/prod-deployment/synchronize-clocks.md index 1599d9ae346..775985fc6d4 100644 --- a/_includes/v19.2/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.2/prod-deployment/synchronize-clocks.md @@ -60,7 +60,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}} 6. Verify that the machine is using a Google NTP server: @@ -78,11 +78,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, all AWS machines should use the internal [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service), and all other non-GCE machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). - -{{site.data.alerts.callout_info}} -The Google and Amazon services handle "smearing" the leap second in compatible ways. -{{site.data.alerts.end}} +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "AWS" %} @@ -91,11 +87,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2 - [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. 
-- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, all GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and all other non-AWS machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks).
-
-{{site.data.alerts.callout_info}}
-The Google and Amazon services handle "smearing" the leap second in compatible ways.
-{{site.data.alerts.end}}
+- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.

 {% elsif page.title contains "Azure" %}

@@ -167,7 +159,7 @@
-    {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}}
+    {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}}

 7. Verify that the machine is using a Google NTP server:

diff --git a/v19.1/deploy-cockroachdb-on-aws-insecure.md b/v19.1/deploy-cockroachdb-on-aws-insecure.md
index d0077cda482..1865753c7e8 100644
--- a/v19.1/deploy-cockroachdb-on-aws-insecure.md
+++ b/v19.1/deploy-cockroachdb-on-aws-insecure.md
@@ -37,7 +37,7 @@ Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch a

 - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core.

-- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group.
+- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group.

 - Make sure all your instances are in the same security group.

@@ -92,9 +92,9 @@ You can set your network IP by selecting "My IP" in the Source field.

     Type | Custom TCP Rule
     Protocol | TCP
     Port Range | **8080**
-    Source | The CIDR of your VPC (e.g., 10.12.0.0/16)
+    Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16)

-To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console.
+To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console.

 ## Step 3. Synchronize clocks

diff --git a/v19.1/deploy-cockroachdb-on-aws.md b/v19.1/deploy-cockroachdb-on-aws.md
index e3a566583d9..3698d568bbd 100644
--- a/v19.1/deploy-cockroachdb-on-aws.md
+++ b/v19.1/deploy-cockroachdb-on-aws.md
@@ -38,7 +38,7 @@ Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch a

 - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core.

-- Note the ID of the VPC you select. 
You will need to look up its CIDR when setting inbound rules for your security group. +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. - Make sure all your instances are in the same security group. @@ -95,9 +95,9 @@ You can set your network IP by selecting "My IP" in the Source field. Type | Custom TCP Rule Protocol | TCP Port Range | **8080** - Source | The CIDR of your VPC (e.g., 10.12.0.0/16) + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) -To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks diff --git a/v19.2/deploy-cockroachdb-on-aws-insecure.md b/v19.2/deploy-cockroachdb-on-aws-insecure.md index 7763623494a..c2d2cd1db3d 100644 --- a/v19.2/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.2/deploy-cockroachdb-on-aws-insecure.md @@ -37,7 +37,7 @@ Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch a - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. -- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group. +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. - Make sure all your instances are in the same security group. @@ -92,9 +92,9 @@ You can set your network IP by selecting "My IP" in the Source field. Type | Custom TCP Rule Protocol | TCP Port Range | **8080** - Source | The CIDR of your VPC (e.g., 10.12.0.0/16) + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) -To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks diff --git a/v19.2/deploy-cockroachdb-on-aws.md b/v19.2/deploy-cockroachdb-on-aws.md index e3a566583d9..3698d568bbd 100644 --- a/v19.2/deploy-cockroachdb-on-aws.md +++ b/v19.2/deploy-cockroachdb-on-aws.md @@ -38,7 +38,7 @@ Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch a - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. -- Note the ID of the VPC you select. You will need to look up its CIDR when setting inbound rules for your security group. +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. - Make sure all your instances are in the same security group. @@ -95,9 +95,9 @@ You can set your network IP by selecting "My IP" in the Source field. 
Type | Custom TCP Rule Protocol | TCP Port Range | **8080** - Source | The CIDR of your VPC (e.g., 10.12.0.0/16) + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) -To get the CIDR of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks From 2b54ece69bc7de764e6cbf73c1e6215547a580f3 Mon Sep 17 00:00:00 2001 From: taroface Date: Wed, 6 Nov 2019 14:30:23 -0500 Subject: [PATCH 6/6] clock sync and AWS markdown, formatting improvements --- .../v19.1/prod-deployment/synchronize-clocks.md | 12 ++++++++---- .../v19.2/prod-deployment/synchronize-clocks.md | 12 ++++++++---- v19.1/deploy-cockroachdb-on-aws-insecure.md | 4 ++-- v19.1/deploy-cockroachdb-on-aws.md | 4 ++-- v19.2/deploy-cockroachdb-on-aws-insecure.md | 4 ++-- v19.2/deploy-cockroachdb-on-aws.md | 4 ++-- 6 files changed, 24 insertions(+), 16 deletions(-) diff --git a/_includes/v19.1/prod-deployment/synchronize-clocks.md b/_includes/v19.1/prod-deployment/synchronize-clocks.md index 775985fc6d4..9d0fed14d5d 100644 --- a/_includes/v19.1/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.1/prod-deployment/synchronize-clocks.md @@ -60,7 +60,9 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}} + We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. + {{site.data.alerts.end}} 6. Verify that the machine is using a Google NTP server: @@ -78,7 +80,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. +- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. 
See the [Production Checklist](recommended-production-settings.html#considerations) for details.

 {% elsif page.title contains "AWS" %}

@@ -87,7 +89,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2

 - [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
     - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
     - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
-- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.

 {% elsif page.title contains "Azure" %}

@@ -159,7 +161,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2

     $ sudo service ntp start
     ~~~

-    {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}}
+    {{site.data.alerts.callout_info}}
+    We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+    {{site.data.alerts.end}}

 7. Verify that the machine is using a Google NTP server:

diff --git a/_includes/v19.2/prod-deployment/synchronize-clocks.md b/_includes/v19.2/prod-deployment/synchronize-clocks.md
index 775985fc6d4..9d0fed14d5d 100644
--- a/_includes/v19.2/prod-deployment/synchronize-clocks.md
+++ b/_includes/v19.2/prod-deployment/synchronize-clocks.md
@@ -60,7 +60,9 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod

     $ sudo service ntp start
     ~~~

-    {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}}
+    {{site.data.alerts.callout_info}}
+    We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+    {{site.data.alerts.end}}

 6. 
Verify that the machine is using a Google NTP server:

@@ -78,7 +80,7 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod

 Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:

 - [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances).
-- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.

 {% elsif page.title contains "AWS" %}

@@ -87,7 +89,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2

 - [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
     - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
     - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
-- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.

 {% elsif page.title contains "Azure" %}

@@ -159,7 +161,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2

     $ sudo service ntp start
     ~~~

-    {{site.data.alerts.callout_info}}We recommend Google's NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the Production Checklist for details.{{site.data.alerts.end}}
+    {{site.data.alerts.callout_info}}
+    We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+    {{site.data.alerts.end}}

 7. 
Verify that the machine is using a Google NTP server: diff --git a/v19.1/deploy-cockroachdb-on-aws-insecure.md b/v19.1/deploy-cockroachdb-on-aws-insecure.md index 1865753c7e8..7ea4c39a6d9 100644 --- a/v19.1/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.1/deploy-cockroachdb-on-aws-insecure.md @@ -110,8 +110,8 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to: - - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. - Set the load balancer port to **26257**. - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. diff --git a/v19.1/deploy-cockroachdb-on-aws.md b/v19.1/deploy-cockroachdb-on-aws.md index 3698d568bbd..ca4ec69e42d 100644 --- a/v19.1/deploy-cockroachdb-on-aws.md +++ b/v19.1/deploy-cockroachdb-on-aws.md @@ -113,8 +113,8 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to: - - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. - Set the load balancer port to **26257**. - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. diff --git a/v19.2/deploy-cockroachdb-on-aws-insecure.md b/v19.2/deploy-cockroachdb-on-aws-insecure.md index c2d2cd1db3d..aa038d29857 100644 --- a/v19.2/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.2/deploy-cockroachdb-on-aws-insecure.md @@ -110,8 +110,8 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). 
Be sure to: - - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. - Set the load balancer port to **26257**. - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. diff --git a/v19.2/deploy-cockroachdb-on-aws.md b/v19.2/deploy-cockroachdb-on-aws.md index 3698d568bbd..ca4ec69e42d 100644 --- a/v19.2/deploy-cockroachdb-on-aws.md +++ b/v19.2/deploy-cockroachdb-on-aws.md @@ -113,8 +113,8 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html#scale-and-load-balance). Be sure to: - - Select a **Network Load Balancer** (not an Application Load Balancer, as in the above instructions) and use the ports we specify below. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. - Set the load balancer port to **26257**. - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances.
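
The Network Load Balancer setup described in these hunks can also be scripted with the AWS CLI rather than the console. The following is a minimal sketch, not part of the patched docs: it assumes an installed and authenticated `aws` CLI, and every name and ID in it (`crdb-targets`, `crdb-nlb`, the `vpc-`/`subnet-`/`i-` values, and the `<... ARN>` placeholders) is hypothetical and must be replaced with your own VPC, subnet, and instance identifiers. It creates a TCP target group on port 26257 that health-checks each node over HTTP against CockroachDB's `/health?ready=1` endpoint on port 8080, registers the instances, creates the NLB across one subnet per availability zone, and attaches a TCP listener on 26257.

~~~ shell
# Target group for SQL traffic; health checks use CockroachDB's HTTP
# readiness endpoint on port 8080 rather than the SQL port itself.
$ aws elbv2 create-target-group \
    --name crdb-targets \
    --protocol TCP \
    --port 26257 \
    --vpc-id vpc-0123456789abcdef0 \
    --health-check-protocol HTTP \
    --health-check-port 8080 \
    --health-check-path "/health?ready=1"

# Network Load Balancer spanning one subnet per availability zone.
$ aws elbv2 create-load-balancer \
    --name crdb-nlb \
    --type network \
    --subnets subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333

# Register the CockroachDB instances with the target group.
$ aws elbv2 register-targets \
    --target-group-arn <target group ARN> \
    --targets Id=i-aaaa1111 Id=i-bbbb2222 Id=i-cccc3333

# Forward TCP 26257 on the load balancer to the target group.
$ aws elbv2 create-listener \
    --load-balancer-arn <load balancer ARN> \
    --protocol TCP \
    --port 26257 \
    --default-actions Type=forward,TargetGroupArn=<target group ARN>
~~~

Once the nodes are running, `aws elbv2 describe-target-health --target-group-arn <target group ARN>` should report each instance as `healthy`, and clients can then reach the cluster through the load balancer's DNS name on port 26257.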