diff --git a/_includes/v19.1/faq/clock-synchronization-effects.md b/_includes/v19.1/faq/clock-synchronization-effects.md index 4e7ef72b4ab..d0849e7d3a9 100644 --- a/_includes/v19.1/faq/clock-synchronization-effects.md +++ b/_includes/v19.1/faq/clock-synchronization-effects.md @@ -4,16 +4,11 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim ### Considerations -There are important considerations when setting up clock synchronization: +When setting up clock synchronization: -- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. - - {{site.data.alerts.callout_info}} - Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead. - {{site.data.alerts.end}} - -- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster. +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). +- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. +- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. - Do not run more than one clock sync service on VMs where `cockroach` is running. 
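For the client-side leap smearing mentioned above, a minimal `chrony` sketch looks like the following; the directive values are illustrative assumptions and must be applied identically on every machine in the cluster:

~~~ shell
# Illustrative example: append client-side leap-smearing directives to
# chrony's configuration, then restart the service. Tune the values for
# your environment and keep them identical on every node.
echo '
leapsecmode slew
maxslewrate 1000
smoothtime 400 0.001 leaponly' | sudo tee -a /etc/chrony.conf

# The service may be named "chrony" rather than "chronyd" on some distributions.
sudo systemctl restart chronyd
~~~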
### Tutorials diff --git a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md index 9b2bfcbfbc6..6a430037787 100644 --- a/_includes/v19.1/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-scale-cluster.md @@ -29,16 +29,16 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell - $ cockroach start --insecure \ + $ cockroach start \ + --insecure \ --advertise-addr= \ - --locality= \ + --join=,, \ --cache=.25 \ --max-sql-memory=.25 \ - --join=,, \ --background ~~~ diff --git a/_includes/v19.1/prod-deployment/insecure-test-cluster.md b/_includes/v19.1/prod-deployment/insecure-test-cluster.md index 307b8f999b9..faece96fa42 100644 --- a/_includes/v19.1/prod-deployment/insecure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/insecure-test-cluster.md @@ -1,12 +1,14 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell - $ cockroach sql --insecure --host=
+ $ cockroach sql --insecure --host=<address of load balancer>
~~~ 2. Create an `insecurenodetest` database: @@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo > CREATE DATABASE insecurenodetest; ~~~ -3. Use `\q` or `ctrl-d` to exit the SQL shell. - -4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
- ~~~ - -5. View the cluster's databases, which will include `insecurenodetest`: +3. View the cluster's databases, which will include `insecurenodetest`: {% include copy-clipboard.html %} ~~~ sql @@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo (5 rows) ~~~ -6. Use `\q` to exit the SQL shell. +4. Use `\q` to exit the SQL shell. diff --git a/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md b/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md index e4369b54410..9e594e0a864 100644 --- a/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md +++ b/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=disable" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.1/prod-deployment/secure-generate-certificates.md b/_includes/v19.1/prod-deployment/secure-generate-certificates.md index abbd4a331eb..41d3ae28beb 100644 --- a/_includes/v19.1/prod-deployment/secure-generate-certificates.md +++ b/_includes/v19.1/prod-deployment/secure-generate-certificates.md @@ -27,44 +27,65 @@ Locally, you'll need to [create the following certificates and keys](create-secu 3. Create the CA certificate and key: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-ca \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 5. Upload the CA certificate and node certificate and key to the first node: + {% if page.title contains "AWS" %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh-add /path/.pem + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 6. Delete the local copy of the node certificate and key: @@ -78,48 +99,63 @@ Locally, you'll need to [create the following certificates and keys](create-secu 7. 
Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 8. Upload the CA certificate and node certificate and key to the second node: + {% if page.title contains "AWS" %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 9. Repeat steps 6 - 8 for each additional node. 10. Create a client certificate and key for the `root` user: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-client \ + root \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: @@ -140,4 +176,4 @@ Locally, you'll need to [create the following certificates and keys](create-secu {{site.data.alerts.callout_info}} On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). -{{site.data.alerts.end}} +{{site.data.alerts.end}} \ No newline at end of file diff --git a/_includes/v19.1/prod-deployment/secure-scale-cluster.md b/_includes/v19.1/prod-deployment/secure-scale-cluster.md index 20c6bb2b967..4446543395d 100644 --- a/_includes/v19.1/prod-deployment/secure-scale-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-scale-cluster.md @@ -29,17 +29,16 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). 
{% include copy-clipboard.html %} ~~~ shell $ cockroach start \ --certs-dir=certs \ --advertise-addr= \ - --locality= \ + --join=,, \ --cache=.25 \ --max-sql-memory=.25 \ - --join=,, \ --background ~~~ diff --git a/_includes/v19.1/prod-deployment/secure-test-cluster.md b/_includes/v19.1/prod-deployment/secure-test-cluster.md index ba8b3370bb1..25af5af0414 100644 --- a/_includes/v19.1/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.1/prod-deployment/secure-test-cluster.md @@ -1,12 +1,14 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell - $ cockroach sql --certs-dir=certs --host=
+ $ cockroach sql --certs-dir=certs --host=<address of load balancer>
~~~ 2. Create a `securenodetest` database: @@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo > CREATE DATABASE securenodetest; ~~~ -3. Use `\q` to exit the SQL shell. - -4. Launch the built-in SQL client against a different node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - -5. View the cluster's databases, which will include `securenodetest`: +3. View the cluster's databases, which will include `securenodetest`: {% include copy-clipboard.html %} ~~~ sql @@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo (5 rows) ~~~ -6. Use `\q` to exit the SQL shell. +4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/_includes/v19.1/prod-deployment/secure-test-load-balancing.md b/_includes/v19.1/prod-deployment/secure-test-load-balancing.md index 528669db93e..17eed4194a0 100644 --- a/_includes/v19.1/prod-deployment/secure-test-load-balancing.md +++ b/_includes/v19.1/prod-deployment/secure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.1/prod-deployment/synchronize-clocks.md b/_includes/v19.1/prod-deployment/synchronize-clocks.md index d796023946c..9d0fed14d5d 100644 --- a/_includes/v19.1/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.1/prod-deployment/synchronize-clocks.md @@ -60,7 +60,9 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}} + We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. + {{site.data.alerts.end}} 6. Verify that the machine is using a Google NTP server: @@ -73,19 +75,21 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod 7. Repeat these steps for each machine where a CockroachDB node will run. -{% elsif page.title contains "Google" %} +{% elsif page.title contains "Google" %} Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: -- [Configure each GCE instances to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). 
+- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). +- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "AWS" %} Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second. -- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). -- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second. +- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). + - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. + - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. +- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "Azure" %} @@ -157,7 +161,9 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2 $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}} + We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. + {{site.data.alerts.end}} 7.
Verify that the machine is using a Google NTP server: diff --git a/_includes/v19.2/faq/clock-synchronization-effects.md b/_includes/v19.2/faq/clock-synchronization-effects.md index 4e7ef72b4ab..d0849e7d3a9 100644 --- a/_includes/v19.2/faq/clock-synchronization-effects.md +++ b/_includes/v19.2/faq/clock-synchronization-effects.md @@ -4,16 +4,11 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim ### Considerations -There are important considerations when setting up clock synchronization: +When setting up clock synchronization: -- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server. - - {{site.data.alerts.callout_info}} - Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead. - {{site.data.alerts.end}} - -- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster. +- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). +- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. +- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. 
- Do not run more than one clock sync service on VMs where `cockroach` is running. ### Tutorials diff --git a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md index 9b2bfcbfbc6..6a430037787 100644 --- a/_includes/v19.2/prod-deployment/insecure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-scale-cluster.md @@ -29,16 +29,16 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell - $ cockroach start --insecure \ + $ cockroach start \ + --insecure \ --advertise-addr= \ - --locality= \ + --join=,, \ --cache=.25 \ --max-sql-memory=.25 \ - --join=,, \ --background ~~~ diff --git a/_includes/v19.2/prod-deployment/insecure-test-cluster.md b/_includes/v19.2/prod-deployment/insecure-test-cluster.md index 307b8f999b9..83c31569efc 100644 --- a/_includes/v19.2/prod-deployment/insecure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/insecure-test-cluster.md @@ -1,48 +1,41 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
- ~~~ - -2. Create an `insecurenodetest` database: +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE insecurenodetest; - ~~~ + ~~~ shell + $ cockroach sql --insecure --host=<address of load balancer>
+ ~~~ -3. Use `\q` or `ctrl-d` to exit the SQL shell. - -4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node: +2. Create an `insecurenodetest` database: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
- ~~~ + ~~~ sql + > CREATE DATABASE insecurenodetest; + ~~~ -5. View the cluster's databases, which will include `insecurenodetest`: +3. View the cluster's databases, which will include `insecurenodetest`: {% include copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | insecurenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -6. Use `\q` to exit the SQL shell. + ~~~ sql + > SHOW DATABASES; + ~~~ + + ~~~ + +--------------------+ + | Database | + +--------------------+ + | crdb_internal | + | information_schema | + | insecurenodetest | + | pg_catalog | + | system | + +--------------------+ + (5 rows) + ~~~ + +4. Use `\q` to exit the SQL shell. diff --git a/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md b/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md index e4369b54410..9e594e0a864 100644 --- a/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md +++ b/_includes/v19.2/prod-deployment/insecure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=disable" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.2/prod-deployment/secure-generate-certificates.md b/_includes/v19.2/prod-deployment/secure-generate-certificates.md index abbd4a331eb..41d3ae28beb 100644 --- a/_includes/v19.2/prod-deployment/secure-generate-certificates.md +++ b/_includes/v19.2/prod-deployment/secure-generate-certificates.md @@ -27,44 +27,65 @@ Locally, you'll need to [create the following certificates and keys](create-secu 3. Create the CA certificate and key: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-ca \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 5. 
Upload the CA certificate and node certificate and key to the first node: + {% if page.title contains "AWS" %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh-add /path/.pem + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 6. Delete the local copy of the node certificate and key: @@ -78,48 +99,63 @@ Locally, you'll need to [create the following certificates and keys](create-secu 7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-node \ + \ + \ + \ + \ + localhost \ + 127.0.0.1 \ + \ + \ + \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 8. Upload the CA certificate and node certificate and key to the second node: + {% if page.title contains "AWS" %} {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + + {% else %} + {% include copy-clipboard.html %} + ~~~ shell + $ ssh @ "mkdir certs" + ~~~ + + {% include copy-clipboard.html %} + ~~~ shell + $ scp certs/ca.crt \ + certs/node.crt \ + certs/node.key \ + @:~/certs + ~~~ + {% endif %} 9. Repeat steps 6 - 8 for each additional node. 10. Create a client certificate and key for the `root` user: {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ + ~~~ shell + $ cockroach cert create-client \ + root \ + --certs-dir=certs \ + --ca-key=my-safe-directory/ca.key + ~~~ 11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: @@ -140,4 +176,4 @@ Locally, you'll need to [create the following certificates and keys](create-secu {{site.data.alerts.callout_info}} On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). 
-{{site.data.alerts.end}} +{{site.data.alerts.end}} \ No newline at end of file diff --git a/_includes/v19.2/prod-deployment/secure-scale-cluster.md b/_includes/v19.2/prod-deployment/secure-scale-cluster.md index 20c6bb2b967..4446543395d 100644 --- a/_includes/v19.2/prod-deployment/secure-scale-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-scale-cluster.md @@ -29,17 +29,16 @@ For each additional node you want to add to the cluster, complete the following If you get a permissions error, prefix the command with `sudo`. -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: +4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). {% include copy-clipboard.html %} ~~~ shell $ cockroach start \ --certs-dir=certs \ --advertise-addr= \ - --locality= \ + --join=,, \ --cache=.25 \ --max-sql-memory=.25 \ - --join=,, \ --background ~~~ diff --git a/_includes/v19.2/prod-deployment/secure-test-cluster.md b/_includes/v19.2/prod-deployment/secure-test-cluster.md index ba8b3370bb1..25af5af0414 100644 --- a/_includes/v19.2/prod-deployment/secure-test-cluster.md +++ b/_includes/v19.2/prod-deployment/secure-test-cluster.md @@ -1,12 +1,14 @@ -CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. +CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: +When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. -1. On your local machine, launch the built-in SQL client: +Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: + +1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: {% include copy-clipboard.html %} ~~~ shell - $ cockroach sql --certs-dir=certs --host=
+ $ cockroach sql --certs-dir=certs --host=<address of load balancer>
~~~ 2. Create a `securenodetest` database: @@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo > CREATE DATABASE securenodetest; ~~~ -3. Use `\q` to exit the SQL shell. - -4. Launch the built-in SQL client against a different node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - -5. View the cluster's databases, which will include `securenodetest`: +3. View the cluster's databases, which will include `securenodetest`: {% include copy-clipboard.html %} ~~~ sql @@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo (5 rows) ~~~ -6. Use `\q` to exit the SQL shell. +4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/_includes/v19.2/prod-deployment/secure-test-load-balancing.md b/_includes/v19.2/prod-deployment/secure-test-load-balancing.md index 528669db93e..17eed4194a0 100644 --- a/_includes/v19.2/prod-deployment/secure-test-load-balancing.md +++ b/_includes/v19.2/prod-deployment/secure-test-load-balancing.md @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several --init \ --duration=20m \ --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key" ~~~ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. diff --git a/_includes/v19.2/prod-deployment/synchronize-clocks.md b/_includes/v19.2/prod-deployment/synchronize-clocks.md index d796023946c..9d0fed14d5d 100644 --- a/_includes/v19.2/prod-deployment/synchronize-clocks.md +++ b/_includes/v19.2/prod-deployment/synchronize-clocks.md @@ -60,7 +60,9 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}} + We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. + {{site.data.alerts.end}} 6. Verify that the machine is using a Google NTP server: @@ -73,19 +75,21 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod 7. Repeat these steps for each machine where a CockroachDB node will run. -{% elsif page.title contains "Google" %} +{% elsif page.title contains "Google" %} Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: -- [Configure each GCE instances to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). 
+- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). +- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "AWS" %} Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second. -- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). -- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second. +- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). + - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. + - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. +- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. {% elsif page.title contains "Azure" %} @@ -157,7 +161,9 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2 $ sudo service ntp start ~~~ - {{site.data.alerts.callout_info}}We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}} + {{site.data.alerts.callout_info}} + We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. + {{site.data.alerts.end}} 7.
Verify that the machine is using a Google NTP server: diff --git a/v19.1/deploy-cockroachdb-on-aws-insecure.md b/v19.1/deploy-cockroachdb-on-aws-insecure.md index 369bda7b97f..7ea4c39a6d9 100644 --- a/v19.1/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.1/deploy-cockroachdb-on-aws-insecure.md @@ -23,16 +23,36 @@ This page shows you how to manually deploy an insecure multi-node CockroachDB cl {% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-8-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. + + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. 
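The inbound rules described below can also be created from the command line. For example, assuming the AWS CLI is installed and configured, the inter-node rule might be added along these lines (the security group ID is a placeholder):

~~~ shell
# Illustrative sketch: allow TCP traffic on 26257 between instances in the
# same security group. Replace sg-07ab277a with your security group ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-07ab277a \
  --protocol tcp \
  --port 26257 \
  --source-group sg-07ab277a
~~~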
#### Inter-node and load balancer-node communication @@ -41,16 +61,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -61,17 +72,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-8-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI + + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). +You can set your network IP by selecting "My IP" in the Source field. -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +#### Load balancer-health check communication -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks @@ -87,10 +110,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. 
This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. + - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. + - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. +2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -112,10 +139,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 9. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 10. Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. + {% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} ## Step 11. Use the cluster diff --git a/v19.1/deploy-cockroachdb-on-aws.md b/v19.1/deploy-cockroachdb-on-aws.md index 4bd2410c8cd..ca4ec69e42d 100644 --- a/v19.1/deploy-cockroachdb-on-aws.md +++ b/v19.1/deploy-cockroachdb-on-aws.md @@ -24,16 +24,38 @@ If you are only testing CockroachDB, or you are not concerned with protecting ne {% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. 
If you plan to [run our sample workload](#step-9-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. + + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +- When creating the instance, you will download a private key file used to securely connect to your instances. Decide where to place this file, and note the file path for later commands. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. #### Inter-node and load balancer-node communication @@ -42,16 +64,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -62,17 +75,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-9-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. 
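Alternatively, an instance's private IP address can be looked up with the AWS CLI, for example (the instance ID is a placeholder):

~~~ shell
# Illustrative sketch: print the private IP address of the workload instance.
# Replace the instance ID with the ID of your instance.
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].PrivateIpAddress' \
  --output text
~~~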
-[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI -- Run at least 3 nodes to ensure survivability. + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +You can set your network IP by selecting "My IP" in the Source field. -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. +#### Load balancer-health check communication -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) + +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks @@ -88,10 +113,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. + - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. +2. 
To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -117,10 +146,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 10. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 11. Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. Then [generate and upload a certificate and key](#step-5-generate-certificates) for the new node. + {% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} ## Step 12. Use the database diff --git a/v19.2/deploy-cockroachdb-on-aws-insecure.md b/v19.2/deploy-cockroachdb-on-aws-insecure.md index 21f279b1fb6..aa038d29857 100644 --- a/v19.2/deploy-cockroachdb-on-aws-insecure.md +++ b/v19.2/deploy-cockroachdb-on-aws-insecure.md @@ -23,16 +23,36 @@ This page shows you how to manually deploy an insecure multi-node CockroachDB cl {% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-8-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. 
+ + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. #### Inter-node and load balancer-node communication @@ -41,16 +61,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -61,17 +72,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-8-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI + + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). +You can set your network IP by selecting "My IP" in the Source field. -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +#### Load balancer-health check communication -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. 
+ Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. Synchronize clocks @@ -87,10 +110,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. + - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. + - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. +2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -112,10 +139,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 9. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 10. 
Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. + {% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} ## Step 11. Use the cluster @@ -123,7 +154,7 @@ AWS offers fully-managed load balancing to distribute traffic between instances. Now that your deployment is working, you can: 1. [Implement your data model](sql-statements.html). -2. [Create users](create-user.html) and [grant them privileges](grant.html). +2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). 3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the AWS load balancer, not to a CockroachDB node. ## See also diff --git a/v19.2/deploy-cockroachdb-on-aws.md b/v19.2/deploy-cockroachdb-on-aws.md index 4bd2410c8cd..ca4ec69e42d 100644 --- a/v19.2/deploy-cockroachdb-on-aws.md +++ b/v19.2/deploy-cockroachdb-on-aws.md @@ -24,16 +24,38 @@ If you are only testing CockroachDB, or you are not concerned with protecting ne {% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} -- All instances running CockroachDB should be members of the same Security Group. +- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). -## Step 1. Configure your network +## Step 1. Create instances + +Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-9-run-a-sample-workload) against the cluster, create a separate instance for that workload. + +- Run at least 3 nodes to ensure survivability. + +- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. + +- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. + + - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. + +- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. + +- Make sure all your instances are in the same security group. + + - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. + +- When creating the instance, you will download a private key file used to securely connect to your instances. Decide where to place this file, and note the file path for later commands. + +For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + +## Step 2. 
Configure your network CockroachDB requires TCP communication on two ports: - `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI +- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). +If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. #### Inter-node and load balancer-node communication @@ -42,16 +64,7 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Type | Custom TCP Rule Protocol | TCP Port Range | **26257** - Source | The name of your security group (e.g., *sg-07ab277a*) - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges + Source | The ID of your security group (e.g., *sg-07ab277a*) #### Application data @@ -62,17 +75,29 @@ You can create these rules using [Security Groups' Inbound Rules](http://docs.aw Port Range | **26257** Source | Your application's IP ranges -## Step 2. Create instances +If you plan to [run our sample workload](#step-9-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. +#### Admin UI -- Run at least 3 nodes to ensure survivability. + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | Your network's IP ranges -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. +You can set your network IP by selecting "My IP" in the Source field. -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. +#### Load balancer-health check communication -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). + Field | Recommended Value +-------|------------------- + Type | Custom TCP Rule + Protocol | TCP + Port Range | **8080** + Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) + +To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. ## Step 3. 
Synchronize clocks @@ -88,10 +113,14 @@ Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to AWS offers fully-managed load balancing to distribute traffic between instances. -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. +1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: + - Select a **Network Load Balancer** and use the ports we specify below. + - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. + - Set the load balancer port to **26257**. + - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. + - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. +2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. {{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} @@ -117,10 +146,14 @@ AWS offers fully-managed load balancing to distribute traffic between instances. ## Step 10. Monitor the cluster +In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. + {% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} ## Step 11. Scale the cluster +Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. Then [generate and upload a certificate and key](#step-5-generate-certificates) for the new node. + {% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} ## Step 12. Use the database
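The monitoring additions above point at the Targets tab in the console; the same health information is also available from the command line. A sketch, assuming the AWS CLI is configured and the target group ARN shown is a placeholder for the one created in the load-balancing step:

~~~ shell
# Report the health state of every instance registered with the target group (placeholder ARN).
aws elbv2 describe-target-health \
  --target-group-arn "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/cockroachdb/0123456789abcdef" \
  --query "TargetHealthDescriptions[].{instance: Target.Id, port: Target.Port, state: TargetHealth.State}" \
  --output table

# Ask the same readiness question the health check asks; run this from a machine inside the VPC.
# On secure clusters the Admin UI port serves TLS, so use https:// (and curl --insecure if the
# node's certificate is not trusted by the client).
curl "http://<node-private-ip>:8080/health?ready=1"
~~~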