Merge pull request #5677 from cockroachdb/aws-deploy-docs
refresh AWS deployment doc + relevant includes
taroface authored Nov 6, 2019
2 parents ca534d2 + 2b54ece commit a2e5404
Showing 22 changed files with 521 additions and 349 deletions.
13 changes: 4 additions & 9 deletions _includes/v19.1/faq/clock-synchronization-effects.md
Expand Up @@ -4,16 +4,11 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim

### Considerations

There are important considerations when setting up clock synchronization:
When setting up clock synchronization:

- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server.

{{site.data.alerts.callout_info}}
Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead.
{{site.data.alerts.end}}

- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster.
- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing).
- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should.
- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. A sample `chrony` configuration appears after this list.
- Do not run more than one clock sync service on VMs where `cockroach` is running.
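
For example, here is a minimal, hypothetical sketch of pointing `chrony` at Google Public NTP on a Linux host; the configuration file path and service name vary by distribution, and you should remove or comment out any existing `server`/`pool` lines first so that time sources are not mixed:

{% include copy-clipboard.html %}
~~~ shell
# Hypothetical sketch: append the Google Public NTP servers to the chrony config.
# Remove or comment out any existing server/pool lines first.
$ cat <<'EOF' | sudo tee -a /etc/chrony.conf
server time1.google.com iburst
server time2.google.com iburst
server time3.google.com iburst
server time4.google.com iburst
EOF
$ sudo systemctl restart chronyd
~~~

If you use a time source that does not smear the leap second, this is also where you would configure client-side smearing (for example, `chrony`'s `leapsecmode slew` and `maxslewrate` directives), identically on every machine.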

### Tutorials
8 changes: 4 additions & 4 deletions _includes/v19.1/prod-deployment/insecure-scale-cluster.md
Expand Up @@ -29,16 +29,16 @@ For each additional node you want to add to the cluster, complete the following

If you get a permissions error, prefix the command with `sudo`.

4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
4. Run the [`cockroach start`](start-a-node.html) command, setting `--advertise-addr` to the new node's address and `--join` to the addresses of the three existing nodes. Also include `--locality` if you set it earlier. An optional way to verify that the node joined is sketched after the command:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
$ cockroach start \
--insecure \
--advertise-addr=<node4 address> \
--locality=<key-value pairs> \
--join=<node1 address>,<node2 address>,<node3 address> \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~
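
As a quick, optional check (not part of the original steps), you can confirm that the new node has joined the cluster by running `cockroach node status` against any node or the load balancer:

{% include copy-clipboard.html %}
~~~ shell
# Optional: list all nodes in the cluster; the new node should appear in the output.
$ cockroach node status \
--insecure \
--host=<address of any node>
~~~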
23 changes: 8 additions & 15 deletions _includes/v19.1/prod-deployment/insecure-test-cluster.md
@@ -1,12 +1,14 @@
CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.

To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
When using a load balancer, issue commands to the load balancer, which then routes traffic to the nodes. A quick way to observe this routing is sketched after the steps below.

1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node:
Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:

1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --host=<address of any node>
$ cockroach sql --insecure --host=<address of load balancer>
~~~

2. Create an `insecurenodetest` database:
Expand All @@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
> CREATE DATABASE insecurenodetest;
~~~

3. Use `\q` or `ctrl-d` to exit the SQL shell.

4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --host=<address of different node>
~~~

5. View the cluster's databases, which will include `insecurenodetest`:
3. View the cluster's databases, which will include `insecurenodetest`:

{% include copy-clipboard.html %}
~~~ sql
Expand All @@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
(5 rows)
~~~

6. Use `\q` to exit the SQL shell.
4. Use `\q` to exit the SQL shell.
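
As an optional illustration of the gateway behavior (a sketch, assuming the `crdb_internal.node_id()` builtin, which returns the ID of the node serving the current connection), connect through the load balancer a few times and note how the reported node ID can change as requests are routed to different nodes:

{% include copy-clipboard.html %}
~~~ shell
# Each run opens a new connection through the load balancer,
# so the reported node ID may differ from run to run.
$ cockroach sql --insecure --host=<address of load balancer> \
--execute="SELECT crdb_internal.node_id();"
~~~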
Expand Up @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several
--init \
--duration=20m \
--tolerate-errors \
"postgresql://root@<IP ADDRESS OF LOAD BALANCER:26257/tpcc?sslmode=disable"
"postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=disable"
~~~

This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
158 changes: 97 additions & 61 deletions _includes/v19.1/prod-deployment/secure-generate-certificates.md
Expand Up @@ -27,44 +27,65 @@ Locally, you'll need to [create the following certificates and keys](create-secu
3. Create the CA certificate and key:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-ca \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-node \
<node1 internal IP address> \
<node1 external IP address> \
<node1 hostname> \
<other common names for node1> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
5. Upload the CA certificate and node certificate and key to the first node:
{% if page.title contains "AWS" %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node1 address> "mkdir certs"
~~~
~~~ shell
$ ssh-add /path/<key file>.pem
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 address>:~/certs
~~~
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node1 DNS name> "mkdir certs"
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 DNS name>:~/certs
~~~
{% else %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node1 address> "mkdir certs"
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 address>:~/certs
~~~
{% endif %}
6. Delete the local copy of the node certificate and key:
Expand All @@ -78,48 +99,63 @@ Locally, you'll need to [create the following certificates and keys](create-secu
7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-node \
<node2 internal IP address> \
<node2 external IP address> \
<node2 hostname> \
<other common names for node2> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
8. Upload the CA certificate and node certificate and key to the second node:
{% if page.title contains "AWS" %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node2 address> "mkdir certs"
~~~
~~~ shell
$ ssh <username>@<node2 DNS name> "mkdir certs"
~~~
{% include copy-clipboard.html %}
~~~ shell
# Upload the CA certificate and node certificate and key:
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 address>:~/certs
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 DNS name>:~/certs
~~~
{% else %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node2 address> "mkdir certs"
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 address>:~/certs
~~~
{% endif %}
9. Repeat steps 6-8 for each additional node.
10. Create a client certificate and key for the `root` user:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-client \
root \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
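At this point, as an optional sanity check (not required by these steps), you can list the certificates and keys in the local `certs` directory; since the node certificates were deleted locally after upload, expect to see the CA certificate and the client certificate and key:

{% include copy-clipboard.html %}
~~~ shell
# Optional: show the certificates and keys CockroachDB finds in certs/.
$ cockroach cert list \
--certs-dir=certs
~~~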
11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload:
Expand All @@ -140,4 +176,4 @@ Locally, you'll need to [create the following certificates and keys](create-secu
{{site.data.alerts.callout_info}}
On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster).
{{site.data.alerts.end}}
5 changes: 2 additions & 3 deletions _includes/v19.1/prod-deployment/secure-scale-cluster.md
Expand Up @@ -29,17 +29,16 @@ For each additional node you want to add to the cluster, complete the following

If you get a permissions error, prefix the command with `sudo`.

4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
4. Run the [`cockroach start`](start-a-node.html) command, setting `--advertise-addr` to the new node's address and `--join` to the addresses of the three existing nodes. Also include `--locality` if you set it earlier. An optional way to verify that the node joined is sketched after the command:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach start \
--certs-dir=certs \
--advertise-addr=<node4 address> \
--locality=<key-value pairs> \
--join=<node1 address>,<node2 address>,<node3 address> \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~
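
Optionally (this check is not part of the original steps), confirm that the new node has joined the cluster by running `cockroach node status` with the client certificates:

{% include copy-clipboard.html %}
~~~ shell
# Optional: list all nodes in the cluster; the new node should appear in the output.
$ cockroach node status \
--certs-dir=certs \
--host=<address of any node>
~~~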
23 changes: 8 additions & 15 deletions _includes/v19.1/prod-deployment/secure-test-cluster.md
@@ -1,12 +1,14 @@
CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.

To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
When using a load balancer, issue commands to the load balancer, which then routes traffic to the nodes.

1. On your local machine, launch the built-in SQL client:
Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:

1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs --host=<address of any node>
$ cockroach sql --certs-dir=certs --host=<address of load balancer>
~~~

2. Create a `securenodetest` database:
Expand All @@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
> CREATE DATABASE securenodetest;
~~~

3. Use `\q` to exit the SQL shell.

4. Launch the built-in SQL client against a different node:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs --host=<address of different node>
~~~

5. View the cluster's databases, which will include `securenodetest`:
3. View the cluster's databases, which will include `securenodetest`:
{% include copy-clipboard.html %}
~~~ sql
Expand All @@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
(5 rows)
~~~
6. Use `\q` to exit the SQL shell.
4. Use `\q` to exit the SQL shell.
Expand Up @@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several
--init \
--duration=20m \
--tolerate-errors \
"postgresql://root@<IP ADDRESS OF LOAD BALANCER:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
"postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
~~~

This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.