refresh AWS deployment doc + relevant includes #5677

Merged (6 commits) · Nov 6, 2019 · Changes from 3 commits
8 changes: 4 additions & 4 deletions _includes/v19.1/faq/clock-synchronization-effects.md
@@ -4,16 +4,16 @@ The one rare case to note is when a node's clock suddenly jumps beyond the maxim

### Considerations

There are important considerations when setting up clock synchronization:
When setting up clock synchronization:

- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server.

{{site.data.alerts.callout_info}}
Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead.
Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP.
{{site.data.alerts.end}}

- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster.
- In a hybrid cluster, GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and AWS machines should use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). The Google and Amazon services handle ["smearing" the leap second](https://developers.google.com/time/smear) in compatible ways.
- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
- Do not run more than one clock sync service on VMs where `cockroach` is running.
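
For example, a minimal sketch of pointing `chrony` at Google Public NTP on a Linux machine might look like the following (the config file path and service name vary by distribution, so treat them as assumptions):

~~~ shell
# Add Google Public NTP as the time source; comment out any other
# server/pool lines in /etc/chrony.conf first so sources are not mixed.
$ echo "server time.google.com prefer iburst" | sudo tee -a /etc/chrony.conf

# Restart chrony and confirm the Google source is selected (marked with *).
$ sudo systemctl restart chronyd
$ chronyc sources -v
~~~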

### Tutorials
7 changes: 3 additions & 4 deletions _includes/v19.1/prod-deployment/insecure-scale-cluster.md
@@ -29,16 +29,15 @@ For each additional node you want to add to the cluster, complete the following

If you get a permissions error, prefix the command with `sudo`.

4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
4. Run the [`cockroach start`](start-a-node.html) command, setting `--advertise-addr` to the new node's address and `--join` to the addresses of the three existing nodes (also include `--locality` if you set it earlier).

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
$ cockroach start \
--advertise-addr=<node4 address> \
--locality=<key-value pairs> \
--join=<node1 address>,<node2 address>,<node3 address> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address> \
--background
~~~
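
After the new node starts, one way to confirm that it has joined the cluster (a sketch; the address below is a placeholder) is to check the node list from your local machine:

~~~ shell
$ cockroach node status --insecure --host=<address of any node>
~~~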

23 changes: 8 additions & 15 deletions _includes/v19.1/prod-deployment/insecure-test-cluster.md
@@ -1,12 +1,14 @@
CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.

To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
When using a load balancer, issue commands to the load balancer's address rather than to an individual node; the load balancer then routes traffic to the nodes.

1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node:
Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:

1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --host=<address of any node>
$ cockroach sql --insecure --host=<address of load balancer>
~~~

2. Create an `insecurenodetest` database:
@@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
> CREATE DATABASE insecurenodetest;
~~~

3. Use `\q` or `ctrl-d` to exit the SQL shell.

4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --host=<address of different node>
~~~

5. View the cluster's databases, which will include `insecurenodetest`:
3. View the cluster's databases, which will include `insecurenodetest`:

{% include copy-clipboard.html %}
~~~ sql
@@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
(5 rows)
~~~

6. Use `\q` to exit the SQL shell.
4. Use `\q` to exit the SQL shell.
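
Because any node can act as a SQL gateway, the same check can be run against an individual node instead of the load balancer; a one-off sketch (the address is a placeholder):

~~~ shell
$ cockroach sql --insecure --host=<address of any node> --execute="SHOW DATABASES;"
~~~
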
@@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several
--init \
--duration=20m \
--tolerate-errors \
"postgresql://root@<IP ADDRESS OF LOAD BALANCER:26257/tpcc?sslmode=disable"
"postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=disable"
~~~

This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
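
For context, the flags above belong to an invocation of the pre-built `workload` binary along the lines of the following sketch; the leading lines (the binary invocation and `--warehouses=1`) fall outside this hunk, so they are assumptions based on the description above:

~~~ shell
$ ./workload run tpcc \
--warehouses=1 \
--init \
--duration=20m \
--tolerate-errors \
"postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=disable"
~~~
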
153 changes: 93 additions & 60 deletions _includes/v19.1/prod-deployment/secure-generate-certificates.md
@@ -27,44 +27,61 @@ Locally, you'll need to [create the following certificates and keys](create-secu
3. Create the CA certificate and key:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-ca \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
~~~ shell
$ cockroach cert create-ca \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~

4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-node \
<node1 internal IP address> \
<node1 external IP address> \
<node1 hostname> \
<other common names for node1> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
~~~ shell
$ cockroach cert create-node \
<node1 internal IP address> \
<node1 external IP address> \
<node1 hostname> \
<other common names for node1> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~

5. Upload the CA certificate and node certificate and key to the first node:

{% if page.title contains "AWS" %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh -i /path/<key file>.pem <username>@<node1 DNS name> "mkdir certs"
~~~

{% include copy-clipboard.html %}
~~~ shell
$ scp -i /path/<key file>.pem \
certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 DNS name>:~/certs
~~~

{% else %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node1 address> "mkdir certs"
~~~
~~~ shell
$ ssh <username>@<node1 address> "mkdir certs"
~~~

{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 address>:~/certs
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 address>:~/certs
~~~
{% endif %}

6. Delete the local copy of the node certificate and key:

@@ -78,48 +95,64 @@ Locally, you'll need to [create the following certificates and keys](create-secu
7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-node \
<node2 internal IP address> \
<node2 external IP address> \
<node2 hostname> \
<other common names for node2> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
~~~ shell
$ cockroach cert create-node \
<node2 internal IP address> \
<node2 external IP address> \
<node2 hostname> \
<other common names for node2> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~

8. Upload the CA certificate and node certificate and key to the second node:

{% if page.title contains "AWS" %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh -i /path/<key file>.pem <username>@<node2 DNS name> "mkdir certs"
~~~

{% include copy-clipboard.html %}
~~~ shell
$ scp -i /path/<key file>.pem \
certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 DNS name>:~/certs
~~~

{% else %}
{% include copy-clipboard.html %}
~~~ shell
$ ssh <username>@<node2 address> "mkdir certs"
~~~
~~~ shell
$ ssh <username>@<node2 address> "mkdir certs"
~~~

{% include copy-clipboard.html %}
~~~ shell
# Upload the CA certificate and node certificate and key:
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 address>:~/certs
~~~
{% include copy-clipboard.html %}
~~~ shell
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 address>:~/certs
~~~
{% endif %}

9. Repeat steps 6 - 8 for each additional node.

10. Create a client certificate and key for the `root` user:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach cert create-client \
root \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
~~~ shell
$ cockroach cert create-client \
root \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
~~~
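
Before moving on, it can help to sanity-check what is currently in the local `certs` directory; one option (a sketch, assuming the `cockroach` binary is on your local machine's `PATH`) is:

~~~ shell
$ cockroach cert list --certs-dir=certs
~~~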

11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload:

5 changes: 2 additions & 3 deletions _includes/v19.1/prod-deployment/secure-scale-cluster.md
@@ -29,17 +29,16 @@ For each additional node you want to add to the cluster, complete the following

If you get a permissions error, prefix the command with `sudo`.

4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
4. Run the [`cockroach start`](start-a-node.html) command, setting `--advertise-addr` to the new node's address and `--join` to the addresses of the three existing nodes (also include `--locality` if you set it earlier).

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start \
--certs-dir=certs \
--advertise-addr=<node4 address> \
--locality=<key-value pairs> \
--join=<node1 address>,<node2 address>,<node3 address> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address> \
--background
~~~
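
As an illustration only (the addresses and locality tiers below are invented, and flags such as `--cache` and `--max-sql-memory` are left at their defaults here), the filled-in command for a fourth node might look like:

~~~ shell
$ cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.0.4 \
--locality=cloud=aws,region=us-east-1,zone=us-east-1a \
--join=10.0.0.1,10.0.0.2,10.0.0.3 \
--background
~~~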

23 changes: 8 additions & 15 deletions _includes/v19.1/prod-deployment/secure-test-cluster.md
@@ -1,12 +1,14 @@
CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.

To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
When using a load balancer, issue commands to the load balancer's address rather than to an individual node; the load balancer then routes traffic to the nodes.

1. On your local machine, launch the built-in SQL client:
Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:

1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs --host=<address of any node>
$ cockroach sql --certs-dir=certs --host=<address of load balancer>
~~~

2. Create a `securenodetest` database:
@@ -16,16 +18,7 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
> CREATE DATABASE securenodetest;
~~~

3. Use `\q` to exit the SQL shell.

4. Launch the built-in SQL client against a different node:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs --host=<address of different node>
~~~

5. View the cluster's databases, which will include `securenodetest`:
3. View the cluster's databases, which will include `securenodetest`:

{% include copy-clipboard.html %}
~~~ sql
@@ -45,4 +38,4 @@ To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) lo
(5 rows)
~~~

6. Use `\q` to exit the SQL shell.
4. Use `\q` to exit the SQL shell.
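
If you prefer a non-interactive check (for example, from a script), the same connection flags work with a one-off statement; a sketch using `--execute`:

~~~ shell
$ cockroach sql --certs-dir=certs --host=<address of load balancer> --execute="SHOW DATABASES;"
~~~
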
@@ -29,7 +29,7 @@ CockroachDB offers a pre-built `workload` binary for Linux that includes several
--init \
--duration=20m \
--tolerate-errors \
"postgresql://root@<IP ADDRESS OF LOAD BALANCER:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
"postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
~~~

This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
20 changes: 15 additions & 5 deletions _includes/v19.1/prod-deployment/synchronize-clocks.md
@@ -73,19 +73,29 @@ CockroachDB requires moderate levels of [clock synchronization](recommended-prod

7. Repeat these steps for each machine where a CockroachDB node will run.

{% elsif page.title contains "Google" %}
{% elsif page.title contains "Google" %}

Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:

- [Configure each GCE instances to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances).
- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks).
- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances).
- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, all AWS machines should use the internal [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service), and all other non-GCE machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks).

{{site.data.alerts.callout_info}}
The Google and Amazon services handle <a href="https://developers.google.com/time/smear">"smearing" the leap second</a> in compatible ways.
{{site.data.alerts.end}}

{% elsif page.title contains "AWS" %}

Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.

- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles <a href="https://developers.google.com/time/smear">"smearing" the leap second</a>.
- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
- Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
- To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, all GCE machines should use [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances) and all other non-AWS machines should use [Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks).
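
Taken together, the checks from the two sub-steps above might look like this on an instance (a sketch; exact output formatting can vary by `chrony` version):

~~~ shell
# Confirm the Amazon Time Sync Service entry is present in /etc/chrony.conf.
$ grep "^server 169.254.169.123" /etc/chrony.conf

# Confirm chrony has selected it as the preferred source (marked with *).
$ chronyc sources -v | grep "169.254.169.123"
~~~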

{{site.data.alerts.callout_info}}
The Google and Amazon services handle <a href="https://developers.google.com/time/smear">"smearing" the leap second</a> in compatible ways.
{{site.data.alerts.end}}

{% elsif page.title contains "Azure" %}
