---
title: Deploy CockroachDB on AWS EC2
summary: Learn how to deploy CockroachDB on Amazon's AWS EC2 platform.
toc: true
toc_not_nested: true
ssh-link:
---

Secure | Insecure

This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic.

If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select Insecure above for instructions.

## Requirements

{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %}

## Recommendations

{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %}

- All Amazon EC2 instances running CockroachDB should be members of the same security group.

## Step 1. Create instances

Open the Amazon EC2 console and launch an instance for each node you plan to have in your cluster. If you plan to run our sample workload against the cluster, create a separate instance for that workload.

- Run at least 3 nodes to ensure survivability.

- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not.

- Use m (general purpose), c (compute-optimized), or i (storage-optimized) instance types, with SSD-backed EBS volumes or Instance Store volumes. For example, Cockroach Labs has used c5d.4xlarge (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing.

- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group.

- Make sure all your instances are in the same security group.

    - If you are creating a new security group, add the inbound rules from the next step. Otherwise, note the ID of the security group.

- When creating the instances, you will download a private key file used to securely connect to them. Decide where to place this file, and note the file path for later commands.

For more details, see Hardware Recommendations and Cluster Topology.
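
If you prefer to script this step with the AWS CLI instead of the console, a minimal sketch like the following launches comparable instances. The angle-bracketed values are placeholders for your own AMI, key pair, security group, and subnet, and c5d.4xlarge is simply the instance type mentioned above:

~~~ shell
# Launch 3 instances to run CockroachDB nodes.
aws ec2 run-instances \
  --image-id <AMI ID> \
  --count 3 \
  --instance-type c5d.4xlarge \
  --key-name <key pair name> \
  --security-group-ids <security group ID> \
  --subnet-id <subnet ID>
~~~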

## Step 2. Configure your network

CockroachDB requires TCP communication on two ports:

- 26257 for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes
- 8080 for exposing your Admin UI, and for routing from the load balancer to the health check

If you haven't already done so, create inbound rules for your security group.

### Inter-node and load balancer-node communication

| Field | Recommended Value |
|-------|-------------------|
| Type | Custom TCP Rule |
| Protocol | TCP |
| Port Range | 26257 |
| Source | The ID of your security group (e.g., sg-07ab277a) |

### Application data

| Field | Recommended Value |
|-------|-------------------|
| Type | Custom TCP Rule |
| Protocol | TCP |
| Port Range | 26257 |
| Source | Your application's IP ranges |

If you plan to run our sample workload on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance.
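
If you use the AWS CLI, you can also print the private IP address directly; the instance ID below is a placeholder for the workload instance's ID:

~~~ shell
# Print the internal (private) IP address of the workload instance.
aws ec2 describe-instances \
  --instance-ids <instance ID> \
  --query 'Reservations[].Instances[].PrivateIpAddress' \
  --output text
~~~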

### Admin UI

| Field | Recommended Value |
|-------|-------------------|
| Type | Custom TCP Rule |
| Protocol | TCP |
| Port Range | 8080 |
| Source | Your network's IP ranges |

You can set your network IP by selecting "My IP" in the Source field.

### Load balancer-health check communication

| Field | Recommended Value |
|-------|-------------------|
| Type | Custom TCP Rule |
| Protocol | TCP |
| Port Range | 8080 |
| Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) |

To get the IP range of a VPC, open the Amazon VPC console and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console.
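
If you manage the security group from the AWS CLI rather than the console, rules equivalent to the four tables above can be added with `aws ec2 authorize-security-group-ingress`. This is a minimal sketch only; the angle-bracketed values (security group ID, application and network CIDRs, VPC ID and CIDR) are placeholders you must replace with your own:

~~~ shell
# Inter-node and load balancer-node communication on 26257:
# the source is the security group itself.
aws ec2 authorize-security-group-ingress \
  --group-id <security group ID> \
  --protocol tcp --port 26257 \
  --source-group <security group ID>

# Application data on 26257, from your application's IP ranges.
aws ec2 authorize-security-group-ingress \
  --group-id <security group ID> \
  --protocol tcp --port 26257 \
  --cidr <application CIDR>

# Admin UI on 8080, from your network's IP ranges.
aws ec2 authorize-security-group-ingress \
  --group-id <security group ID> \
  --protocol tcp --port 8080 \
  --cidr <network CIDR>

# Load balancer-health check communication on 8080, from the VPC's IP range.
# Look up the VPC's CIDR block first:
aws ec2 describe-vpcs --vpc-ids <VPC ID> \
  --query 'Vpcs[0].CidrBlock' --output text

aws ec2 authorize-security-group-ingress \
  --group-id <security group ID> \
  --protocol tcp --port 8080 \
  --cidr <VPC CIDR>
~~~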

## Step 3. Synchronize clocks

{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %}

## Step 4. Set up load balancing

Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:

  • Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).

  • Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.

AWS offers fully-managed load balancing to distribute traffic between instances.

1. Add AWS load balancing. Be sure to:
    - Select a Network Load Balancer (not an Application Load Balancer, as in the linked AWS instructions) and use the ports we specify below.
    - Select the VPC and all availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 console.
    - Set the load balancer port to 26257.
    - Create a new target group that uses TCP port 26257. Traffic from your load balancer is routed to this target group, which contains your instances.
    - Configure health checks to use HTTP port 8080 and path /health?ready=1. This health endpoint ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests.
    - Register your instances with the target group you created, specifying port 26257. You can add and remove instances later.

    A CLI sketch of these steps appears after this list.

2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) IP address for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name.
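
If you prefer to script these steps with the AWS CLI instead of the console wizard, the following sketch creates a comparable Network Load Balancer. The angle-bracketed values are placeholders, the load balancer and target group names are illustrative, and an internal scheme is assumed so that clients connect over the VPC's private addresses; treat this as a starting point rather than a substitute for the steps above:

~~~ shell
# Create a Network Load Balancer across your instances' subnets.
aws elbv2 create-load-balancer \
  --name cockroachdb-nlb \
  --type network \
  --scheme internal \
  --subnets <subnet ID 1> <subnet ID 2> <subnet ID 3>

# Create a TCP target group on 26257 with the HTTP health check on 8080.
aws elbv2 create-target-group \
  --name cockroachdb-nodes \
  --protocol TCP --port 26257 \
  --vpc-id <VPC ID> \
  --target-type instance \
  --health-check-protocol HTTP \
  --health-check-port 8080 \
  --health-check-path "/health?ready=1"

# Register the CockroachDB instances with the target group, using the ARN
# returned by the previous command.
aws elbv2 register-targets \
  --target-group-arn <target group ARN> \
  --targets Id=<instance ID 1>,Port=26257 Id=<instance ID 2>,Port=26257 Id=<instance ID 3>,Port=26257

# Forward load balancer port 26257 to the target group.
aws elbv2 create-listener \
  --load-balancer-arn <load balancer ARN> \
  --protocol TCP --port 26257 \
  --default-actions Type=forward,TargetGroupArn=<target group ARN>
~~~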

{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}}

## Step 5. Generate certificates

{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %}

## Step 6. Start nodes

{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %}

## Step 7. Initialize the cluster

{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %}

## Step 8. Test your cluster

{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %}

## Step 9. Run a sample workload

{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %}

## Step 10. Monitor the cluster

In the Target Groups section of the Amazon EC2 console, check the health of your instances by inspecting your target group and opening the Targets tab.
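
Equivalently, if you use the AWS CLI, you can check target health with `aws elbv2 describe-target-health`; the angle-bracketed ARN is a placeholder for your target group's ARN:

~~~ shell
# Show the health status reported for each registered instance.
aws elbv2 describe-target-health \
  --target-group-arn <target group ARN>
~~~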

{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %}

## Step 11. Scale the cluster

Before adding a new node, create a new instance as you did earlier. Then generate and upload a certificate and key for the new node.

{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %}

## Step 12. Use the database

{% include {{ page.version.version }}/prod-deployment/use-cluster.md %}

## See also

{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %}