Terraform modules for running XRd in cloud environments.
Currently this only covers running XRd in an AWS EKS cluster.
This repository is intended to be used for two purposes:
- As an illustrative example of what's needed to set up XRd in AWS EKS, to be read alongside the other XRd documentation.
- As a simple way to launch a dummy XRd deployment in AWS EKS for experimentation and exploration.
Use of these modules in production is not supported by Cisco.
The following CLI tools are required to use the modules and scripts in this repository:
The `packer` tool is required if you want to use the quick start script; see the AMI section below for more details.
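As a quick sanity check that the tools used in this walkthrough are installed and on your `PATH`:

```
terraform version
aws --version
kubectl version --client
packer --version   # Only needed for the quick start script's AMI build.
```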
In addition, the following tools are recommended:
The Terraform modules in this repository rely on the AMIs used for worker nodes to be optimized for XRd. The easiest way to achieve this is to use the XRd Packer templates to generate an AMI suitable for running XRd.
The Terraform module will automatically pick up AMIs generated by this tool in your AWS account.
N.B. These AMIs are tied to a particular Kubernetes version, so multiple AMIs may be required if you want to run EKS clusters at different Kubernetes versions.
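To check which AMIs are already available in your account (for example, to see whether an AMI has been built for your target Kubernetes version), one option is to list the images you own; the query below is just an illustrative formatting choice:

```
# List AMIs owned by this account, newest first.
aws ec2 describe-images \
  --owners self \
  --query 'reverse(sort_by(Images, &CreationDate))[].{Name:Name,Id:ImageId,Created:CreationDate}' \
  --output table
```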
The quick start script will build an AMI using the XRd Packer templates for you if one isn't detected in your AWS account. To run this, the `packer` tool must also be installed.
To bring up a dummy XRd deployment in AWS EKS, the following steps are required:
- Create an AWS ECR repository, and upload an XRd container image to it.
  - First, an XRd vRouter image should be obtained from Cisco.
  - Then, the `publish-ecr` script in this repository can be used to create the repository and upload the image (you can verify the upload as shown after this list).
  - Example:

    ```
    ./publish-ecr xrd-vrouter-container-x86.7.9.1.tgz
    ```
- Run the `aws-quickstart` script.
  - This has three mandatory arguments: the username and password to be used for the XRd root user, and a comma-separated list of IPv4 CIDR blocks to allow SSH access to the Bastion instance.
  - This will first build an AMI using the XRd Packer templates if one is not detected.
  - Example:

    ```
    ./aws-quickstart -u user -p password -b 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
    ```
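To verify the image upload from the first step, you can list your ECR repositories and check the pushed tags; the repository name below is a placeholder, since the name depends on what `publish-ecr` created:

```
# List the ECR repositories in the account.
aws ecr describe-repositories --query 'repositories[].repositoryName'

# Inspect the image tags in a repository (name is a placeholder).
aws ecr describe-images --repository-name <repository-name> \
  --query 'imageDetails[].imageTags'
```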
The quick start will bring up an EKS cluster called 'xrd-cluster', some worker nodes, and a dummy topology with a pair of back-to-back XRd instances running an overlay network between them - the Overlay example topology described below.
To interact with the cluster, run:

```
aws eks update-kubeconfig --name $(terraform -chdir=examples/overlay/workload output -raw cluster_name)
```

and then interact with the cluster as normal using `kubectl` commands, e.g. to get to an XRd console you can use:

```
kubectl exec -it xrd1-xrd-vrouter-0 -- xr
```
N.B. In production use-cases it's recommended to set up SSH access to XRd routers; `kubectl exec` is intended for lab usage only.
To tear down all of the resources, run:
```
./aws-quickstart --destroy
```
Several example Terraform configurations are provided; each of these either creates a set of cloud infrastructure resources, or a set of workload resources.
Together a stack of example configurations forms a complete and functional set of resources, representing an example XRd deployment.
- Bootstrap: this configuration forms a common base; other example configurations are layered on top of this base.
- Singleton: this runs an XRd Control Plane, or XRd vRouter workload on a single worker node.
- Overlay: this launches three worker nodes, and deploys a pair of back-to-back XRd vRouter instances running an overlay network using GRE, IS-IS and L3VPN, as well as a pair of Linux containers that communicate via the overlay network.
- HA: this demonstrates the use of XRd vRouter as a redundant Cloud Router.
To launch an example, first make sure you have met all the requirements listed above, including having an AMI suitable for running the required XRd platform available.
Once you have satisfied the requirements, each example can be launched like any other Terraform configuration. The Bootstrap configuration serves as a base for other configurations; an infrastructure configuration should be layered on top of the Bootstrap configuration, and a workload configuration should be layered on top of the associated infrastructure configuration.
The following sections walk through instantiating the Overlay example. More details on all the Terraform command options can be found in the Terraform CLI documentation.
Firstly, clone this repository.
The Bootstrap configuration must be run first; this provisions a VPC, EKS cluster, and Bastion node (for worker node access).
```
terraform -chdir=examples/bootstrap init
terraform -chdir=examples/bootstrap apply
```
This accepts a number of input variables described in `variables.tf`. In particular, the `bastion_remote_access_cidr_blocks` variable is required, which is a list of IPv4 CIDR blocks to allow SSH access to the Bastion instance. Pass `null` to prevent access to the Bastion instance, or `["0.0.0.0/0"]` to allow SSH access from any IPv4 address.
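For example, to allow SSH access from a single address range (the CIDR block here is illustrative):

```
terraform -chdir=examples/bootstrap apply \
  -var='bastion_remote_access_cidr_blocks=["203.0.113.0/24"]'
```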
Terraform will show you a changeset and ask you to confirm that it should proceed. It takes around 15 minutes to bring up the configuration.
Then apply the Overlay infrastructure configuration. This creates the remaining cloud infrastructure resources necessary for running the Overlay workload.
```
terraform -chdir=examples/overlay/infra init
terraform -chdir=examples/overlay/infra apply
```
Finally, apply the Overlay workload configuration. This accepts a number of input variables described in `variables.tf` which may be of interest; e.g., it is necessary to set the IOS XR root username and password. To do so, create a file `vars.tfvars` with the desired configuration options following the variable definitions file format.
```
cat << EOF > vars.tfvars
xr_root_user = "user"
xr_root_password = "password"
EOF
terraform -chdir=examples/overlay/workload init
terraform -chdir=examples/overlay/workload apply -var-file=$PWD/vars.tfvars
```
Configuration options can also be set on the command line.
It should take less than a minute to apply the workload configuration. When this is complete, you may then configure `kubectl` so that you can connect to the cluster:

```
aws eks update-kubeconfig --name $(terraform -chdir=examples/overlay/workload output -raw cluster_name)
```
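As a quick check that the workload is up, you can list the pods; the XRd instance shown earlier (`xrd1-xrd-vrouter-0`) should appear in the output:

```
# The back-to-back XRd vRouter pods should be listed as Running.
kubectl get pods
```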
Once the topology has been launched, any changes you make to the configuration can be applied by modifying the configuration file and re-running the `terraform -chdir=examples/overlay/workload apply -var-file=$PWD/vars.tfvars` command - Terraform will compute the minimal diff required to satisfy your new configuration and apply it.
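To preview the changes Terraform would make before applying them, you can run a plan with the same variables:

```
# Show the pending actions without applying them.
terraform -chdir=examples/overlay/workload plan -var-file=$PWD/vars.tfvars
```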
When you've finished with the topology, it can be torn down with:
```
terraform -chdir=examples/overlay/workload destroy -var-file=$PWD/vars.tfvars
terraform -chdir=examples/overlay/infra destroy
terraform -chdir=examples/bootstrap destroy -var=bastion_remote_access_cidr_blocks=null
```
N.B. It is recommended to pass the same configuration values to `terraform destroy` as were passed to `terraform apply`: this ensures that any mandatory arguments are set (even if their values don't matter) and means that any automatic inference of values is the same (e.g. automatically picking up XRd Packer AMIs at the correct cluster version, which will fail if no such image is present even in destroy mode).
As well as the example configurations, this repository contains several Terraform modules to assist with deploying XRd on AWS. These are useful for those wanting to construct their own Terraform configurations for running XRd workloads.
The following "building block" modules are provided in the repository:
- VPC
- EKS
- IRSA
- EC2 Key Pair
- Bastion Node
- Worker Node
Each of these modules is focused on bringing up a constrained set of AWS resources.
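As an illustrative sketch only, a custom root configuration might compose these building blocks along the following lines, mirroring the heredoc pattern used earlier. The module paths and wiring here are hypothetical placeholders; see each module's `variables.tf` for its real interface.

```
cat << EOF > main.tf
# Hypothetical composition sketch: paths and inputs are placeholders,
# not the modules' documented interfaces.
module "vpc" {
  source = "./modules/aws/vpc" # assumed path
}

module "eks" {
  source = "./modules/aws/eks" # assumed path
  # Typically wired to outputs of the VPC module, e.g. subnet IDs.
}

module "worker" {
  source = "./modules/aws/node" # assumed path
  # Typically consumes the EKS cluster name and an XRd AMI ID
  # (e.g. via the node_ami variable mentioned in Troubleshooting).
}
EOF
```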
For more information on how to use these modules to build your own Terraform configurations, see the development page.
This section lists some common errors and how to fix them.
```
╷
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root module and configure the provider's
│ required arguments as described in the provider documentation.
│
╵
```
```
╷
│ Error: configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ AWS Error: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on <empty> line 0:
│ (source code not available)
╵
```
Your environment is not set up correctly to run AWS CLI commands. Configure your environment with your AWS account details. You may need to run `aws configure`, or specify the `AWS_PROFILE` environment variable.
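For example, assuming a named profile has been configured (the profile name is a placeholder), you can export it and verify that credentials resolve:

```
# Point the AWS CLI and the Terraform AWS provider at a configured profile.
export AWS_PROFILE=my-profile

# Verify that credentials resolve to the expected account identity.
aws sts get-caller-identity
```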
```
╷
│ Error: Your query returned no results. Please change your search criteria and try again.
│
│ with module.xrd_ami[0].data.aws_ami.this,
│ on ../../modules/aws/xrd-ami/main.tf line 12, in data "aws_ami" "this":
│ 12: data "aws_ami" "this" {
│
╵
```
There is no available AMI created by the XRd Packer templates with the requested Kubernetes version. Either:
- Create an AMI at the correct version using the XRd Packer templates.
- Specify an AMI ID to use for the worker nodes by setting the `node_ami` Terraform variable.
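For example, passing the variable to the configuration that creates the worker nodes (shown here against the Overlay infrastructure configuration; the AMI ID is a placeholder):

```
terraform -chdir=examples/overlay/infra apply -var=node_ami=ami-0123456789abcdef0
```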