Describe how to deploy machine API with AWS machine controller over kubernetes
ingvagabund committed Jun 24, 2019
1 parent cf06d47 commit f123c8e
Showing 17 changed files with 546 additions and 243 deletions.
184 changes: 137 additions & 47 deletions README.md
One needs to run the `imagebuilder` command instead of the `docker build`.

Note: this info is RH only, it needs to be backported every time the `README.md` is synced with the upstream one.

## Deploy machine API plane with minikube

1. **Install kvm**


2. **Deploying the cluster**

To install minikube `v1.1.0`, you can run:

```sh
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

To deploy the cluster:

```sh
$ minikube start --vm-driver kvm2 --kubernetes-version v1.13.1 --v 5
$ eval $(minikube docker-env)
```

3. **Deploying machine API controllers**

For development purposes the AWS machine controller itself will run outside of the machine API stack.
Otherwise, Docker images need to be built, pushed to a Docker registry, and deployed within the stack.

To deploy the stack:
```sh
kustomize build config | kubectl apply -f -
```

4. **Deploy secret with AWS credentials**

The AWS actuator assumes the existence of a secret (referenced in the machine object) with base64-encoded credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials-secret
  namespace: default
type: Opaque
data:
  aws_access_key_id: FILLIN
  aws_secret_access_key: FILLIN
```
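The `FILLIN` values must be the base64 encoding of the raw credentials with no trailing newline (which `echo -n` suppresses). A quick sketch using a placeholder key ID:

```sh
# Encode a placeholder access key ID; pipe your real values the same way.
echo -n 'AKIAIOSFODNN7EXAMPLE' | base64
# → QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
```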

You can use the `examples/render-aws-secrets.sh` script to generate the secret:
```sh
./examples/render-aws-secrets.sh examples/addons.yaml | kubectl apply -f -
```

5. **Provision AWS resources**

The actuator expects certain resources to already exist in AWS, among them:
- a VPC
- subnets
- security groups

To create them, you can run:

```sh
$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh install
```

To delete the resources, you can run:

```sh
$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh destroy
```

All machine manifests expect `ENVIRONMENT_ID` to be set to `aws-actuator-k8s`.

## Test locally built aws actuator

1. **Tear down machine-controller**

The deployed machine API plane (the `machine-api-controllers` deployment) runs the `machine-controller` among other controllers. To run a locally built one instead, edit the `machine-api-controllers` deployment and remove the `machine-controller` container from it.
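A minimal non-interactive sketch of the same edit, assuming the container sits at index 0 of the pod spec and the deployment lives in the current namespace (verify both against your cluster before running):

```sh
# Hypothetical: drop the machine-controller container from the deployment
# via a JSON patch; the /0 index must match the container's position.
kubectl patch deployment machine-api-controllers --type=json \
  -p '[{"op": "remove", "path": "/spec/template/spec/containers/0"}]'
```

Alternatively, `kubectl edit deployment machine-api-controllers` and delete the container entry by hand.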

1. **Build and run the AWS actuator outside of the cluster**

```sh
$ go build -o bin/manager sigs.k8s.io/cluster-api-provider-aws/cmd/manager
```

```sh
$ ./bin/manager --kubeconfig ~/.kube/config --logtostderr -v 5 -alsologtostderr
```

1. **Deploy k8s apiserver through machine manifest**:

To deploy the user data secret that initializes the Kubernetes apiserver (under [config/master-user-data-secret.yaml](config/master-user-data-secret.yaml)):

```sh
$ kubectl apply -f config/master-user-data-secret.yaml
```

To deploy the Kubernetes master machine (under [config/master-machine.yaml](config/master-machine.yaml)):

```sh
$ kubectl apply -f config/master-machine.yaml
```

1. **Pull kubeconfig from created master machine**

The master's public IP can be found in the AWS console. Once you have it, collect the kubeconfig by running:

```sh
$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
```
Once done, you can access the cluster via `kubectl`. E.g.
```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```

## Deploy k8s cluster in AWS with machine API plane deployed

1. **Generate bootstrap user data**

To generate the bootstrap script for the machine API plane, run:

```sh
$ ./config/generate-bootstrap.sh
```

The script requires `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to be set.
It generates the `config/bootstrap.yaml` secret consumed by the master machine manifest
under `config/master-machine.yaml`.

The generated bootstrap secret contains user data responsible for:
- deploying the kube-apiserver
- deploying the machine API plane with the AWS machine controllers
- generating the worker machine user data secret that deploys a node
- deploying the worker machineset
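For orientation, the generated `config/bootstrap.yaml` plausibly follows the usual user-data secret shape (a hypothetical sketch; the secret name and field are assumptions, mirroring the worker secret rendered in `config/bootstrap.sh`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: master-user-data-secret   # assumed name, matching config/master-user-data-secret.yaml
  namespace: default
type: Opaque
data:
  userData: <base64-encoded bootstrap script>
```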

1. **Deploy machine API plane through machine manifest**:

First, deploy the generated bootstrap secret:

```sh
$ kubectl apply -f config/bootstrap.yaml
```

Then, deploy the master machine (under [config/master-machine.yaml](config/master-machine.yaml)):

```sh
$ kubectl apply -f config/master-machine.yaml
```

1. **Pull kubeconfig from created master machine**

The master's public IP can be found in the AWS console. Once you have it, collect the kubeconfig by running:

```sh
$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
```

Once done, you can access the cluster via `kubectl`. E.g.

```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```

# Upstream Implementation
Other branches of this repository may choose to track the upstream
155 changes: 155 additions & 0 deletions config/bootstrap.sh
#!/bin/bash

cat <<HEREDOC > /root/user-data.sh
#!/bin/bash
################################################
######## Install packages and binaries
################################################
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
yum install -y docker
systemctl enable docker
systemctl start docker
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 kubernetes-cni-0.6.0-0 --disableexcludes=kubernetes
cat <<EOF > /etc/default/kubelet
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd
EOF
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases/latest |\
grep browser_download |\
grep linux |\
cut -d '"' -f 4 |\
xargs curl -O -L
chmod u+x kustomize_*_linux_amd64
sudo mv kustomize_*_linux_amd64 /usr/bin/kustomize
sudo yum install -y git
################################################
######## Deploy kubernetes master
################################################
kubeadm init --apiserver-bind-port 8443 --token 2iqzqm.85bs0x6miyx1nm7l --apiserver-cert-extra-sans=$(curl icanhazip.com) --pod-network-cidr=192.168.0.0/16 -v 6
# Enable networking by default.
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml --kubeconfig /etc/kubernetes/admin.conf
# Binaries expected under /opt/cni/bin are actually under /usr/libexec/cni
if [[ ! -e /opt/cni/bin ]]; then
mkdir -p /opt/cni/bin
cp /usr/libexec/cni/bridge /opt/cni/bin
cp /usr/libexec/cni/loopback /opt/cni/bin
cp /usr/libexec/cni/host-local /opt/cni/bin
fi
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config
################################################
######## Deploy machine-api plane
################################################
git clone https://github.com/ingvagabund/cluster-api-provider-aws.git
cd cluster-api-provider-aws
git checkout k8s-bootstrap
cat <<EOF > secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials-secret
  namespace: default
type: Opaque
data:
  aws_access_key_id: FILLIN
  aws_secret_access_key: FILLIN
EOF
sudo kubectl apply -f secret.yaml
kustomize build config | sudo kubectl apply -f -
kubectl apply -f config/master-user-data-secret.yaml
kubectl apply -f config/master-machine.yaml
################################################
######## generate worker machineset user data
################################################
cat <<WORKERSET > /root/workerset-user-data.sh
#!/bin/bash
cat <<WORKERHEREDOC > /root/workerset-user-data.sh
#!/bin/bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
yum install -y docker
systemctl enable docker
systemctl start docker
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubernetes-cni-0.6.0-0 --disableexcludes=kubernetes
cat <<EOF > /etc/default/kubelet
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd
EOF
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
kubeadm join $(curl icanhazip.com):8443 --token 2iqzqm.85bs0x6miyx1nm7l --discovery-token-unsafe-skip-ca-verification
WORKERHEREDOC
bash /root/workerset-user-data.sh > /root/workerset-user-data.logs 2>&1
WORKERSET
################################################
######## deploy worker user data and machineset
################################################
# NOTE: The secret is rendered twice, the first time when it's run during bootstrapping.
# During bootstrapping, /root/workerset-user-data.sh does not exist yet.
# So \$ needs to be used so the command is executed the second time
# the script is executed.
cat <<EOF > /root/worker-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: worker-user-data-secret
  namespace: default
type: Opaque
data:
  userData: \$(cat /root/workerset-user-data.sh | base64 --w=0)
EOF
sudo kubectl apply -f /root/worker-secret.yaml
sudo kubectl apply -f config/worker-machineset.yaml
HEREDOC

bash /root/user-data.sh > /root/user-data.logs 2>&1
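The `\$` escaping noted in the comments above relies on two-stage heredoc expansion: an unescaped `$(...)` expands when the outer script is generated, while `\$(...)` survives into the generated file and expands only when that file runs. A minimal standalone sketch:

```sh
# Stage one: generate /tmp/stage2.sh; $(...) expands now, \$(...) is kept.
cat <<OUTER > /tmp/stage2.sh
echo "generated-at: $(echo first-pass)"
echo "executed-at: \$(echo second-pass)"
OUTER
# Stage two: the escaped substitution expands only on execution.
sh /tmp/stage2.sh
# → generated-at: first-pass
# → executed-at: second-pass
```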