Merge pull request #1216 from CecileRobertMichon/update-external-cp
Update external cloud provider flavor to use CRS and add test
k8s-ci-robot authored Mar 16, 2021
2 parents 89e0237 + e1149af commit 289f521
Showing 15 changed files with 1,067 additions and 29 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -86,7 +86,7 @@ GO_APIDIFF_VER := latest
GO_APIDIFF_BIN := go-apidiff
GO_APIDIFF := $(TOOLS_BIN_DIR)/$(GO_APIDIFF_BIN)

-GINKGO_VER := v1.15.1
+GINKGO_VER := v1.14.2
GINKGO_BIN := ginkgo
GINKGO := $(TOOLS_BIN_DIR)/$(GINKGO_BIN)-$(GINKGO_VER)

12 changes: 0 additions & 12 deletions docs/book/src/topics/external-cloud-provider.md
@@ -2,18 +2,6 @@

To deploy a cluster using [external cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), create a cluster configuration with the [external cloud provider template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/master/templates/cluster-template-external-cloud-provider.yaml).

-After control plane is up and running, deploy external cloud provider components (`cloud-controller-manager` and `cloud-node-manager`) using:
-
-```bash
-kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig \
-  apply -f templates/addons/cloud-controller-manager.yaml
-```
-
-```bash
-kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig \
-  apply -f templates/addons/cloud-node-manager.yaml
-```
-
After the components are deployed, you should see the following pods in the `Running` state:

13 changes: 13 additions & 0 deletions docs/topics/kubernetes-developers.md
@@ -110,3 +110,16 @@ spec:


Finally, deploy your manifests and check that your images were deployed by connecting to the workload cluster and running `kubectl describe -n kube-system <kube-controller-manager-pod-id>`.

## Testing the out-of-tree cloud provider

To test changes made to the [Azure cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), first build and push images for `cloud-controller-manager` and/or `cloud-node-manager` from the root of the cloud-provider-azure repository.

Then, use the `external-cloud-provider` flavor to create a cluster:

```bash
AZURE_CLOUD_CONTROLLER_MANAGER_IMG=myrepo/my-ccm:v0.0.1 \
AZURE_CLOUD_NODE_MANAGER_IMG=myrepo/my-cnm:v0.0.1 \
CLUSTER_TEMPLATE=cluster-template-external-cloud-provider.yaml \
make create-workload-cluster
```
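These overrides work because the template consumes the variables with `${VAR:=default}` substitution on its `image:` fields, so any variable left unset falls back to the upstream MCR image. A minimal sketch of that fallback rule, assuming standard POSIX `:=` parameter-expansion semantics (which this default syntax mirrors):

```shell
# Fallback: with the variable unset, the default after ":=" is used
# (and assigned), mirroring the ${VAR:=default} fields in the template.
unset AZURE_CLOUD_CONTROLLER_MANAGER_IMG
echo "${AZURE_CLOUD_CONTROLLER_MANAGER_IMG:=mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v0.7.2}"

# Override: a value set in the environment wins over the default.
AZURE_CLOUD_NODE_MANAGER_IMG=myrepo/my-cnm:v0.0.1
echo "${AZURE_CLOUD_NODE_MANAGER_IMG:=mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v0.7.2}"
```

The first `echo` prints the upstream default; the second prints the custom `myrepo/my-cnm:v0.0.1` image.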
336 changes: 336 additions & 0 deletions templates/cluster-template-external-cloud-provider.yaml
@@ -2,6 +2,7 @@ apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  labels:
    ccm: external
    cni: calico
  name: ${CLUSTER_NAME}
  namespace: default
@@ -197,3 +198,338 @@ spec:
            cloud-provider: external
          name: '{{ ds.meta_data["local_hostname"] }}'
      useExperimentalRetryJoin: true
---
apiVersion: addons.cluster.x-k8s.io/v1alpha4
kind: ClusterResourceSet
metadata:
  name: crs-ccm
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      ccm: external
  resources:
  - kind: ConfigMap
    name: cloud-controller-manager-addon
  strategy: ApplyOnce
---
apiVersion: addons.cluster.x-k8s.io/v1alpha4
kind: ClusterResourceSet
metadata:
  name: crs-node-manager
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      ccm: external
  resources:
  - kind: ConfigMap
    name: cloud-node-manager-addon
  strategy: ApplyOnce
---
apiVersion: v1
data:
  cloud-controller-manager.yaml: |
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cloud-controller-manager
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:cloud-controller-manager
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        k8s-app: cloud-controller-manager
    rules:
    - apiGroups:
      - ""
      resources:
      - events
      verbs:
      - create
      - patch
      - update
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - "*"
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - list
      - patch
      - update
      - watch
    - apiGroups:
      - ""
      resources:
      - services/status
      verbs:
      - list
      - patch
      - update
      - watch
    - apiGroups:
      - ""
      resources:
      - serviceaccounts
      verbs:
      - create
      - get
      - list
      - watch
      - update
    - apiGroups:
      - ""
      resources:
      - persistentvolumes
      verbs:
      - get
      - list
      - update
      - watch
    - apiGroups:
      - ""
      resources:
      - endpoints
      verbs:
      - create
      - get
      - list
      - watch
      - update
    - apiGroups:
      - ""
      resources:
      - secrets
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - coordination.k8s.io
      resources:
      - leases
      verbs:
      - get
      - create
      - update
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: system:cloud-controller-manager
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:cloud-controller-manager
    subjects:
    - kind: ServiceAccount
      name: cloud-controller-manager
      namespace: kube-system
    - kind: User
      name: cloud-controller-manager
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: system:cloud-controller-manager:extension-apiserver-authentication-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: cloud-controller-manager
      namespace: kube-system
    - apiGroup: ""
      kind: User
      name: cloud-controller-manager
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: cloud-controller-manager
      namespace: kube-system
      labels:
        tier: control-plane
        component: cloud-controller-manager
    spec:
      priorityClassName: system-node-critical
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      serviceAccountName: cloud-controller-manager
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: cloud-controller-manager
        image: ${AZURE_CLOUD_CONTROLLER_MANAGER_IMG:=mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v0.7.2}
        imagePullPolicy: IfNotPresent
        command: ["cloud-controller-manager"]
        args:
        - "--allocate-node-cidrs=true"
        - "--cloud-config=/etc/kubernetes/azure.json"
        - "--cloud-provider=azure"
        - "--cluster-cidr=10.244.0.0/16"
        - "--cluster-name=${CLUSTER_NAME}"
        - "--controllers=*,-cloud-node" # disable cloud-node controller
        - "--configure-cloud-routes=true" # "false" for Azure CNI and "true" for other network plugins
        - "--leader-elect=true"
        - "--route-reconciliation-period=10s"
        - "--v=2"
        - "--port=10267"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: "4"
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10267
          initialDelaySeconds: 20
          periodSeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
        - name: etc-ssl
          mountPath: /etc/ssl
          readOnly: true
        - name: msi
          mountPath: /var/lib/waagent/ManagedIdentity-Settings
          readOnly: true
      volumes:
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
      - name: etc-ssl
        hostPath:
          path: /etc/ssl
      - name: msi
        hostPath:
          path: /var/lib/waagent/ManagedIdentity-Settings
kind: ConfigMap
metadata:
  annotations:
    note: generated
  labels:
    type: generated
  name: cloud-controller-manager-addon
  namespace: default
---
apiVersion: v1
data:
  cloud-node-manager.yaml: |
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: cloud-node-manager
      name: cloud-node-manager
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cloud-node-manager
      labels:
        k8s-app: cloud-node-manager
    rules:
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["watch", "list", "get", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cloud-node-manager
      labels:
        k8s-app: cloud-node-manager
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cloud-node-manager
    subjects:
    - kind: ServiceAccount
      name: cloud-node-manager
      namespace: kube-system
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: cloud-node-manager
      namespace: kube-system
      labels:
        component: cloud-node-manager
    spec:
      selector:
        matchLabels:
          k8s-app: cloud-node-manager
      template:
        metadata:
          labels:
            k8s-app: cloud-node-manager
          annotations:
            cluster-autoscaler.kubernetes.io/daemonset-pod: "true"
        spec:
          priorityClassName: system-node-critical
          serviceAccountName: cloud-node-manager
          hostNetwork: true # required to fetch correct hostname
          nodeSelector:
            kubernetes.io/os: linux
          tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - operator: "Exists"
            effect: NoExecute
          - operator: "Exists"
            effect: NoSchedule
          containers:
          - name: cloud-node-manager
            image: ${AZURE_CLOUD_NODE_MANAGER_IMG:=mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v0.7.2}
            imagePullPolicy: IfNotPresent
            command:
            - cloud-node-manager
            - --node-name=$(NODE_NAME)
            env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            resources:
              requests:
                cpu: 50m
                memory: 50Mi
              limits:
                cpu: 2000m
                memory: 512Mi
kind: ConfigMap
metadata:
  annotations:
    note: generated
  labels:
    type: generated
  name: cloud-node-manager-addon
  namespace: default
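Taken together, the additions to this template follow one pattern: the Cluster carries a label, and each ClusterResourceSet selects on that label to apply a ConfigMap of manifests to the matching workload cluster. Reduced to its skeleton (the resource names here are illustrative, not from this commit):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: my-cluster            # illustrative name
  labels:
    ccm: external             # opts this cluster into the CRS below
---
apiVersion: addons.cluster.x-k8s.io/v1alpha4
kind: ClusterResourceSet
metadata:
  name: my-addon-crs          # illustrative name
spec:
  clusterSelector:
    matchLabels:
      ccm: external           # must match the Cluster's labels
  resources:
  - kind: ConfigMap
    name: my-addon            # ConfigMap holding the manifests to apply
  strategy: ApplyOnce         # applied once, not continuously reconciled
```

With `strategy: ApplyOnce`, the resources are applied to each matching cluster a single time rather than being reconciled on every change.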