pkg/types/config: Drop ParseConfig and other Parse* methods #403
Conversation
/lgtm
/retest Please review the full test history for this PR and help us cut down flakes.
e2e:
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
From the current e2e run and the OpenShift console:
$ export KUBECONFIG=/tmp/artifacts/installer/auth/kubeconfig
$ kubectl get --all-namespaces pods
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-apiserver-5h22s 1/1 Running 0 32m
kube-system kube-apiserver-hp9pl 1/1 Running 0 32m
kube-system kube-apiserver-zswsp 1/1 Running 0 32m
kube-system kube-controller-manager-f7574c6fc-lq54q 1/1 Running 0 33m
kube-system kube-core-operator-854857854d-ks4tg 1/1 Running 0 29m
kube-system kube-dns-787c975867-khg9l 3/3 Running 0 32m
kube-system kube-flannel-krsjx 2/2 Running 0 31m
kube-system kube-flannel-pnd7m 2/2 Running 2 21m
kube-system kube-flannel-q6kg9 2/2 Running 0 31m
kube-system kube-flannel-tfg5b 2/2 Running 0 31m
kube-system kube-flannel-tjqjg 2/2 Running 0 31m
kube-system kube-proxy-9tn44 1/1 Running 0 32m
kube-system kube-proxy-gbmnw 1/1 Running 0 21m
kube-system kube-proxy-r72bz 1/1 Running 0 33m
kube-system kube-proxy-rxhf2 1/1 Running 0 32m
kube-system kube-proxy-v9b84 1/1 Running 0 32m
kube-system kube-scheduler-78d86f9754-ch2wc 1/1 Running 0 32m
kube-system openshift-apiserver-lvwb7 1/1 Running 0 32m
kube-system openshift-apiserver-pt4x8 1/1 Running 0 32m
kube-system openshift-apiserver-t7qjh 1/1 Running 0 32m
kube-system openshift-controller-manager-d9787b9c-pdcpw 1/1 Running 0 33m
kube-system pod-checkpointer-gqf2z 1/1 Running 0 32m
kube-system pod-checkpointer-gqf2z-ip-10-0-16-249.ec2.internal 1/1 Running 0 32m
kube-system pod-checkpointer-nggdv 1/1 Running 0 33m
kube-system pod-checkpointer-nggdv-ip-10-0-8-240.ec2.internal 1/1 Running 0 32m
kube-system pod-checkpointer-w7k4x 1/1 Running 0 33m
kube-system pod-checkpointer-w7k4x-ip-10-0-38-177.ec2.internal 1/1 Running 0 32m
kube-system tectonic-network-operator-9z2fm 1/1 Running 0 32m
kube-system tectonic-network-operator-n288n 1/1 Running 0 32m
kube-system tectonic-network-operator-r2jn5 1/1 Running 0 32m
openshift-apiserver apiserver-k5t8g 0/1 ContainerCreating 0 12s
openshift-apiserver apiserver-k5w6g 0/1 ContainerCreating 0 12s
openshift-apiserver apiserver-p5ts2 0/1 ContainerCreating 0 12s
openshift-cluster-api clusterapi-apiserver-6b855f7bc5-l555n 2/2 Running 1 30m
openshift-cluster-api clusterapi-controllers-797f4d6967-vzplg 2/2 Running 0 29m
openshift-cluster-api machine-api-operator-5d85454676-ch28t 1/1 Running 0 31m
openshift-cluster-dns-operator cluster-dns-operator-6c4c47c596-hmtd7 1/1 Running 0 29m
openshift-cluster-ingress-operator cluster-ingress-operator-d5789858b-cccj4 1/1 Running 0 29m
openshift-cluster-network-operator cluster-network-operator-574998b758-87bps 1/1 Running 0 31m
openshift-cluster-samples-operator cluster-samples-operator-67788bc449-n24wn 1/1 Running 0 29m
openshift-cluster-version bootstrap-cluster-version-operator-ip-10-0-2-234.ec2.internal 1/1 Running 1 32m
openshift-cluster-version cluster-version-operator-54d6878787-rvpp5 1/1 Running 0 32m
openshift-controller-manager controller-manager-7t8ds 0/1 ContainerCreating 0 30m
openshift-controller-manager controller-manager-pdxkx 0/1 ContainerCreating 0 30m
openshift-controller-manager controller-manager-v9vb5 0/1 ContainerCreating 0 30m
openshift-core-operators openshift-cluster-kube-apiserver-operator-6dbbc8db87-v22tr 1/1 Running 0 31m
openshift-core-operators openshift-cluster-kube-controller-manager-operator-66d84c4sfjzd 1/1 Running 0 31m
openshift-core-operators openshift-cluster-kube-scheduler-operator-5b47cbf44d-xt5dt 1/1 Running 0 31m
openshift-core-operators openshift-cluster-openshift-apiserver-operator-58c696bfd9-szm2q 1/1 Running 0 31m
openshift-core-operators openshift-cluster-openshift-controller-manager-operator-6csgz6j 1/1 Running 0 31m
openshift-core-operators openshift-service-cert-signer-operator-66fcb486c8-hdwn2 1/1 Running 0 31m
openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9 0/1 CrashLoopBackOff 8 29m
openshift-ingress tectonic-ingress-controller-operator-599fdd5cff-mp798 1/1 Running 0 29m
openshift-kube-apiserver apiserver-b4c7cdc89-cv9xc 0/1 ContainerCreating 0 30m
openshift-kube-scheduler scheduler-fc69c9cd9-bjkvv 1/1 Running 0 30m
openshift-machine-config-operator machine-config-controller-5d9f77f479-mzgct 1/1 Running 0 28m
openshift-machine-config-operator machine-config-daemon-5l5vq 1/1 Running 0 26m
openshift-machine-config-operator machine-config-daemon-cjj4x 1/1 Running 0 26m
openshift-machine-config-operator machine-config-daemon-mbnzk 1/1 Running 0 21m
openshift-machine-config-operator machine-config-daemon-zsqs6 1/1 Running 0 26m
openshift-machine-config-operator machine-config-operator-77d88bf865-rl8tb 1/1 Running 0 31m
openshift-machine-config-operator machine-config-server-jbfv6 1/1 Running 0 27m
openshift-machine-config-operator machine-config-server-lxtjw 1/1 Running 0 27m
openshift-machine-config-operator machine-config-server-tjxqr 1/1 Running 0 27m
openshift-monitoring cluster-monitoring-operator-6b77bf9bd6-hnp66 1/1 Running 0 29m
openshift-monitoring prometheus-operator-5bf8644c75-fldg2 1/1 Running 0 18m
openshift-service-cert-signer apiservice-cabundle-injector-696b9d4c8f-rd5d7 1/1 Running 0 30m
openshift-service-cert-signer configmap-cabundle-injector-7b94c9949d-8r9h9 1/1 Running 0 30m
openshift-service-cert-signer service-serving-cert-signer-564c6b47b7-zm54g 1/1 Running 0 30m
tectonic-system kube-addon-operator-775d4c8f8d-frkqk 0/1 ImagePullBackOff 0 29m
$ kubectl logs -n openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9
time="2018-10-03T23:09:12Z" level=info msg="Cluster Image Registry Operator Version: c2753e9-dirty"
time="2018-10-03T23:09:12Z" level=info msg="Go Version: go1.10.3"
time="2018-10-03T23:09:12Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-10-03T23:09:12Z" level=info msg="operator-sdk Version: 0.0.6+git"
E1003 23:09:12.518994 1 memcache.go:153] couldn't get resource list for apps.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.521719 1 memcache.go:153] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.523561 1 memcache.go:153] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.525691 1 memcache.go:153] couldn't get resource list for image.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.535615 1 memcache.go:153] couldn't get resource list for network.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.538190 1 memcache.go:153] couldn't get resource list for oauth.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.662266 1 memcache.go:153] couldn't get resource list for project.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.683553 1 memcache.go:153] couldn't get resource list for quota.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.703499 1 memcache.go:153] couldn't get resource list for route.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.717847 1 memcache.go:153] couldn't get resource list for security.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.719404 1 memcache.go:153] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.720938 1 memcache.go:153] couldn't get resource list for user.openshift.io/v1: the server is currently unable to handle the request
time="2018-10-03T23:09:13Z" level=info msg="Metrics service cluster-image-registry-operator created"
time="2018-10-03T23:09:13Z" level=info msg="Watching rbac.authorization.k8s.io/v1, ClusterRole, , 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching rbac.authorization.k8s.io/v1, ClusterRoleBinding, , 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching v1, ConfigMap, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching v1, Secret, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching v1, ServiceAccount, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching route.openshift.io/v1, Route, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=error msg="failed to get resource client for (apiVersion:route.openshift.io/v1, kind:Route, ns:openshift-image-registry): failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(route.openshift.io/v1, Kind=Route): no matches for kind \"Route\" in version \"route.openshift.io/v1\""
panic: failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(route.openshift.io/v1, Kind=Route): no matches for kind "Route" in version "route.openshift.io/v1"
goroutine 1 [running]:
github.com/openshift/cluster-image-registry-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.Watch(0xc4206786a0, 0x15, 0x133c5d9, 0x5, 0xc420040040, 0x18, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/api.go:49 +0x4a8
main.watch(0xc4206786a0, 0x15, 0x133c5d9, 0x5, 0xc420040040, 0x18, 0x0)
/go/src/github.com/openshift/cluster-image-registry-operator/cmd/cluster-image-registry-operator/main.go:39 +0x228
main.main()
/go/src/github.com/openshift/cluster-image-registry-operator/cmd/cluster-image-registry-operator/main.go:82 +0x496
Looking into the "creating" issues from above:
$ kubectl logs -f -n tectonic-system kube-addon-operator-775d4c8f8d-frkqk
Error from server (BadRequest): container "kube-addon-operator" in pod "kube-addon-operator-775d4c8f8d-frkqk" is waiting to start: trying and failing to pull image
Dunno about that. Checking the pods again, a number of the API servers are terminating:
$ kubectl get --all-namespaces pods | grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver apiserver-4zwpm 0/1 Terminating 0 15s
openshift-apiserver apiserver-tbt7p 0/1 Terminating 0 15s
openshift-apiserver apiserver-zphpx 0/1 Terminating 0 15s
openshift-controller-manager controller-manager-7t8ds 0/1 ContainerCreating 0 43m
openshift-controller-manager controller-manager-pdxkx 0/1 ContainerCreating 0 43m
openshift-controller-manager controller-manager-v9vb5 0/1 ContainerCreating 0 43m
openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9 0/1 CrashLoopBackOff 11 42m
openshift-kube-apiserver apiserver-b4c7cdc89-cv9xc 0/1 ContainerCreating 0 43m
tectonic-system kube-addon-operator-775d4c8f8d-frkqk 0/1 ImagePullBackOff 0 43m
API servers under
$ kubectl get --all-namespaces pods | grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver apiserver-5fkvk 0/1 ContainerCreating 0 2s
openshift-apiserver apiserver-j5f2j 0/1 ContainerCreating 0 2s
openshift-apiserver apiserver-v8g7t 0/1 ContainerCreating 0 2s
openshift-controller-manager controller-manager-7t8ds 0/1 ContainerCreating 0 56m
openshift-controller-manager controller-manager-pdxkx 0/1 ContainerCreating 0 56m
openshift-controller-manager controller-manager-v9vb5 0/1 ContainerCreating 0 56m
openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9 0/1 CrashLoopBackOff 13 55m
openshift-kube-apiserver apiserver-b4c7cdc89-cv9xc 0/1 ContainerCreating 0 55m
tectonic-system kube-addon-operator-775d4c8f8d-frkqk 0/1 ImagePullBackOff 0 55m
$ kubectl get --all-namespaces pods | grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver apiserver-5fkvk 0/1 Terminating 0 14s
openshift-apiserver apiserver-v8g7t 0/1 Terminating 0 14s
openshift-controller-manager controller-manager-7t8ds 0/1 ContainerCreating 0 56m
openshift-controller-manager controller-manager-pdxkx 0/1 ContainerCreating 0 56m
openshift-controller-manager controller-manager-v9vb5 0/1 ContainerCreating 0 56m
openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9 0/1 CrashLoopBackOff 13 55m
openshift-kube-apiserver apiserver-b4c7cdc89-cv9xc 0/1 ContainerCreating 0 55m
tectonic-system kube-addon-operator-775d4c8f8d-frkqk 0/1 ImagePullBackOff 0 55m
$ kubectl get --all-namespaces pods | grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver apiserver-46657 0/1 Terminating 0 16s
openshift-apiserver apiserver-c5g2k 0/1 Terminating 0 16s
openshift-apiserver apiserver-nvcfz 0/1 Terminating 0 16s
openshift-controller-manager controller-manager-7t8ds 0/1 ContainerCreating 0 57m
openshift-controller-manager controller-manager-pdxkx 0/1 ContainerCreating 0 57m
openshift-controller-manager controller-manager-v9vb5 0/1 ContainerCreating 0 57m
openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9 0/1 CrashLoopBackOff 13 56m
openshift-kube-apiserver apiserver-b4c7cdc89-cv9xc 0/1 ContainerCreating 0 57m
tectonic-system kube-addon-operator-775d4c8f8d-frkqk 0/1 ImagePullBackOff 0 57m
Wandering around aimlessly:
$ kubectl get nodes -o yaml | grep '\sname:\|cpu\|memory'
name: ip-10-0-136-219.ec2.internal
cpu: "2"
memory: 8070820Ki
cpu: "2"
memory: 8173220Ki
message: kubelet has sufficient memory available
name: ip-10-0-16-249.ec2.internal
cpu: "2"
memory: 3942040Ki
cpu: "2"
memory: 4044440Ki
message: kubelet has sufficient memory available
name: ip-10-0-2-234.ec2.internal
cpu: "2"
memory: 3942040Ki
cpu: "2"
memory: 4044440Ki
message: kubelet has sufficient memory available
name: ip-10-0-38-177.ec2.internal
cpu: "2"
memory: 3942040Ki
cpu: "2"
memory: 4044440Ki
message: kubelet has sufficient memory available
name: ip-10-0-8-240.ec2.internal
cpu: "2"
memory: 3942040Ki
cpu: "2"
memory: 4044440Ki
message: kubelet has sufficient memory available
$ kubectl describe pods -n openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9
Name: cluster-image-registry-operator-869c995bc5-ccrn9
Namespace: openshift-image-registry
Priority: 0
PriorityClassName: <none>
Node: ip-10-0-136-219.ec2.internal/10.0.136.219
Start Time: Wed, 03 Oct 2018 22:45:47 +0000
Labels: name=cluster-image-registry-operator
pod-template-hash=4257551671
Annotations: openshift.io/scc=restricted
Status: Running
IP: 10.2.4.6
Controlled By: ReplicaSet/cluster-image-registry-operator-869c995bc5
Containers:
cluster-image-registry-operator:
Container ID: cri-o://1bd15a65433c6f0cf3674fdf522bd7355c4e42741a7efccfa328fda1fea63ed2
Image: registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2
Image ID: registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2
Port: 60000/TCP
Host Port: 0/TCP
Command:
cluster-image-registry-operator
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 03 Oct 2018 23:40:11 +0000
Finished: Wed, 03 Oct 2018 23:40:11 +0000
Ready: False
Restart Count: 15
Environment:
WATCH_NAMESPACE: openshift-image-registry (v1:metadata.namespace)
OPERATOR_NAME: cluster-image-registry-operator
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6p5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-6p6p5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6p6p5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 58m (x310 over 1h) default-scheduler 0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
Warning FailedCreatePodSandBox 55m kubelet, ip-10-0-136-219.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(bd2b553ae930d9afea620ac3bc9401828ae7a577b0637f77739324638d3d414e): open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 55m kubelet, ip-10-0-136-219.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(d27b30b91af951d2b96fcb67b9316b6c820ae1b0d8cd2681bcb2153cba249315): open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 54m kubelet, ip-10-0-136-219.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(dabf597aba822b351b50772943de49434d51ba26367877881a6a4d03c3b3c88a): open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 54m kubelet, ip-10-0-136-219.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(5286b6bd6666d5ea8bd1d0e79dd35110598dd46b0f1c6488cbe7738312c71386): open /run/flannel/subnet.env: no such file or directory
Normal Pulling 52m (x4 over 54m) kubelet, ip-10-0-136-219.ec2.internal pulling image "registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2"
Normal Pulled 52m (x4 over 54m) kubelet, ip-10-0-136-219.ec2.internal Successfully pulled image "registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2"
Normal Created 52m (x4 over 54m) kubelet, ip-10-0-136-219.ec2.internal Created container
Normal Started 52m (x4 over 54m) kubelet, ip-10-0-136-219.ec2.internal Started container
Warning BackOff 21s (x242 over 53m) kubelet, ip-10-0-136-219.ec2.internal Back-off restarting failed container
/retest Please review the full test history for this PR and help us cut down flakes.
/retest Please review the full test history for this PR and help us cut down flakes.
With openshift-install, the config type is a one-way map from InstallConfig to Terraform, so we can drop these methods. The last consumers were removed in b6c0d8c (installer: remove package, 2018-09-26, openshift#342).
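A minimal sketch of that one-way direction (illustrative only; the real variable names and serialization live in pkg/tfvars, not here):

package main

import (
	"encoding/json"
	"log"
	"os"
)

// tfVars stands in for the installer's Terraform variables type.
type tfVars struct {
	ClusterName string `json:"tectonic_cluster_name"`
	Region      string `json:"tectonic_aws_region"`
}

func main() {
	// Go fills the struct (ultimately from an InstallConfig) and writes it
	// out for Terraform; nothing ever reads the file back into Go, so the
	// Parse* helpers had no remaining callers.
	vars := tfVars{ClusterName: "demo", Region: "us-east-1"}
	data, err := json.MarshalIndent(vars, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("terraform.tfvars", data, 0644); err != nil {
		log.Fatal(err)
	}
}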
Force-pushed from 33da58b to f7a4e68
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: abhinavdahiya, wking
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Workers failed to come up. Might be the DNS issue with flannel and kube-dns.
/test e2e-aws
We haven't needed these since we dropped the parsers in f7a4e68 (pkg/types/config: Drop ParseConfig and other Parse* methods, 2018-10-02, openshift#403). Generated with:
$ sed -i 's/ yaml:.*/`/' $(git grep -l yaml pkg/tfvars)
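For illustration, the effect of that sed on a struct tag, using a hypothetical pkg/tfvars field (the real field names differ):

// Before the sweep, the field carried both json and yaml tags:
type awsBefore struct {
	Region string `json:"tectonic_aws_region,omitempty" yaml:"region,omitempty"`
}

// The sed above replaces everything from " yaml:" to the end of the line
// with a closing backtick, leaving only the json tag:
type awsAfter struct {
	Region string `json:"tectonic_aws_region,omitempty"`
}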
This functionality was originally from 8324c21 (AWS: VPC subnets with custom CIDRs and AZs per workers / masters, 2017-04-20, coreos/tectonic-installer#267), but we haven't exposed it in openshift-install (which has never used the parsers removed by f7a4e68, pkg/types/config: Drop ParseConfig and other Parse* methods, 2018-10-02, openshift#403).

Currently we are unable to scale masters post-install, because auto-scaling etcd is difficult. Depending on how long that takes us to get working, we may need to re-enable this for masters later. Workers are already manageable via the cluster API and MachineSets, so folks who need custom worker subnets can create a cluster without workers and then launch their worker machine-sets directly as a day-2 operation.

The cluster-API type chain is:

* MachineSet.Spec [1]
* MachineSetSpec.Template [2]
* MachineTemplateSpec.Spec [3]
* MachineSpec.ProviderConfig [4]
* ProviderConfig.Value [5]
* RawExtension

which is nice and generic, but a dead-end for structured configuration ;) (see the sketch after this comment). Jumping over to the OpenShift AWS provider, there is an AWSMachineProviderConfig.Subnet [6]. I don't see code for auto-creating those subnets, but an admin could manually create the subnet wherever they wanted and then use the cluster API to launch new workers into that subnet. And maybe there will be generic tooling to automate that subnet creation (setting up routing, etc.) to make that less tedious/error-prone. Also in this space, see [7,8].

[1]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machineset_types.go#L42
[2]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machineset_types.go#L68-L71
[3]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machineset_types.go#L84-L87
[4]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machine_types.go#L62-L64
[5]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/common_types.go#L29-L34
[6]: https://github.com/openshift/cluster-api-provider-aws/blob/e6986093d1fbac2084c50b04fe2f78125ffca582/pkg/apis/awsproviderconfig/v1alpha1/awsmachineproviderconfig_types.go#L130-L131
[7]: kubernetes/kops#1333
[8]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/cluster_types.go#L62-L82
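A rough Go sketch of that type chain, only to show where the structured AWS config ends up; the type names follow the links above, but the import paths and the helper function are assumptions, not code from either repository:

package sketch

import (
	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// workerMachineSet wraps an already-serialized AWSMachineProviderConfig
// (whose Subnet field [6] would point at the admin-created subnet) in the
// generic RawExtension at the bottom of the chain.
func workerMachineSet(awsProviderConfigJSON []byte) clusterv1.MachineSet {
	return clusterv1.MachineSet{
		Spec: clusterv1.MachineSetSpec{ // MachineSet.Spec [1]
			Template: clusterv1.MachineTemplateSpec{ // MachineSetSpec.Template [2]
				Spec: clusterv1.MachineSpec{ // MachineTemplateSpec.Spec [3]
					ProviderConfig: clusterv1.ProviderConfig{ // MachineSpec.ProviderConfig [4]
						Value: &runtime.RawExtension{ // ProviderConfig.Value [5]
							Raw: awsProviderConfigJSON,
						},
					},
				},
			},
		},
	}
}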