
pkg/types/config: Drop ParseConfig and other Parse* methods #403

Merged Oct 5, 2018 (1 commit)

Conversation

@wking (Member) commented Oct 3, 2018

With openshift-install, the config type is a one-way map from InstallConfig to Terraform, so we can drop these methods. The last consumers were removed in b6c0d8c (#342).

@openshift-ci-robot added the size/M (denotes a PR that changes 30-99 lines, ignoring generated files) and approved (indicates a PR has been approved by an approver from all required OWNERS files) labels on Oct 3, 2018
@abhinavdahiya (Contributor)

/lgtm

@openshift-ci-robot added the lgtm (indicates that a PR is ready to be merged) label on Oct 3, 2018
@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@wking (Member, Author) commented Oct 3, 2018

e2e:

Found router in openshift-ingress
error: .status.conditions accessor error: Failure is of the type string, expected map[string]interface{}
error deploy/router did not come up
...
Oct  3 19:43:29.198: INFO: About to run a Kube e2e test, ensuring namespace is privileged
goroutine 82 [running]:
runtime/debug.Stack(0x0, 0xc421448440, 0xc421c62e38)
	/usr/local/go/src/runtime/debug/stack.go:24 +0xa7
github.com/openshift/origin/test/extended/util.FatalErr(0x3e41420, 0xc4222fef30)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:664 +0x26
github.com/openshift/origin/test/extended/util.addE2EServiceAccountsToSCC(0x4b17ba0, 0xc421e82600, 0xc4220ac500, 0x1, 0x1, 0x4629e8a, 0xa)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:449 +0x111
github.com/openshift/origin/test/extended/util.createTestingNS(0x4621d79, 0x7, 0x4bc94a0, 0xc422234c30, 0xc420767050, 0x44704a0, 0x78124b9b1ec0ad55, 0xc420767050)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:228 +0x241
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*Framework).CreateNamespace(0xc4216fba40, 0x4621d79, 0x7, 0xc420767050, 0xc4205ce508, 0x0, 0xc4204b7901)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:400 +0x73
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc4216fba40)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:214 +0x79f
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach-fm()
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 +0x2a
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc4217d03c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:109 +0x9c
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc4217d03c0, 0xc4200e1660, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0x13e
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc420aa7048, 0x4b0c800, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:14 +0x7f
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc421572680, 0x0, 0x4b0c800, 0xc4200d2de0)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:158 +0x1e0
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc421572680, 0x4b0c800, 0xc4200d2de0)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:127 +0xe3
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc42149b400, 0xc421572680, 0x0)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:198 +0x10d
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc42149b400, 0x483ee01)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:168 +0x32c
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc42149b400, 0x8)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:64 +0xdc
github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200de2d0, 0x7f73c6d07e10, 0xc422234b40, 0x4622aec, 0x8, 0xc422274480, 0x2, 0x2, 0x4b6bde0, 0xc4200d2de0, ...)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c
github.com/openshift/origin/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x4b10c40, 0xc422234b40, 0x4622aec, 0x8, 0xc422274460, 0x2, 0x2, 0x1)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:222 +0x253
github.com/openshift/origin/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x4b10c40, 0xc422234b40, 0x4622aec, 0x8, 0xc42203a380, 0x1, 0x1, 0x1)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:210 +0x129
github.com/openshift/origin/test/extended/util.ExecuteTest(0xc422234b40, 0x4622aec, 0x8)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:162 +0x5b5
github.com/openshift/origin/test/extended.TestExtended(0xc422234b40)
	/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/extended_test.go:54 +0x40
testing.tRunner(0xc422234b40, 0x483c768)
	/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:824 +0x2e0
...
• Failure in Spec Setup (BeforeEach) [2.054 seconds]
[sig-storage] Secrets
/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] [BeforeEach]
  /tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684

  Oct  3 19:43:29.214: the server is currently unable to handle the request (get securitycontextconstraints.security.openshift.io privileged)

  /tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:665
...
Summarizing 1 Failure:

[Fail] [sig-storage] Secrets [BeforeEach] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 
/tmp/openshift/build-rpms/rpm/BUILD/origin-4.0.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:665

@wking (Member, Author) commented Oct 3, 2018

/retest

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@wking (Member, Author) commented Oct 3, 2018

From the current e2e run and the OpenShift console:

$ export KUBECONFIG=/tmp/artifacts/installer/auth/kubeconfig
$ kubectl get --all-namespaces pods
NAMESPACE                            NAME                                                              READY     STATUS              RESTARTS   AGE
kube-system                          kube-apiserver-5h22s                                              1/1       Running             0          32m
kube-system                          kube-apiserver-hp9pl                                              1/1       Running             0          32m
kube-system                          kube-apiserver-zswsp                                              1/1       Running             0          32m
kube-system                          kube-controller-manager-f7574c6fc-lq54q                           1/1       Running             0          33m
kube-system                          kube-core-operator-854857854d-ks4tg                               1/1       Running             0          29m
kube-system                          kube-dns-787c975867-khg9l                                         3/3       Running             0          32m
kube-system                          kube-flannel-krsjx                                                2/2       Running             0          31m
kube-system                          kube-flannel-pnd7m                                                2/2       Running             2          21m
kube-system                          kube-flannel-q6kg9                                                2/2       Running             0          31m
kube-system                          kube-flannel-tfg5b                                                2/2       Running             0          31m
kube-system                          kube-flannel-tjqjg                                                2/2       Running             0          31m
kube-system                          kube-proxy-9tn44                                                  1/1       Running             0          32m
kube-system                          kube-proxy-gbmnw                                                  1/1       Running             0          21m
kube-system                          kube-proxy-r72bz                                                  1/1       Running             0          33m
kube-system                          kube-proxy-rxhf2                                                  1/1       Running             0          32m
kube-system                          kube-proxy-v9b84                                                  1/1       Running             0          32m
kube-system                          kube-scheduler-78d86f9754-ch2wc                                   1/1       Running             0          32m
kube-system                          openshift-apiserver-lvwb7                                         1/1       Running             0          32m
kube-system                          openshift-apiserver-pt4x8                                         1/1       Running             0          32m
kube-system                          openshift-apiserver-t7qjh                                         1/1       Running             0          32m
kube-system                          openshift-controller-manager-d9787b9c-pdcpw                       1/1       Running             0          33m
kube-system                          pod-checkpointer-gqf2z                                            1/1       Running             0          32m
kube-system                          pod-checkpointer-gqf2z-ip-10-0-16-249.ec2.internal                1/1       Running             0          32m
kube-system                          pod-checkpointer-nggdv                                            1/1       Running             0          33m
kube-system                          pod-checkpointer-nggdv-ip-10-0-8-240.ec2.internal                 1/1       Running             0          32m
kube-system                          pod-checkpointer-w7k4x                                            1/1       Running             0          33m
kube-system                          pod-checkpointer-w7k4x-ip-10-0-38-177.ec2.internal                1/1       Running             0          32m
kube-system                          tectonic-network-operator-9z2fm                                   1/1       Running             0          32m
kube-system                          tectonic-network-operator-n288n                                   1/1       Running             0          32m
kube-system                          tectonic-network-operator-r2jn5                                   1/1       Running             0          32m
openshift-apiserver                  apiserver-k5t8g                                                   0/1       ContainerCreating   0          12s
openshift-apiserver                  apiserver-k5w6g                                                   0/1       ContainerCreating   0          12s
openshift-apiserver                  apiserver-p5ts2                                                   0/1       ContainerCreating   0          12s
openshift-cluster-api                clusterapi-apiserver-6b855f7bc5-l555n                             2/2       Running             1          30m
openshift-cluster-api                clusterapi-controllers-797f4d6967-vzplg                           2/2       Running             0          29m
openshift-cluster-api                machine-api-operator-5d85454676-ch28t                             1/1       Running             0          31m
openshift-cluster-dns-operator       cluster-dns-operator-6c4c47c596-hmtd7                             1/1       Running             0          29m
openshift-cluster-ingress-operator   cluster-ingress-operator-d5789858b-cccj4                          1/1       Running             0          29m
openshift-cluster-network-operator   cluster-network-operator-574998b758-87bps                         1/1       Running             0          31m
openshift-cluster-samples-operator   cluster-samples-operator-67788bc449-n24wn                         1/1       Running             0          29m
openshift-cluster-version            bootstrap-cluster-version-operator-ip-10-0-2-234.ec2.internal     1/1       Running             1          32m
openshift-cluster-version            cluster-version-operator-54d6878787-rvpp5                         1/1       Running             0          32m
openshift-controller-manager         controller-manager-7t8ds                                          0/1       ContainerCreating   0          30m
openshift-controller-manager         controller-manager-pdxkx                                          0/1       ContainerCreating   0          30m
openshift-controller-manager         controller-manager-v9vb5                                          0/1       ContainerCreating   0          30m
openshift-core-operators             openshift-cluster-kube-apiserver-operator-6dbbc8db87-v22tr        1/1       Running             0          31m
openshift-core-operators             openshift-cluster-kube-controller-manager-operator-66d84c4sfjzd   1/1       Running             0          31m
openshift-core-operators             openshift-cluster-kube-scheduler-operator-5b47cbf44d-xt5dt        1/1       Running             0          31m
openshift-core-operators             openshift-cluster-openshift-apiserver-operator-58c696bfd9-szm2q   1/1       Running             0          31m
openshift-core-operators             openshift-cluster-openshift-controller-manager-operator-6csgz6j   1/1       Running             0          31m
openshift-core-operators             openshift-service-cert-signer-operator-66fcb486c8-hdwn2           1/1       Running             0          31m
openshift-image-registry             cluster-image-registry-operator-869c995bc5-ccrn9                  0/1       CrashLoopBackOff    8          29m
openshift-ingress                    tectonic-ingress-controller-operator-599fdd5cff-mp798             1/1       Running             0          29m
openshift-kube-apiserver             apiserver-b4c7cdc89-cv9xc                                         0/1       ContainerCreating   0          30m
openshift-kube-scheduler             scheduler-fc69c9cd9-bjkvv                                         1/1       Running             0          30m
openshift-machine-config-operator    machine-config-controller-5d9f77f479-mzgct                        1/1       Running             0          28m
openshift-machine-config-operator    machine-config-daemon-5l5vq                                       1/1       Running             0          26m
openshift-machine-config-operator    machine-config-daemon-cjj4x                                       1/1       Running             0          26m
openshift-machine-config-operator    machine-config-daemon-mbnzk                                       1/1       Running             0          21m
openshift-machine-config-operator    machine-config-daemon-zsqs6                                       1/1       Running             0          26m
openshift-machine-config-operator    machine-config-operator-77d88bf865-rl8tb                          1/1       Running             0          31m
openshift-machine-config-operator    machine-config-server-jbfv6                                       1/1       Running             0          27m
openshift-machine-config-operator    machine-config-server-lxtjw                                       1/1       Running             0          27m
openshift-machine-config-operator    machine-config-server-tjxqr                                       1/1       Running             0          27m
openshift-monitoring                 cluster-monitoring-operator-6b77bf9bd6-hnp66                      1/1       Running             0          29m
openshift-monitoring                 prometheus-operator-5bf8644c75-fldg2                              1/1       Running             0          18m
openshift-service-cert-signer        apiservice-cabundle-injector-696b9d4c8f-rd5d7                     1/1       Running             0          30m
openshift-service-cert-signer        configmap-cabundle-injector-7b94c9949d-8r9h9                      1/1       Running             0          30m
openshift-service-cert-signer        service-serving-cert-signer-564c6b47b7-zm54g                      1/1       Running             0          30m
tectonic-system                      kube-addon-operator-775d4c8f8d-frkqk                              0/1       ImagePullBackOff    0          29m
$ kubectl logs -n openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9
time="2018-10-03T23:09:12Z" level=info msg="Cluster Image Registry Operator Version: c2753e9-dirty"
time="2018-10-03T23:09:12Z" level=info msg="Go Version: go1.10.3"
time="2018-10-03T23:09:12Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-10-03T23:09:12Z" level=info msg="operator-sdk Version: 0.0.6+git"
E1003 23:09:12.518994       1 memcache.go:153] couldn't get resource list for apps.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.521719       1 memcache.go:153] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.523561       1 memcache.go:153] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.525691       1 memcache.go:153] couldn't get resource list for image.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.535615       1 memcache.go:153] couldn't get resource list for network.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.538190       1 memcache.go:153] couldn't get resource list for oauth.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.662266       1 memcache.go:153] couldn't get resource list for project.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.683553       1 memcache.go:153] couldn't get resource list for quota.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.703499       1 memcache.go:153] couldn't get resource list for route.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.717847       1 memcache.go:153] couldn't get resource list for security.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.719404       1 memcache.go:153] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request
E1003 23:09:12.720938       1 memcache.go:153] couldn't get resource list for user.openshift.io/v1: the server is currently unable to handle the request
time="2018-10-03T23:09:13Z" level=info msg="Metrics service cluster-image-registry-operator created"
time="2018-10-03T23:09:13Z" level=info msg="Watching rbac.authorization.k8s.io/v1, ClusterRole, , 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching rbac.authorization.k8s.io/v1, ClusterRoleBinding, , 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching v1, ConfigMap, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching v1, Secret, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching v1, ServiceAccount, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=info msg="Watching route.openshift.io/v1, Route, openshift-image-registry, 0"
time="2018-10-03T23:09:13Z" level=error msg="failed to get resource client for (apiVersion:route.openshift.io/v1, kind:Route, ns:openshift-image-registry): failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(route.openshift.io/v1, Kind=Route): no matches for kind \"Route\" in version \"route.openshift.io/v1\""
panic: failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(route.openshift.io/v1, Kind=Route): no matches for kind "Route" in version "route.openshift.io/v1"

goroutine 1 [running]:
github.com/openshift/cluster-image-registry-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.Watch(0xc4206786a0, 0x15, 0x133c5d9, 0x5, 0xc420040040, 0x18, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/api.go:49 +0x4a8
main.watch(0xc4206786a0, 0x15, 0x133c5d9, 0x5, 0xc420040040, 0x18, 0x0)
        /go/src/github.com/openshift/cluster-image-registry-operator/cmd/cluster-image-registry-operator/main.go:39 +0x228
main.main()
        /go/src/github.com/openshift/cluster-image-registry-operator/cmd/cluster-image-registry-operator/main.go:82 +0x496

@wking (Member, Author) commented Oct 3, 2018

Looking into the stuck pods from above:

$ kubectl logs -f -n tectonic-system kube-addon-operator-775d4c8f8d-frkqk
Error from server (BadRequest): container "kube-addon-operator" in pod "kube-addon-operator-775d4c8f8d-frkqk" is waiting to start: trying and failing to pull image

Not sure what's behind that pull failure. Checking the pods again, a number of the API servers are terminating:

$ kubectl get --all-namespaces pods  | grep -v Running
NAMESPACE                            NAME                                                              READY     STATUS              RESTARTS   AGE
openshift-apiserver                  apiserver-4zwpm                                                   0/1       Terminating         0          15s
openshift-apiserver                  apiserver-tbt7p                                                   0/1       Terminating         0          15s
openshift-apiserver                  apiserver-zphpx                                                   0/1       Terminating         0          15s
openshift-controller-manager         controller-manager-7t8ds                                          0/1       ContainerCreating   0          43m
openshift-controller-manager         controller-manager-pdxkx                                          0/1       ContainerCreating   0          43m
openshift-controller-manager         controller-manager-v9vb5                                          0/1       ContainerCreating   0          43m
openshift-image-registry             cluster-image-registry-operator-869c995bc5-ccrn9                  0/1       CrashLoopBackOff    11         42m
openshift-kube-apiserver             apiserver-b4c7cdc89-cv9xc                                         0/1       ContainerCreating   0          43m
tectonic-system                      kube-addon-operator-775d4c8f8d-frkqk                              0/1       ImagePullBackOff    0          43m

@wking (Member, Author) commented Oct 3, 2018

API servers under openshift-apiserver seem to be creating and dying in an endless cycle:

$ kubectl get --all-namespaces pods  | grep -v Running
NAMESPACE                            NAME                                                              READY     STATUS              RESTARTS   AGE
openshift-apiserver                  apiserver-5fkvk                                                   0/1       ContainerCreating   0          2s
openshift-apiserver                  apiserver-j5f2j                                                   0/1       ContainerCreating   0          2s
openshift-apiserver                  apiserver-v8g7t                                                   0/1       ContainerCreating   0          2s
openshift-controller-manager         controller-manager-7t8ds                                          0/1       ContainerCreating   0          56m
openshift-controller-manager         controller-manager-pdxkx                                          0/1       ContainerCreating   0          56m
openshift-controller-manager         controller-manager-v9vb5                                          0/1       ContainerCreating   0          56m
openshift-image-registry             cluster-image-registry-operator-869c995bc5-ccrn9                  0/1       CrashLoopBackOff    13         55m
openshift-kube-apiserver             apiserver-b4c7cdc89-cv9xc                                         0/1       ContainerCreating   0          55m
tectonic-system                      kube-addon-operator-775d4c8f8d-frkqk                              0/1       ImagePullBackOff    0          55m
$ kubectl get --all-namespaces pods  | grep -v Running
NAMESPACE                            NAME                                                              READY     STATUS              RESTARTS   AGE
openshift-apiserver                  apiserver-5fkvk                                                   0/1       Terminating         0          14s
openshift-apiserver                  apiserver-v8g7t                                                   0/1       Terminating         0          14s
openshift-controller-manager         controller-manager-7t8ds                                          0/1       ContainerCreating   0          56m
openshift-controller-manager         controller-manager-pdxkx                                          0/1       ContainerCreating   0          56m
openshift-controller-manager         controller-manager-v9vb5                                          0/1       ContainerCreating   0          56m
openshift-image-registry             cluster-image-registry-operator-869c995bc5-ccrn9                  0/1       CrashLoopBackOff    13         55m
openshift-kube-apiserver             apiserver-b4c7cdc89-cv9xc                                         0/1       ContainerCreating   0          55m
tectonic-system                      kube-addon-operator-775d4c8f8d-frkqk                              0/1       ImagePullBackOff    0          55m
$ kubectl get --all-namespaces pods  | grep -v Running
NAMESPACE                            NAME                                                              READY     STATUS              RESTARTS   AGE
openshift-apiserver                  apiserver-46657                                                   0/1       Terminating         0          16s
openshift-apiserver                  apiserver-c5g2k                                                   0/1       Terminating         0          16s
openshift-apiserver                  apiserver-nvcfz                                                   0/1       Terminating         0          16s
openshift-controller-manager         controller-manager-7t8ds                                          0/1       ContainerCreating   0          57m
openshift-controller-manager         controller-manager-pdxkx                                          0/1       ContainerCreating   0          57m
openshift-controller-manager         controller-manager-v9vb5                                          0/1       ContainerCreating   0          57m
openshift-image-registry             cluster-image-registry-operator-869c995bc5-ccrn9                  0/1       CrashLoopBackOff    13         56m
openshift-kube-apiserver             apiserver-b4c7cdc89-cv9xc                                         0/1       ContainerCreating   0          57m
tectonic-system                      kube-addon-operator-775d4c8f8d-frkqk                              0/1       ImagePullBackOff    0          57m

@wking (Member, Author) commented Oct 3, 2018

Wandering around aimlessly:

$ kubectl get nodes -o yaml | grep '\sname:\|cpu\|memory'
    name: ip-10-0-136-219.ec2.internal
      cpu: "2"
      memory: 8070820Ki
      cpu: "2"
      memory: 8173220Ki
      message: kubelet has sufficient memory available
    name: ip-10-0-16-249.ec2.internal
      cpu: "2"
      memory: 3942040Ki
      cpu: "2"
      memory: 4044440Ki
      message: kubelet has sufficient memory available
    name: ip-10-0-2-234.ec2.internal
      cpu: "2"
      memory: 3942040Ki
      cpu: "2"
      memory: 4044440Ki
      message: kubelet has sufficient memory available
    name: ip-10-0-38-177.ec2.internal
      cpu: "2"
      memory: 3942040Ki
      cpu: "2"
      memory: 4044440Ki
      message: kubelet has sufficient memory available
    name: ip-10-0-8-240.ec2.internal
      cpu: "2"
      memory: 3942040Ki
      cpu: "2"
      memory: 4044440Ki
      message: kubelet has sufficient memory available

@wking (Member, Author) commented Oct 3, 2018

$ kubectl describe pods -n openshift-image-registry cluster-image-registry-operator-869c995bc5-ccrn9
Name:               cluster-image-registry-operator-869c995bc5-ccrn9
Namespace:          openshift-image-registry
Priority:           0
PriorityClassName:  <none>
Node:               ip-10-0-136-219.ec2.internal/10.0.136.219
Start Time:         Wed, 03 Oct 2018 22:45:47 +0000
Labels:             name=cluster-image-registry-operator
                    pod-template-hash=4257551671
Annotations:        openshift.io/scc=restricted
Status:             Running
IP:                 10.2.4.6
Controlled By:      ReplicaSet/cluster-image-registry-operator-869c995bc5
Containers:
  cluster-image-registry-operator:
    Container ID:  cri-o://1bd15a65433c6f0cf3674fdf522bd7355c4e42741a7efccfa328fda1fea63ed2
    Image:         registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2
    Image ID:      registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2
    Port:          60000/TCP
    Host Port:     0/TCP
    Command:
      cluster-image-registry-operator
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 03 Oct 2018 23:40:11 +0000
      Finished:     Wed, 03 Oct 2018 23:40:11 +0000
    Ready:          False
    Restart Count:  15
    Environment:
      WATCH_NAMESPACE:  openshift-image-registry (v1:metadata.namespace)
      OPERATOR_NAME:    cluster-image-registry-operator
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6p5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-6p6p5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6p6p5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                  Age                  From                                   Message
  ----     ------                  ----                 ----                                   -------
  Warning  FailedScheduling        58m (x310 over 1h)   default-scheduler                      0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
  Warning  FailedCreatePodSandBox  55m                  kubelet, ip-10-0-136-219.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(bd2b553ae930d9afea620ac3bc9401828ae7a577b0637f77739324638d3d414e): open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  55m                  kubelet, ip-10-0-136-219.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(d27b30b91af951d2b96fcb67b9316b6c820ae1b0d8cd2681bcb2153cba249315): open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  54m                  kubelet, ip-10-0-136-219.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(dabf597aba822b351b50772943de49434d51ba26367877881a6a4d03c3b3c88a): open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  54m                  kubelet, ip-10-0-136-219.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-869c995bc5-ccrn9_openshift-image-registry_e112b284-c75c-11e8-ad65-1267b6294ade_0(5286b6bd6666d5ea8bd1d0e79dd35110598dd46b0f1c6488cbe7738312c71386): open /run/flannel/subnet.env: no such file or directory
  Normal   Pulling                 52m (x4 over 54m)    kubelet, ip-10-0-136-219.ec2.internal  pulling image "registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2"
  Normal   Pulled                  52m (x4 over 54m)    kubelet, ip-10-0-136-219.ec2.internal  Successfully pulled image "registry.svc.ci.openshift.org/ci-op-lpz1gxwg/stable@sha256:61b10a249a6efcf5ca2affd605365008115c1781fbd857b503f73d7091d23fd2"
  Normal   Created                 52m (x4 over 54m)    kubelet, ip-10-0-136-219.ec2.internal  Created container
  Normal   Started                 52m (x4 over 54m)    kubelet, ip-10-0-136-219.ec2.internal  Started container
  Warning  BackOff                 21s (x242 over 53m)  kubelet, ip-10-0-136-219.ec2.internal  Back-off restarting failed container

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.


@wking
Member Author

wking commented Oct 4, 2018

/hold

Dropping pkg/types/config/parser.go is going to conflict with #400, which is also in the merge queue. I'll rebase and get another /lgtm on this once #400 lands.

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 4, 2018
@abhinavdahiya
Contributor

@wking #400 has merged; you can rebase now. :)

With openshift-install, the config type is a one-way map from
InstallConfig to Terraform, so we can drop these methods.  The last
consumers were removed in b6c0d8c (installer: remove package,
2018-09-26, openshift#342).
@wking wking force-pushed the drop-config-parser branch from 33da58b to f7a4e68 Compare October 4, 2018 20:20
@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Oct 4, 2018
@wking
Member Author

wking commented Oct 4, 2018

Rebased onto master to pick up #400 with 33da58b -> f7a4e68.

/hold cancel

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 4, 2018
@abhinavdahiya
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Oct 4, 2018
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [abhinavdahiya,wking]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@abhinavdahiya
Contributor

Workers failed to come up. Might be the DNS issue with flannel and kube-dns.

/test e2e-aws

@openshift-merge-robot openshift-merge-robot merged commit 72175dd into openshift:master Oct 5, 2018
@wking wking deleted the drop-config-parser branch October 5, 2018 02:56
wking added a commit to wking/openshift-installer that referenced this pull request Nov 16, 2018
We haven't needed these since we dropped the parsers in f7a4e68
(pkg/types/config: Drop ParseConfig and other Parse* methods,
2018-10-02, openshift#403).

Generated with:

  $ sed -i 's/ yaml:.*/`/' $(git grep -l yaml pkg/tfvars)
@wking wking mentioned this pull request Nov 16, 2018
wking added a commit to wking/openshift-installer that referenced this pull request Nov 18, 2018
This functionality was originally from 8324c21 (AWS: VPC subnets with
custom CIDRs and AZs per workers / masters, 2017-04-20,
coreos/tectonic-installer#267), but we haven't exposed it in
openshift-install (which has never used the parsers removed by
f7a4e68, pkg/types/config: Drop ParseConfig and other Parse* methods,
2018-10-02, openshift#403).

Currently we are unable to scale masters post-install, because
auto-scaling etcd is difficult.  Depending on how long that takes us
to get working, we may need to re-enable this for masters later.

Workers are already manageable via the cluster API and MachineSets, so
folks who need custom worker subnets can create a cluster without
workers and then launch their worker machine-sets directly as a day-2
operation.  The cluster-API type chain is:

* MachineSet.Spec [1]
* MachineSetSpec.Template [2]
* MachineTemplateSpec.Spec [3]
* MachineSpec.ProviderConfig [4]
* ProviderConfig.Value [5]
* RawExtension

which is nice and generic, but a dead-end for structured configuration
;).  Jumping over to the OpenShift AWS provider, there is an
AWSMachineProviderConfig.Subnet [6].  I don't see code for
auto-creating those subnets, but an admin could manually create the
subnet wherever they wanted and then use the cluster API to launch new
workers into that subnet.  And maybe there will be generic tooling to
automate that subnet creation (setting up routing, etc.) to make that
less tedious/error-prone.

Also in this space, see [7,8]

[1]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machineset_types.go#L42
[2]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machineset_types.go#L68-L71
[3]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machineset_types.go#L84-L87
[4]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/machine_types.go#L62-L64
[5]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/common_types.go#L29-L34
[6]: https://github.com/openshift/cluster-api-provider-aws/blob/e6986093d1fbac2084c50b04fe2f78125ffca582/pkg/apis/awsproviderconfig/v1alpha1/awsmachineproviderconfig_types.go#L130-L131
[7]: kubernetes/kops#1333
[8]: https://github.com/kubernetes-sigs/cluster-api/blob/0734939e05aeb64ab198e3feeee8b4e90ee5cbb2/pkg/apis/cluster/v1alpha1/cluster_types.go#L62-L82
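To make the nesting in that type chain concrete, here is a minimal Go sketch of how a day-2 worker MachineSet would carry a subnet reference down through `ProviderConfig.Value` into the AWS provider config. The struct definitions are simplified, hypothetical stand-ins (only the fields named above; the real types live in sigs.k8s.io/cluster-api and openshift/cluster-api-provider-aws), and the subnet ID is made up:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Pared-down stand-ins for the cluster-API chain above (hypothetical
// simplifications; see links [1]-[5] for the real definitions).
type RawExtension struct{ Raw []byte }

type ProviderConfig struct{ Value *RawExtension }

type MachineSpec struct{ ProviderConfig ProviderConfig }

type MachineTemplateSpec struct{ Spec MachineSpec }

type MachineSetSpec struct{ Template MachineTemplateSpec }

type MachineSet struct{ Spec MachineSetSpec }

// AWSMachineProviderConfig.Subnet [6] is where a manually created
// subnet would be referenced.
type AWSResourceReference struct {
	ID *string `json:"id,omitempty"`
}

type AWSMachineProviderConfig struct {
	Subnet AWSResourceReference `json:"subnet"`
}

// workerMachineSet builds a MachineSet whose provider config pins
// workers to the given (pre-created) subnet, walking the type chain
// end to end: MachineSet -> Template -> MachineSpec -> ProviderConfig
// -> RawExtension.
func workerMachineSet(subnetID string) MachineSet {
	raw, err := json.Marshal(AWSMachineProviderConfig{
		Subnet: AWSResourceReference{ID: &subnetID},
	})
	if err != nil {
		panic(err)
	}
	var ms MachineSet
	ms.Spec.Template.Spec.ProviderConfig.Value = &RawExtension{Raw: raw}
	return ms
}

func main() {
	// Hypothetical subnet ID an admin created ahead of time.
	ms := workerMachineSet("subnet-0123456789abcdef0")
	fmt.Println(string(ms.Spec.Template.Spec.ProviderConfig.Value.Raw))
	// → {"subnet":{"id":"subnet-0123456789abcdef0"}}
}
```

The `RawExtension` hand-off is what makes the generic chain a "dead-end for structured configuration": the cluster API only sees opaque bytes, and it is the AWS provider that unmarshals them back into `AWSMachineProviderConfig`.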