Add ClusterVersionOperator to render operators at install. #330
Conversation
Requires openshift/cluster-version-operator#22 to allow rendering. Requires openshift/machine-config-operator#95 to correctly install the MCO using the CVO.
/hold Holding while we enable CI on the new installer.
There are a few changes required based on discussions with the network and master teams.
/hold cancel
Does it make sense for the swap from tectonic-network-operator -> cluster-network-operator to be a part of this PR? I think so. In order for this to happen, we'll also have to drop kube-core-operator and enable cluster-dns-operator too.
pkg/asset/manifests/tectonic.go
Outdated
"99_tectonic-ingress-00-appversion.yaml": []byte(content.AppVersionTectonicIngress), | ||
"99_tectonic-ingress-01-cluster-config.yaml": applyTemplateData(content.ClusterConfigTectonicIngress, templateData), | ||
"99_tectonic-ingress-02-tls.yaml": applyTemplateData(content.TLSTectonicIngress, templateData), | ||
"99_tectonic-ingress-03-pull.yaml": applyTemplateData(content.PullTectonicIngress, templateData), |
This is a json file
pkg/asset/manifests/tectonic.go
Outdated
"99_tectonic-system-00-binding-admin.yaml": []byte(content.BindingAdmin), | ||
"99_tectonic-system-01-ca-cert.yaml": applyTemplateData(content.CaCertTectonicSystem, templateData), | ||
"99_tectonic-system-02-privileged-scc.yaml": []byte(content.PriviledgedSccTectonicSystem), | ||
"99_tectonic-system-03-pull.yaml": applyTemplateData(content.PullTectonicSystem, templateData), |
json, not yaml
json, not yaml
JSON is a subset of YAML, so a YAML parser (triggered by the .yaml suffix?) should have no trouble with JSON content read from that file. Do we have JSON-only tooling that would care about distinguishing JSON files from YAML-with-features-outside-of-JSON files?
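For illustration, here's a minimal Go sketch (using gopkg.in/yaml.v2 purely as an example YAML library — an assumption, not a claim about which parser the installer or its operators actually use) showing JSON content passing through a YAML unmarshaller unchanged:

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// A JSON document stored under a .yaml name. JSON syntax is accepted by
	// YAML parsers, so no special handling is needed when reading it back.
	jsonContent := []byte(`{"apiVersion": "v1", "kind": "Secret", "metadata": {"name": "coreos-pull-secret"}}`)

	var manifest map[string]interface{}
	if err := yaml.Unmarshal(jsonContent, &manifest); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("kind:", manifest["kind"]) // prints: kind: Secret
}
```

The suffix would only matter to tooling that keys off the file extension rather than the content.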
// Pull is the variable/constant representing the contents of the respective file
Pull = template.Must(template.New("pull.json").Parse(`
// PullTectonicIngress is the variable/constant representing the contents of the respective file
PullTectonicIngress = template.Must(template.New("tectonic-ingress-03-pull.go").Parse(`
nit: tectonic-ingress-03-pull.json (not .go)
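For context, here is a self-contained sketch of the pattern these constants follow, with the template named tectonic-ingress-03-pull.json as suggested; the manifest body, the templateData struct, and this applyTemplateData helper are illustrative stand-ins rather than the installer's real definitions:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// PullTectonicIngress mirrors the constant above, using the .json name from
// the review nit. The manifest body is a simplified stand-in.
var PullTectonicIngress = template.Must(template.New("tectonic-ingress-03-pull.json").Parse(`{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {"name": "coreos-pull-secret", "namespace": "tectonic-ingress"},
  "type": "kubernetes.io/dockercfg",
  "data": {".dockercfg": "{{.PullSecret}}"}
}`))

// templateData is an illustrative subset of the data fed to manifest templates.
type templateData struct {
	PullSecret string
}

// applyTemplateData sketches the helper referenced in the diff: execute the
// template against the data and return the rendered manifest bytes.
func applyTemplateData(tmpl *template.Template, data interface{}) []byte {
	buf := &bytes.Buffer{}
	if err := tmpl.Execute(buf, data); err != nil {
		panic(err)
	}
	return buf.Bytes()
}

func main() {
	rendered := applyTemplateData(PullTectonicIngress, templateData{PullSecret: "base64-encoded-dockercfg"})
	fmt.Println(string(rendered))
}
```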
Force-pushed from 50b7358 to 0de8b65 (compare).
/test e2e-aws
The e2e error was:
I don't know what's going on there; it sounds like the failure message is of the wrong type? Maybe something broke, and then we hit some error-handling bug while trying to complain about it, and this message is from the error-handling bug?
That error is new and exciting - that may be a bug in the wait command - @deads2k
/retest
Running the tests locally I see this error:
$ make test-extended SUITE=core FOCUS="Secrets should be consumable from pods in volume with defaultMode set" TEST_EXTENDED_ARGS="-provider=aws -gce-zone=us-east-1"
test/extended/core.sh
[WARNING] REMINDER, EXTENDED TESTS NO LONGER START A CLUSTER.
[WARNING] THE CLUSTER REFERENCED BY THE 'KUBECONFIG' ENV VAR IS USED.
[INFO] Running tests against existing cluster...
[INFO] Running parallel tests N=<default> with focus Secrets should be consumable from pods in volume with defaultMode set
I1001 18:25:09.523675 9653 test.go:86] Extended test version v3.11.0-alpha.0+7e70dc8-1025
Running Suite: Extended
=======================
Random Seed: 1538443511 - Will randomize all specs
Will run 459 specs
Running in parallel across 5 nodes
Oct 1 18:25:11.551: INFO: >>> kubeConfig: /home/adahiya/go/src/github.com/openshift/installer/dev/auth/kubeconfig
Oct 1 18:25:11.553: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 1 18:25:12.015: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 1 18:25:12.374: INFO: 36 / 36 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 1 18:25:12.374: INFO: expected 7 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Oct 1 18:25:12.454: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 1 18:25:12.454: INFO: Dumping network health container logs from all nodes...
Oct 1 18:25:12.532: INFO: e2e test version: v1.11.0+d4cacc0
Oct 1 18:25:12.606: INFO: kube-apiserver version: v1.11.0+d4cacc0
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 1 18:25:12.718: INFO: Running AfterSuite actions on all node
SS
------------------------------
Oct 1 18:25:12.718: INFO: Running AfterSuite actions on all node
Oct 1 18:25:12.719: INFO: Running AfterSuite actions on all node
Oct 1 18:25:12.719: INFO: Running AfterSuite actions on all node
[sig-storage] Secrets
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
[BeforeEach] [Top Level]
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Oct 1 18:25:12.674: INFO: >>> kubeConfig: /home/adahiya/go/src/github.com/openshift/installer/dev/auth/kubeconfig
E1001 18:25:15.784881 10749 memcache.go:147] couldn't get resource list for packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request
STEP: Building a namespace api object
Oct 1 18:25:17.212: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Oct 1 18:25:18.185: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Oct 1 18:25:18.359: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-xzbls
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
STEP: Creating secret with name secret-test-05e4423f-c5e2-11e8-8f95-8c1645754cdc
STEP: Creating a pod to test consume secrets
Oct 1 18:25:18.859: INFO: Waiting up to 5m0s for pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc" in namespace "e2e-tests-secrets-xzbls" to be "success or failure"
Oct 1 18:25:18.935: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 75.721725ms
Oct 1 18:25:21.013: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153406965s
Oct 1 18:25:23.090: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231171447s
Oct 1 18:25:25.173: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313770803s
Oct 1 18:25:27.249: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389623067s
Oct 1 18:25:29.327: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.467949868s
Oct 1 18:25:31.404: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.545096869s
STEP: Saw pod success
Oct 1 18:25:31.404: INFO: Pod "pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc" satisfied condition "success or failure"
Oct 1 18:25:31.480: INFO: Trying to get logs from node ip-10-0-140-147.ec2.internal pod pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc container secret-volume-test: <nil>
STEP: delete the pod
Oct 1 18:25:31.757: INFO: Waiting for pod pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc to disappear
Oct 1 18:25:31.834: INFO: Pod pod-secrets-05f04b41-c5e2-11e8-8f95-8c1645754cdc no longer exists
[AfterEach] [sig-storage] Secrets
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Oct 1 18:25:31.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xzbls" for this suite.
Oct 1 18:35:32.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 1 18:35:32.812: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:32.812: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:35.763: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:35.763: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:38.719: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:38.719: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:41.663: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:41.663: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:44.612: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:44.612: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:47.562: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:47.562: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:50.514: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:50.514: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:53.461: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:53.461: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:56.410: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:56.410: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:35:59.361: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:35:59.361: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:36:02.312: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:36:02.312: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:36:05.262: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:36:05.262: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:36:08.210: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"packages.apps.redhat.com", Version:"v1alpha1"}
Oct 1 18:36:08.210: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request, retrying in 2s.
Oct 1 18:36:08.210: INFO: Couldn't delete ns: "e2e-tests-secrets-xzbls": timed out waiting for the condition (&errors.errorString{s:"timed out waiting for the condition"})
• Failure in Spec Teardown (AfterEach) [655.538 seconds]
[sig-storage] Secrets
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] [AfterEach]
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
Oct 1 18:36:08.210: Couldn't delete ns: "e2e-tests-secrets-xzbls": timed out waiting for the condition (&errors.errorString{s:"timed out waiting for the condition"})
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:319
------------------------------
Oct 1 18:36:08.216: INFO: Running AfterSuite actions on all node
Oct 1 18:36:08.217: INFO: Running AfterSuite actions on node 1
Summarizing 1 Failure:
[Fail] [sig-storage] Secrets [AfterEach] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/adahiya/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:319
Ran 1 of 459 Specs in 656.711 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 458 Skipped
Ginkgo ran 1 suite in 10m57.093580512s
Test Suite Failed
[INFO] Running serial tests with focus Secrets should be consumable from pods in volume with defaultMode set
I1001 18:36:08.482562 28490 test.go:86] Extended test version v3.11.0-alpha.0+7e70dc8-1025
[WARNING] No tests were selected
I1001 18:36:10.226380 29560 test.go:86] Extended test version v3.11.0-alpha.0+7e70dc8-1025
[WARNING] No tests were selected
And running describe on
Seems like no
Is this also picking up the change to kco that removed the OpenShift API server?
So should we block the package server from being installed by the CVO temporarily? We can't easily disable it in the payload; can we do something similar that lets us selectively disable them from the CVO?
@smarterclayton I'll work on that today. But in the meantime, is it possible to ask the OLM team to fix this?
Absolutely, let's just open a PR to temporarily remove the label from their Dockerfile (remove LABEL io.openshift.release.operator).
operator-framework/operator-lifecycle-manager#496 merged. Now we wait for the release image to be mirrored.
/retest
Oh, we need to change the job to pass in the release image for the PR to the installer (which avoids that wait and ensures we're testing the right inputs).
I'll make the change to the job to use OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE after this merges.
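For reference, a hedged sketch of the override being discussed — the installer reads OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE from the environment and uses it in place of the default release image; the function name and the default value below are illustrative, not the installer's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// releaseImage returns the override from the environment when the CI job
// (or a developer) sets it, and falls back to the pinned default otherwise.
func releaseImage(pinnedDefault string) string {
	if override := os.Getenv("OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE"); override != "" {
		return override
	}
	return pinnedDefault
}

func main() {
	// "registry.example.com/ocp/release:latest" is a placeholder, not a real pin.
	fmt.Println(releaseImage("registry.example.com/ocp/release:latest"))
}
```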
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: abhinavdahiya, crawford
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
Please review the full test history for this PR and help us cut down flakes.
These escaped the great purge of 0c6d53b (*: remove bazel, 2018-09-24, openshift#342). kubernetes/BUILD.bazel snuck in with 70ea0e8 (tests/smoke/vendor: switch from glide to dep, 2018-09-28, openshift#380), and tectonic/BUILD.bazel snuck in with e2d9fd3 (manifests: make tectonic/ flat dir, 2018-09-25, openshift#330). I'd guess both were due to rebases from commits originally made before openshift#342 landed.
Catching up with e2d9fd3 (manifests: make tectonic/ flat dir, 2018-09-25, openshift#330).
We've had config-operator rendering on the bootstrap node since 9994d37 (bootkube: render config.openshift.io resources, 2019-02-12, openshift#1187). The motivation for that commit isn't clear to me; [1] suggests maybe keeping CRDs out of the installer repository. But we have run a rendered cluster-version operator on the bootstrap machine since 63e2750 (ignition: add CVO render to bootkube.sh, 2018-09-27, openshift#330), so we should be able to push resources at bootstrap time via the CVO. Remove CRDs from the config rendering, so we can see if things work without the config-rendered cluster-bootstrap pushes racing the bootstrap CVO pushes, or the config-rendered pushes not realizing they should filter out manifests annotated for capabilities that are not enabled.
[1]: openshift#1187 (comment)
/cc @crawford