
*: mirror tectonicClusterID as openshiftClusterID #817

Merged
merged 1 commit into from
Dec 11, 2018
Conversation

staebler
Contributor

@staebler staebler commented Dec 7, 2018

As part of removing references to Tectonic, the tectonicClusterID tags will
be changed to openshiftClusterID. This is the first step: it creates the
openshiftClusterID tag, which will be laid down alongside the existing
tectonicClusterID tag. After all usages of tectonicClusterID in other
repos are removed, tectonicClusterID itself will be removed, leaving only
openshiftClusterID.

https://jira.coreos.com/browse/CORS-878
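The dual-tag transition described above can be sketched roughly as follows. This is a minimal illustration, not the installer's actual code: `clusterTags` and its signature are hypothetical.

```go
package main

import "fmt"

// clusterTags returns the tag set applied to cluster resources during the
// transition: the new openshiftClusterID tag is laid down alongside the
// legacy tectonicClusterID tag, carrying the same value. Once no other
// repo reads tectonicClusterID, that entry can simply be dropped here.
// (Hypothetical helper for illustration only.)
func clusterTags(clusterID string) map[string]string {
	return map[string]string{
		"tectonicClusterID":  clusterID, // legacy tag, removed in a later step
		"openshiftClusterID": clusterID, // new tag that will replace it
	}
}

func main() {
	fmt.Println(clusterTags("abc-123"))
}
```

Writing both keys with the same value keeps old consumers working while new consumers migrate to the openshiftClusterID key.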

@openshift-ci-robot openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Dec 7, 2018
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 7, 2018
@staebler
Contributor Author

staebler commented Dec 7, 2018

/test e2e-aws

@wking
Member

wking commented Dec 7, 2018

/lgtm
/retest

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Dec 7, 2018
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: staebler, wking


@wking
Member

wking commented Dec 7, 2018

e2e-aws:

Waiting for deployment "router-default" rollout to finish: 0 of 1 updated replicas are available...
error: deployment "router-default" exceeded its progress deadline
error openshift-ingress/deploy/router-default did not come up
error: deployment "router-default" exceeded its progress deadline
error openshift-ingress/deploy/router-default did not come up
error: deployment "router-default" exceeded its progress deadline
error openshift-ingress/deploy/router-default did not come up
error: deployment "router-default" exceeded its progress deadline
timeout waiting for openshift-ingress/deploy/router-default to be available

/retest

@wking
Member

wking commented Dec 8, 2018

e2e-aws first error:

STEP: Destroying namespace "e2e-tests-kubectl-qqfzc" for this suite.
Dec  8 00:06:19.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec  8 00:06:20.911: INFO: namespace: e2e-tests-kubectl-qqfzc, resource: bindings, ignored listing per whitelist
Dec  8 00:06:22.650: INFO: Couldn't delete ns: "e2e-tests-kubectl-qqfzc": Error while fetching pod metrics for selector  in namespace "e2e-tests-kubectl-qqfzc": no pods to fetch metrics for (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Error while fetching pod metrics for selector  in namespace \"e2e-tests-kubectl-qqfzc\": no pods to fetch metrics for", Reason:"", Details:(*v1.StatusDetails)(nil), Code:500}})
Dec  8 00:06:22.651: INFO: Running AfterSuite actions on all node
Dec  8 00:06:22.652: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/framework/framework.go:319]: Dec  8 00:06:22.650: Couldn't delete ns: "e2e-tests-kubectl-qqfzc": Error while fetching pod metrics for selector  in namespace "e2e-tests-kubectl-qqfzc": no pods to fetch metrics for (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Error while fetching pod metrics for selector  in namespace \"e2e-tests-kubectl-qqfzc\": no pods to fetch metrics for", Reason:"", Details:(*v1.StatusDetails)(nil), Code:500}})

Dec 08 00:05:48.674 I ns=openshift-monitoring deployment=prometheus-operator Scaled up replica set prometheus-operator-698c545957 to 1
Dec 08 00:05:48.674 I ns=openshift-monitoring replicaset=prometheus-operator-698c545957 Created pod: prometheus-operator-698c545957-k96wl
Dec 08 00:05:48.689 I ns=openshift-monitoring pod=prometheus-operator-698c545957-k96wl Successfully assigned openshift-monitoring/prometheus-operator-698c545957-k96wl to ip-10-0-170-196.ec2.internal
Dec 08 00:05:50.496 I ns=openshift-monitoring pod=prometheus-operator-698c545957-k96wl pulling image "quay.io/coreos/prometheus-operator:v0.26.0"
Dec 08 00:05:53.474 I ns=openshift-monitoring pod=prometheus-operator-698c545957-k96wl Successfully pulled image "quay.io/coreos/prometheus-operator:v0.26.0"
Dec 08 00:05:53.673 I ns=openshift-monitoring pod=prometheus-operator-698c545957-k96wl Created container
Dec 08 00:05:53.704 I ns=openshift-monitoring pod=prometheus-operator-698c545957-k96wl Started container
Dec 08 00:05:54.495 I ns=openshift-monitoring deployment=prometheus-operator Scaled down replica set prometheus-operator-668984475d to 0
Dec 08 00:05:54.507 W ns=openshift-monitoring pod=prometheus-operator-668984475d-z4npb node=ip-10-0-133-54.ec2.internal graceful deletion within 30s
Dec 08 00:05:54.573 I ns=openshift-monitoring replicaset=prometheus-operator-668984475d Deleted pod: prometheus-operator-668984475d-z4npb
Dec 08 00:05:54.681 I ns=openshift-monitoring pod=prometheus-operator-668984475d-z4npb Killing container with id cri-o://prometheus-operator:Need to kill Pod
Dec 08 00:05:55.706 W ns=openshift-monitoring pod=prometheus-operator-668984475d-z4npb node=ip-10-0-133-54.ec2.internal invariant violation (bug): pod should not transition Running->Pending even when terminated
Dec 08 00:05:55.706 W ns=openshift-monitoring pod=prometheus-operator-668984475d-z4npb node=ip-10-0-133-54.ec2.internal container=prometheus-operator container stopped being ready
Dec 08 00:05:56.984 W ns=openshift-monitoring pod=prometheus-operator-668984475d-z4npb node=ip-10-0-133-54.ec2.internal pod has been pending longer than a minute
Dec 08 00:06:00.897 W ns=openshift-monitoring pod=prometheus-operator-668984475d-z4npb node=ip-10-0-133-54.ec2.internal deleted

failed: (41s) 2018-12-08T00:06:22 "[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure  [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] [Suite:openshift/smoke-4]"

Dunno what's up with that.

/retry

@staebler
Contributor Author

staebler commented Dec 8, 2018

/test e2e-aws

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.


@openshift-merge-robot openshift-merge-robot merged commit e810901 into openshift:master Dec 11, 2018
wking added a commit to wking/openshift-installer that referenced this pull request Dec 11, 2018
Through e810901 (Merge pull request openshift#817 from
staebler/add_openshiftClusterID, 2018-12-10, openshift#822).
@staebler staebler deleted the add_openshiftClusterID branch December 13, 2018 13:02
hardys pushed a commit to hardys/installer that referenced this pull request Dec 14, 2018
This worked previously since we only had one tag in the filter, but since
we added both via openshift#817 we exposed a bug: we should break after
matching any key, since we've already deleted the resource and don't need
to iterate over the remaining filter keys.
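The break-after-match fix described in that commit message can be sketched like this. It is an illustrative sketch, not the installer's real deletion code: `matchesFilter` and its signature are hypothetical.

```go
package main

import "fmt"

// matchesFilter reports whether a resource's tags match any key/value in
// the filter. As soon as one key matches, the resource is selected for
// deletion, so checking the remaining filter keys would be redundant --
// the early return is the "break after matching any key" fix. Before
// openshift#817 the filter held a single key, so the missing break was
// harmless; with both tectonicClusterID and openshiftClusterID present,
// continuing to iterate after a match exposed the bug.
// (Hypothetical helper for illustration only.)
func matchesFilter(resourceTags, filter map[string]string) bool {
	for key, value := range filter {
		if got, ok := resourceTags[key]; ok && got == value {
			return true // first match wins; stop iterating
		}
	}
	return false
}

func main() {
	filter := map[string]string{
		"tectonicClusterID":  "abc-123",
		"openshiftClusterID": "abc-123",
	}
	// A resource carrying only one of the two tags still matches.
	fmt.Println(matchesFilter(map[string]string{"tectonicClusterID": "abc-123"}, filter)) // prints "true"
}
```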