The Jan 8 nightly release of Tekton Pipelines failed, so the most recent release is Jan 7. I found the failing pod with:
kubectl --context dogfood get pods -l tekton.dev/pipelineRun=pipeline-release-nightly-4zc78-dv7rz
The tag-images step failed and the logs ended with:
+ for REGION in "${REGIONS[@]}"
+ for TAG in "latest" v20200109-74bf8b82bb
+ gcloud -q container images add-tag gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter@sha256:50ed8fc999392f349aea82f9ca7c7b85fd76a319ab3b3469c08f65a8f9cb96aa asia.gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter:latest
Created [asia.gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter:latest].
Updated [gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter@sha256:50ed8fc999392f349aea82f9ca7c7b85fd76a319ab3b3469c08f65a8f9cb96aa].
+ for TAG in "latest" v20200109-74bf8b82bb
+ gcloud -q container images add-tag gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter@sha256:50ed8fc999392f349aea82f9ca7c7b85fd76a319ab3b3469c08f65a8f9cb96aa asia.gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter:v20200109-74bf8b82bb
Created [asia.gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter:v20200109-74bf8b82bb].
Updated [gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/kubeconfigwriter@sha256:50ed8fc999392f349aea82f9ca7c7b85fd76a319ab3b3469c08f65a8f9cb96aa].
+ for IMAGE in "${BUILT_IMAGES[@]}"
+ IMAGE_WITHOUT_SHA=gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init:v20200109-74bf8b82bb
+ IMAGE_WITHOUT_SHA_AND_TAG=gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init
+ IMAGE_WITH_SHA=gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init@sha256:e475ad7e4fa81844f5a08c606be20f8af6830aa8aca1ef9e62684f5f65b472b9
+ gcloud -q container images add-tag gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init@sha256:e475ad7e4fa81844f5a08c606be20f8af6830aa8aca1ef9e62684f5f65b472b9 gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init:latest
Created [gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init:latest].
Updated [gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init@sha256:e475ad7e4fa81844f5a08c606be20f8af6830aa8aca1ef9e62684f5f65b472b9].
+ for REGION in "${REGIONS[@]}"
+ for TAG in "latest" v20200109-74bf8b82bb
+ gcloud -q container images add-tag gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init@sha256:e475ad7e4fa81844f5a08c606be20f8af6830aa8aca1ef9e62684f5f65b472b9 us.gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init:latest
ERROR: Error during upload of: us.gcr.io/tekton-nightly/github.com/tektoncd/pipeline/cmd/creds-init:latest
ERROR: gcloud crashed (V2DiagnosticException): response: {'status': '504', 'content-length': '82', 'x-xss-protection': '0', 'transfer-encoding': 'chunked', 'server': 'Docker Registry', '-content-encoding': 'gzip', 'docker-distribution-api-version': 'registry/2.0', 'cache-control': 'private', 'date': 'Thu, 09 Jan 2020 02:16:19 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json'}
Unable to determine the upload's size.: None
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
So most of the process actually completed, but partway through tagging the images, gcloud crashed, apparently after receiving a 504 from the registry.
I think this is probably just a flake (we'll see tomorrow), but if we see it again we might want to make the tagging steps more robust, e.g. with backoff and retry on that kind of transient error.
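A minimal sketch of what a backoff-and-retry wrapper for the tagging step could look like. The `retry_with_backoff` helper name is hypothetical, not part of the existing release scripts, and the commented `gcloud` usage line below is illustrative only:

```shell
#!/usr/bin/env bash
# Hypothetical retry helper: runs the given command, retrying with
# exponential backoff on failure (e.g. a transient 504 from the registry).
retry_with_backoff() {
  local max_attempts=5
  local delay=2
  local attempt=1
  until "$@"; do
    if (( attempt >= max_attempts )); then
      echo "Command failed after ${max_attempts} attempts: $*" >&2
      return 1
    fi
    echo "Attempt ${attempt} failed; retrying in ${delay}s..." >&2
    sleep "${delay}"
    delay=$(( delay * 2 ))
    attempt=$(( attempt + 1 ))
  done
}

# Illustrative usage inside the tagging loop (not the actual script):
# retry_with_backoff gcloud -q container images add-tag \
#   "${IMAGE_WITH_SHA}" "${REGION}.gcr.io/${IMAGE_PATH}:${TAG}"
```

Wrapping only the `add-tag` calls this way would let a single 504 cost a few seconds of retries instead of failing the whole nightly release.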