Rebuild/bump kubeadm CI image. #2228
Merged (1 commit into kubernetes:master on Mar 13, 2017)

Conversation

pipejakob (Contributor)

I wanted to rebuild this image to take advantage of some recent merges (#2094, #2179, #2182, #2183).

Based on my local testing, this should bring the kubeadm e2e job back to green.

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Mar 13, 2017
@krzyzacy self-assigned this on Mar 13, 2017
krzyzacy (Member)

/lgtm

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Mar 13, 2017
@pipejakob merged commit 865d8eb into kubernetes:master on Mar 13, 2017
krzyzacy (Member)

FYI, it seems the job is more broken now :-(

pipejakob (Contributor, Author)

What I'm seeing is that when I converted this job from Jenkins to prow (a while ago), I kept the same name, but prow restarted the job run numbering. Because new runs reuse old run numbers, they show stale logs from Jenkins runs back in January. Here's an example of a fresh run whose logs are actually from January, so they're not valid:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-gce/464

I've used gsutil to delete some old logs around the upcoming sequence numbers to see if that would let new runs upload their logs, but now I'm getting no logs for those runs:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-gce/467

I need to dig into why new runs are completely missing logs.
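For reference, the cleanup was along these lines; the bucket path is inferred from the gubernator URLs above, and the run numbers are only illustrative:

    # List the runs around the sequence numbers prow is about to reuse
    # (bucket path inferred from gubernator; run numbers illustrative).
    gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-gce/

    # Remove the stale Jenkins-era runs that collide with upcoming prow run numbers.
    gsutil -m rm -r gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-gce/464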

pipejakob (Contributor, Author)

To close the loop on this: I was able to capture logs from a live run of the job and found that GOOGLE_APPLICATION_CREDENTIALS wasn't being set. I used kubectl describe pod and verified that it wasn't a typo; some of the environment variables were simply missing. I have a PR with the fix now: #2246.
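For anyone retracing the check, it was roughly the following; the pod name and namespace here are placeholders:

    # Inspect the pod spec and look at the Environment section of each container
    # (pod name and namespace are placeholders).
    kubectl describe pod <e2e-job-pod> -n <namespace>

    # Print just the declared env var names to confirm GOOGLE_APPLICATION_CREDENTIALS is absent.
    kubectl get pod <e2e-job-pod> -n <namespace> \
      -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.env[*].name}{"\n"}{end}'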
