Enable nodelocal dnscache on prow build clusters #1680
Conversation
(Two review threads on infra/gcp/clusters/projects/k8s-infra-prow-build-trusted/prow-build-trusted/main.tf, since resolved and outdated.)
EDIT: Saw the discussion kubernetes/test-infra#20716
/assign @spiffxp @BenTheElder
It was because of this thread: kubernetes/test-infra#20716 (comment)
/hold
/approve
/lgtm
This isn't actually updating k8s-infra-prow-build, but that's fine by me; I'll see what the rollout looks like on k8s-infra-prow-build-trusted first.
pod_namespace               = "test-pods"      // MUST match whatever prow is configured to use when it schedules to this cluster
cluster_sa_name             = "prow-build"     // Name of the GSA and KSA that pods use by default
boskos_janitor_sa_name      = "boskos-janitor" // Name of the GSA and KSA used by boskos-janitor
enable_node_local_dns_cache = "true"           // Enable NodeLocal DNSCache
nit: this is unused right now
cluster_sa_name             = "prow-build-trusted" // Name of the GSA and KSA that pods use by default
gcb_builder_sa_name         = "gcb-builder"        // Name of the GSA and KSA that pods use to be allowed to run GCB builds and push to GCS buckets
prow_deployer_sa_name       = "prow-deployer"      // Name of the GSA and KSA that pods use to be allowed to deploy to prow build clusters
enable_node_local_dns_cache = "true"               // Enable NodeLocal DNSCache
nit to self: I guess I meant to keep the locals block for "vars that are going to be reused by multiple resources" vs. "all configurable things go up here", but I didn't comment as such (or do so consistently; e.g. bigquery_location doesn't belong up here by that convention)
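For what it's worth, the enable_node_local_dns_cache flag above presumably maps onto GKE's NodeLocal DNSCache addon on the underlying cluster resource; the same addon can be toggled on an existing cluster outside Terraform with gcloud. A minimal sketch, with a hypothetical cluster name and region:

# Enable the NodeLocal DNSCache addon on an existing GKE cluster (names are hypothetical).
# Existing nodes still have to be re-created before the cache actually runs on them.
gcloud container clusters update prow-build-trusted \
  --region=us-central1 \
  --update-addons=NodeLocalDNS=ENABLED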
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: BenTheElder, chaodaiG, spiffxp
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/hold cancel
Took ~26min to apply (seems long, maybe due to this being a regional cluster?)
Meanwhile, from https://cloud.google.com/kubernetes-engine/docs/how-to/nodelocal-dns-cache#enabling (on an existing cluster the change only takes effect as nodes are re-created):
We are using maintenance windows.
I'm probably going to force an upgrade to avoid waiting.
Per https://cloud.google.com/kubernetes-engine/docs/how-to/nodelocal-dns-cache#verifying_that_is_enabled, check that it's enabled:
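Per the linked docs, node-local-dns runs as a DaemonSet in kube-system, so the check is roughly:

# List the node-local-dns pods; with the cache active there should be one per node
kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide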
So, not running. Confirming that it's been configured for the cluster:
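One way to confirm the cluster-level setting, again with a hypothetical cluster name and region:

# With the addon enabled, addonsConfig in the output includes dnsCacheConfig with enabled: true
gcloud container clusters describe prow-build-trusted --region=us-central1 | grep -A1 dnsCacheConfig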
So yeah, time to force nodes to upgrade
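Forcing that means re-creating the nodes, e.g. by upgrading each node pool in place rather than waiting for the maintenance window; a sketch with hypothetical pool and version values:

# Re-create nodes by "upgrading" the pool to the version it is already running
gcloud container clusters upgrade prow-build-trusted \
  --region=us-central1 \
  --node-pool=pool1 \
  --cluster-version=1.17.17-gke.1101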
Using …
Just based on how long this has taken thus far, I suspect …
That sounds about right to me. And yes, I have seen v1.17 in the stable channel, so batching up makes sense to me.
Deployed to k8s-infra-prow-build-trusted. Need to see some job traffic to verify things are still working as expected.
Timestamps if anyone needs to correlate disruptive behavior during this time.
Being able to filter by cluster on deck would be nice; in the meantime:

curl 'https://prow.k8s.io/prowjobs.js?omit=pod_spec,decoration_config' > prowjobs.js
<prowjobs.js jq \
  '.items
  | map(
      select(
        .spec.cluster == "k8s-infra-prow-build-trusted"
        and .status.pendingTime >= "2021-02-19T14"
      )
    )
  | sort_by(.status.pendingTime)
  | map(
      .status | {time: .pendingTime, state, url}
    )'

which returns:

[
{
"time": "2021-02-19T14:16:35Z",
"state": "failure",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-csi-driver-smb-push-images/1362768045499486208"
},
{
"time": "2021-02-19T14:26:27Z",
"state": "success",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/kops-postsubmit-push-to-staging/1362770527264968704"
},
{
"time": "2021-02-19T14:38:13Z",
"state": "success",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-k8sio-gcr-prod-backup/1362773489949347840"
},
{
"time": "2021-02-19T15:08:13Z",
"state": "success",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-k8sio-image-promo/1362781036726980608"
},
{
"time": "2021-02-19T15:10:26Z",
"state": "success",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-k8sio-image-promo/1362781597782249472"
},
{
"time": "2021-02-19T15:44:26Z",
"state": "success",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-csi-driver-host-path-push-images/1362790152912506880"
},
{
"time": "2021-02-19T15:54:28Z",
"state": "success",
"url": "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-cluster-api-push-images/1362792678915313664"
}
]
I'm satisfied, will open a followup PR to apply this to k8s-infra-prow-build.
Opened #1686
Thank you both!