Add an n1_highmem_8 nodepool to k8s-infra-prow-build to prep for migration #1172
Conversation
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: hasheddan, spiffxp
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Will track migration under #1173
# use an UBUNTU image instead. Keep parity with the existing google.com
# k8s-prow-builds/prow cluster by using the CONTAINERD variant
image_type = "UBUNTU_CONTAINERD"
machine_type = "n1-highmem-16"
Welp. Looks like this is gonna need a redo
There is a suspicion that switching to n1-highmem-16 nodes has caused more jobs-per-node to be scheduled, thus leading to more contention over resources that aren't accounted for by the scheduler (such as IOPS)
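The suspected effect can be sketched with some back-of-the-envelope arithmetic (the job CPU request below is an assumption for illustration, not a measured value): the scheduler packs pods by CPU and memory requests, so doubling the machine size roughly doubles the jobs packed onto each node, while per-node resources the scheduler does not account for (such as disk IOPS) do not scale the same way.

```python
def jobs_per_node(node_vcpus: float, job_cpu_request: float) -> int:
    """Upper bound on jobs the scheduler can pack onto one node by CPU."""
    return int(node_vcpus // job_cpu_request)

# Hypothetical prow job requesting ~3.5 vCPUs:
small = jobs_per_node(node_vcpus=8, job_cpu_request=3.5)    # n1-highmem-8  -> 2 jobs
large = jobs_per_node(node_vcpus=16, job_cpu_request=3.5)   # n1-highmem-16 -> 4 jobs

# Twice the jobs now contend for one node's worth of unscheduled resources
# (e.g. boot-disk IOPS), which is the suspected source of the flakiness.
print(small, large)
```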
This PR is the first step in undoing that change, by moving us back to n1-highmem-8s.
I am concerned that moving back to more, smaller nodes will cause us to hit our IP quota, and we'll have to shift jobs off of this cluster until we can find some other way to mitigate (ref: #1132 (comment))
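The IP-quota concern follows from node count: halving the machine size means roughly twice as many nodes for the same total capacity, and each GKE node consumes IPs (a node IP plus a per-node pod range), so IP usage scales with node count. A minimal sketch, assuming a hypothetical total capacity of 320 vCPUs:

```python
def nodes_needed(total_vcpus: int, vcpus_per_node: int) -> int:
    """Nodes required for a target capacity, rounded up (ceiling division)."""
    return -(-total_vcpus // vcpus_per_node)

# Hypothetical cluster capacity of 320 vCPUs:
n16 = nodes_needed(320, 16)  # n1-highmem-16 -> 20 nodes
n8 = nodes_needed(320, 8)    # n1-highmem-8  -> 40 nodes

# Twice the nodes means roughly twice the IP consumption against quota.
print(n16, n8)
```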