Setup greenhouse in k8s-infra-prow-build cluster #842
The n1-standard-32 puts k8s-infra-prow-build over its CPU quota. Looking at us-central1, here are the differences between k8s-infra-prow-build vs. k8s-prow-builds:
Submitted a quota request for 256 CPUs: current cluster mean CPU utilization is ~40%, which works out to ~518 CPUs; I chose a nearby power of two and cut it in half. That works out to 28 n1-highmem-8 nodes plus the n1-standard-32 for greenhouse.
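The node-count math above can be sanity-checked (machine shapes from the comment: n1-highmem-8 has 8 vCPUs, n1-standard-32 has 32):

```shell
# 28 n1-highmem-8 nodes (8 vCPUs each) plus one n1-standard-32 for greenhouse
highmem_cpus=$((28 * 8))   # 224
greenhouse_cpus=32
total=$((highmem_cpus + greenhouse_cpus))
echo "$total"              # 256, matching the requested quota
```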
Right. k8s-infra-prow-build is a regional cluster. So if I make a nodepool of size 1, it's actually 3 nodes, one in each zone. I don't think we want three greenhouses. I could:
I'm leaning toward B.
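For context on why a size-1 node pool becomes three nodes: in a regional GKE cluster, `--num-nodes` is the node count per zone. A hedged sketch of the problematic command (cluster and pool names here are illustrative, not the actual config):

```shell
# In a regional cluster, --num-nodes is PER ZONE, so this creates
# 3 nodes (one in each of the region's zones), not 1.
gcloud container node-pools create greenhouse \
  --cluster=prow-build \
  --region=us-central1 \
  --machine-type=n1-standard-32 \
  --num-nodes=1
```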
/assign @spiffxp
/assign @thockin @BenTheElder
Can I suggest:
I can't really test this, but I can work on it once #830 is merged.
Unfortunately I tried D (force the location to a zone),
Have greenhouse pod request 32 cores |
31-point-something to be fair :)
> On Wed, May 13, 2020 at 8:59 AM Aaron Crickenberger ***@***.***> wrote:
> Have greenhouse pod request 32 cores
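A minimal sketch of what "request 32 cores" (or just under, per the reply above) could look like in the greenhouse container spec; the exact values are assumptions, not the final manifest:

```yaml
# Hypothetical resources stanza for the greenhouse container.
# Requesting slightly under 32 cores leaves headroom for kubelet
# and kube-system daemons on the dedicated n1-standard-32 node.
resources:
  requests:
    cpu: "31"        # just under the node's 32 vCPUs
    memory: "16Gi"   # illustrative value
```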
I have it up and running and am waiting for #830 to land.
I've opened #885. Need to confirm the greenhouse-metrics service is set up properly.
/close
@spiffxp: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Greenhouse is a bazel remote caching setup used to provide faster build times for jobs that use bazel. PR jobs get the most benefit.
In the google.com build cluster (k8s-prow-builds/prow) it's deployed to a nodepool reserved specifically for it, tuned for max IOPS.
ref: #752, followup to: #830
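For illustration, this is roughly how a bazel job consumes a remote HTTP cache like greenhouse; the in-cluster service address shown is an assumption, not the real endpoint:

```shell
# Point bazel at a remote HTTP cache so repeated builds (especially
# PR jobs) reuse cached action outputs. The host/port below are
# placeholders for the actual greenhouse service address.
bazel build //... \
  --remote_http_cache=http://bazel-cache.default.svc.cluster.local:8080
```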