Identify quota changes needed for scalability jobs, create pool of scalability projects #851
I created the project. It got stuck on CPU quota. I was able to retrieve project info for it, and I filed a request to raise the quota.
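For reference, a minimal sketch of pulling that quota info with gcloud (the project name below is a placeholder, not the actual project):

```sh
# Placeholder name for the newly created scalability project (illustrative only)
PROJECT=k8s-infra-e2e-scale-project

# Project-wide quotas:
gcloud compute project-info describe --project="$PROJECT" --format=json | jq '.quotas'

# Per-region quotas, e.g. CPUs and in-use addresses in us-east1:
gcloud compute regions describe us-east1 --project="$PROJECT" --format=json \
  | jq '.quotas[] | select(.metric == "CPUS" or .metric == "IN_USE_ADDRESSES")'
```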
The quota got bumped during the job run, and the job was able to complete successfully. Open questions:
---
Jobs that use scalability-project (a sketch for enumerating these follows the list):
- 100 node release-blocking tests (definitely migrate these)
- 100 node kubemark tests (these look like CI variants of merge-blocking jobs, so make room for them)
- experiment tests (I would hold off on these for now)
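One way to enumerate these jobs, assuming they select a boskos pool via kubetest's --gcp-project-type flag (an assumption on my part, not verified against every job config):

```sh
# From a test-infra checkout: list job configs that draw from the scalability-project pool
grep -rl -- '--gcp-project-type=scalability-project' config/jobs/
```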
---
I did a visual inspection of this across all regions, then filtered down to just the two regions that had meaningful differences:

```sh
for p in $(<~/w/kubernetes/test-infra/config/prow/cluster/boskos-resources.yaml yq -r '.resources[] | select(.type=="scalability-project").names[]'); do
  echo "$p..."
  # Compare each pool project's per-region quota summary against the reference
  # project; keep only the regions of interest (the "---" lines grep also keeps
  # are diff's hunk separators)
  diff --ignore-all-space \
    <(gcloud compute regions list --project="$p") \
    <(gcloud compute regions list --project=k8s-infra-e2e-gce-project) |
    grep -E "us-east1|us-central1|---"
done
```

(This needs bash for the process substitutions.)
So I'm inclined to suggest we stick with 125 CPU / in-use addresses for us-east1. The existing pool is 16 projects, but there are 5 jobs I'm not sure we want to move over yet, so I'm going to set up 10 projects.
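That pool count can be double-checked against boskos-resources.yaml with the same yq filter used above (a sketch, run from a test-infra checkout):

```sh
# Count the projects currently registered in the scalability pool
<config/prow/cluster/boskos-resources.yaml yq -r \
  '.resources[] | select(.type=="scalability-project").names[]' | wc -l
```

---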
Blocked until #852 is resolved.

---
No longer blocked. I've created a pool of 5 projects to start with via changes in #898.

---
With an eye toward kubernetes/test-infra#18550, and based on visual inspection of https://monitoring.prow.k8s.io/d/wSrfvNxWz/boskos-resource-usage?orgId=1&from=now-90d&to=now:

Currently k8s-prow-build's boskos has two pools:

Currently k8s-infra-prow-build has:
It's not clear to me whether the presubmit projects have different quotas than the regular projects.
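One way to answer that would be to diff the quotas directly; a sketch, where both project names are placeholders:

```sh
# Compare project-wide quotas between a presubmit project and a regular pool project
diff \
  <(gcloud compute project-info describe --project="$PRESUBMIT_PROJECT" --format=json | jq '.quotas') \
  <(gcloud compute project-info describe --project="$REGULAR_PROJECT" --format=json | jq '.quotas')
```

---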
Added a canary job via kubernetes/test-infra#19049. https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/92316/pull-kubernetes-e2e-gce-100-performance-canary/1299860480084414464/ confirmed that the existing scalability-project pool will work for presubmits.

Based on visual inspection of https://monitoring.prow.k8s.io/d/wSrfvNxWz/boskos-resource-usage?orgId=1&from=now-90d&to=now, after removing a kubemark presubmit from the set of merge-blocking jobs for kubernetes (ref: kubernetes/test-infra#18788):

k8s-prow-build's boskos has two pools:

k8s-infra-prow-build has:
Going to provision the scalability pool up to 30 projects (5 + 17 + 3 = 25, plus 5 overhead).
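For illustration, provisioning a pool of that size might look like the loop below; the naming scheme, count, and folder id are assumptions (the real projects are managed by the kubernetes/k8s.io infra tooling):

```sh
# Hypothetical loop to create 30 pool projects under a GCP folder
FOLDER_ID=000000000000   # placeholder folder id
for i in $(seq -w 1 30); do
  gcloud projects create "k8s-infra-e2e-boskos-scale-${i}" --folder="$FOLDER_ID"
done
```

---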
Opened #1192 to grow the pool.

---
@spiffxp: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

---
The default quotas for an e2e project (e.g. k8s-infra-e2e-gce-project) are insufficient to run ci-kubernetes-e2e-gci-gce-scalability. Currently this job runs in the google.com k8s-prow-builds cluster, using a project from that boskos' scalability-project pool.
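For reference, this is roughly how such a job leases a project from the pool (a sketch assuming kubetest's boskos integration; the flag values are illustrative):

```sh
# Instead of a fixed --gcp-project, kubetest can lease a project of a given
# boskos resource type for the duration of the run
kubetest --provider=gce \
  --gcp-project-type=scalability-project \
  --up --test --down
```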