chore(NA): rebalance x-pack cigroups #85797
Conversation
@elasticmachine merge upstream
Pinging @elastic/kibana-operations (Team:Operations)
Current times on master:
@brianseeders raised a concern last time about limited resources when adding another CI group. If that is still the case, it seems like we could drastically decrease the number of OSS ciGroups to compensate.
LGTM -- thank you for all your efforts here in keeping the build times down @mistic! 🙂
LGTM
LGTM
@elasticmachine merge upstream
Group change in x-pack/test/encrypted_saved_objects_api_integration/tests/index.ts
LGTM! Thanks for rebalancing!
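(For context, CI group membership in these x-pack suites is assigned with a tag in the suite's top-level index file. The sketch below approximates that pattern; the loaded file and the group number are illustrative, not the exact diff in this PR.)

```ts
// x-pack/test/encrypted_saved_objects_api_integration/tests/index.ts (sketch)
import { FtrProviderContext } from '../ftr_provider_context';

export default function ({ loadTestFile }: FtrProviderContext) {
  describe('encryptedSavedObjects', function () {
    // Moving a suite between CI groups is done by changing this tag.
    // 'ciGroup12' is illustrative of the rebalanced assignment, not the
    // exact value in this PR.
    this.tags('ciGroup12');

    loadTestFile(require.resolve('./encrypted_saved_objects_api'));
  });
}
```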
This is primarily a problem on jobs that don't use the tasks framework (ES snapshots and code coverage, for example). In those jobs, OSS and x-pack ciGroups run on different machines, so reducing the number of OSS groups won't do anything for x-pack. We could make the machines a little bigger for those jobs if we need to.
…ana into rebalance-and-split-ci-groups
@mistic why did you add two new groups? I see ciGroup12 taking 38 minutes and ciGroup13 taking 25. Together that would be under the 1 hour average for the other ciGroups.
@tylersmalley the best I could do with only one added ciGroup was getting the CI to 1h57m on https://kibana-ci.elastic.co/job/elastic+kibana+pipeline-pull-request/94337/, and that would leave us with only a little room for new tests to be added. I think the best thing to do here is to add those two new groups until we find a different fix for those time-consuming tests.
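(To make that trade-off concrete, here is a small hypothetical TypeScript sketch, not part of the Kibana tooling, that applies the reasoning above: given per-group runtimes, it reports which groups exceed a target, showing why merging ciGroup12 and ciGroup13 would leave no headroom under the ~1 hour average.)

```ts
// Hypothetical helper using the numbers quoted in this thread.
interface CiGroup {
  name: string;
  minutes: number;
}

function overBudget(groups: CiGroup[], targetMinutes: number): CiGroup[] {
  return groups.filter((group) => group.minutes > targetMinutes);
}

const groups: CiGroup[] = [
  { name: 'xpack-ciGroup12', minutes: 38 },
  { name: 'xpack-ciGroup13', minutes: 25 },
  // A merged group would run roughly 38 + 25 = 63 minutes, just over the
  // ~60 minute average, leaving no room for new tests to be added.
  { name: 'xpack-ciGroup12+13 (merged)', minutes: 63 },
];

console.log(overBudget(groups, 60).map((group) => group.name));
// -> [ 'xpack-ciGroup12+13 (merged)' ]
```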
Alerting changes LGTM
# Conflicts: # vars/kibanaCoverage.groovy
💚 Build Succeeded

Metrics [docs]
Distributable file count
History
To update your PR or re-run it, just comment with: @elasticmachine merge upstream
# Conflicts: # vars/kibanaCoverage.groovy Co-authored-by: Tiago Costa <[email protected]>
This reverts commit 1e3a483.
Reverted. This led to us hitting the memory ceiling on the ES verification job; increased those instances in #86192, which resulted in getting
After #85191 our CI was again over 2h20m and seemed very unstable, reaching the overall timeout a couple of times, which generated a lot of failures. This change intends to rebalance the ciGroups by introducing a new ciGroup12 and getting the CI back to around 1h40m. I'm also expecting the CI to become more stable once this change is merged.