Remove ws-scheduler #7430
Conversation
Force-pushed from 970eeb8 to 2ebdabe
Codecov Report
@@            Coverage Diff             @@
##             main    #7430       +/-   ##
===========================================
+ Coverage   11.46%   29.68%   +18.21%
===========================================
  Files          20      117       +97
  Lines        1177    19290    +18113
===========================================
+ Hits          135     5727     +5592
- Misses       1039    13074    +12035
- Partials        3      489      +486
Flags with carried forward coverage won't be shown. Click here to find out more.
Continue to review full report at Codecov.
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Force-pushed from 2b58854 to 0126c9d
Force-pushed from 0126c9d to e77577c
/werft run
👍 started the job as gitpod-build-aledbf-scheduler.4
/lgtm
/approve
This does raise an interesting question with regard to the Installer: unlike Helm, the Installer doesn't remove Kubernetes objects that are no longer used. That means an installation that deploys this change will still have its ws-scheduler resources in the cluster.
@csweichel should we work out a way for a deployment to remove unused resources when running kubectl apply -f gitpod.yaml?
(for the record, I don't have any thoughts on how we might achieve that within the existing Installer workflow)
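[Editorial note: a minimal sketch of two generic Kubernetes mechanisms that could be evaluated for this; neither is part of the current Installer workflow, and the gitpod namespace, the app=gitpod label, and the ws-scheduler resource names below are illustrative assumptions, not the actual manifests.]

```sh
# Option 1: one-off manual cleanup of the now-unused component after upgrading.
# Resource name and namespace are assumed for illustration.
kubectl delete deployment ws-scheduler -n gitpod --ignore-not-found

# Option 2: kubectl's generic pruning mechanism, which deletes previously
# applied objects that no longer appear in the manifest, matched by a label
# selector. The app=gitpod label is an assumption.
kubectl apply -f gitpod.yaml --prune -l app=gitpod
```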
LGTM label has been added. Git tree hash: acabba8e62748faa1d2e0b8e65b6f6ca8b4494f1
/approve
we'll want to eventually implement the …
/lgtm
/assign @jankeromnes
/assign @geropl @aledbf
Awesome that we moved beyond this... crutch. 🎉 (also: farewell 🥲) Without looking at the changes in detail (it looks like a plain removal), I have two questions:
The idea here is to rely on the standard Kubernetes scheduler or use one of the scheduler-plugins. For the majority of scenarios, the standard scheduler is the right choice.
Since last week, the gen27 workspace clusters have been running without ghosts and …
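[Editorial note: as an illustration of what relying on the standard scheduler means in practice, the hypothetical check below lists which scheduler placed the workspace pods; the gitpod namespace and component=workspace label are assumptions for illustration, not the actual Gitpod labels.]

```sh
# List workspace pods together with the scheduler that placed them. With the
# custom ws-scheduler removed, the SCHEDULER column should show the built-in
# "default-scheduler". Namespace and label selector are assumed.
kubectl get pods -n gitpod -l component=workspace \
  -o custom-columns=NAME:.metadata.name,SCHEDULER:.spec.schedulerName
```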
Thx, will read up on this. 🙂
@aledbf Alright, that'll work. Will add a notice to the next deployment (tomorrow morning).
/lgtm
/hold
Holding to ensure we do not merge before tomorrow's webapp deployment. Not strictly necessary, but it keeps things simple in case something else goes haywire. Will remove the hold once the deployment has gone smoothly.
/approve no-issue
Force-pushed from e77577c to c6fdc8f
Force-pushed from c6fdc8f to 1f516a3
/lgtm
LGTM label has been added. Git tree hash: 5507bb3a1677dbfacaa56f64e1e0c5485f98301c
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: csweichel, geropl, iQQBot, MrSimonEmms
Associated issue requirement bypassed by: geropl
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
webapp deployment happened - let's do this
/hold cancel
Description
Remove custom ws-scheduler component
How to test
Release Notes