k8s_scale fixes #100
Conversation
Based on #35 - merge that one first.
Codecov Report

@@            Coverage Diff             @@
##           master     #100      +/-   ##
==========================================
- Coverage    42.98%   42.88%   -0.11%
==========================================
  Files            3        3
  Lines          542      541       -1
  Branches       110      110
==========================================
- Hits           233      232       -1
  Misses         266      266
  Partials        43       43

Continue to review full report at Codecov.
Force-pushed from f3f534f to 441397b
Latest test is giving:
...which is weird, because it seems like
Move wait logic out of raw and into common and use that logic in scale
Fix a few broken wait condition cases highlighted by scaling up and down
Move scale-related tests into dedicated test task file
Additional service related tests
Force-pushed from 441397b to beebe98
@geerlingguy the bit where it was failing was adding a port to an existing service while also setting NodePort at the same time (I suspect I mis-merged two separate changes at some point) - the ports just need to be consistently all NodePorts or all ClusterIPs. I've changed it to the latter.
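To illustrate the "all ClusterIPs" shape, here is a rough sketch only - the service name, namespace, selector and port numbers are made up and not the actual test values:

```yaml
- name: Add a second port to an existing Service, keeping everything ClusterIP
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: example-svc        # hypothetical name
        namespace: testing       # hypothetical namespace
      spec:
        type: ClusterIP          # no nodePort fields mixed in
        selector:
          app: example
        ports:
          - name: http
            port: 8080
            targetPort: 8080
          - name: metrics
            port: 9090
            targetPort: 9090
```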
See https://github.com/engineerd/setup-kind/releases/tag/v0.4.0 for details of the EXDEV issue.
Now this is failing with the same thing mentioned in #104 —
Yeah, there's definitely a race condition, and I'm not 100% sure on the best course of action - Kubernetes is doing the right thing, and the tests are doing the right thing, just perhaps too soon. I might just have to improve the filters to ignore an unready pod whose container has terminated but which is not yet in a terminating state (that bit is weird).
LGTM on initial review—I like that we're removing net 93 LoC in the python modules and fixing bugs at the same time!
👍 looks great, removes a lot of cruft and moves some common logic from raw -> common.
Merging—it also seems to fix the instability with our CI tests, so I can stop getting nightly fail notifications from the scheduled CI run :)
SUMMARY
Fix scale wait and add tests
Move wait logic out of raw and into common and use that logic in scale
Fix a few broken wait condition cases highlighted by scaling
up and down
Move scale-related tests into dedicated test task file
ISSUE TYPE
COMPONENT NAME
k8s_scale
ADDITIONAL INFORMATION
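As a rough sketch of how the fixed wait behaviour can be exercised (the resource name, namespace, replica count and timeout below are placeholders, not values taken from this PR):

```yaml
- name: Scale a Deployment down and wait for the replicas to settle
  k8s_scale:
    api_version: apps/v1
    kind: Deployment
    name: example-deployment   # hypothetical resource
    namespace: testing         # hypothetical namespace
    replicas: 1
    wait: yes
    wait_timeout: 120
```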