🌱 add MachineDeployment scale test #4647
Conversation
/test pull-cluster-api-e2e-full-main
Force-pushed from 624b212 to 021b34f
/test pull-cluster-api-e2e-full-main
Force-pushed from 021b34f to 91c44c1
I'm not familiar with the e2e costs and duration. If there were concerns about keeping those down, one could argue that this is implicitly tested by the cluster upgrade and drain timeout tests. Otherwise this is good to have, and lgtm.
/retest
In general I'm +1 to this change, since it will be better to have a clear signal on scale up and scale down than having to rely on the scaling in the upgrade test. lgtm besides the nits above.
Force-pushed from 1699757 to cc262b6
test/e2e/md_scale.go (Outdated)
WaitForMachineDeployments: input.E2EConfig.GetIntervals(specName, "wait-worker-nodes"),
}, clusterResources)

Expect(clusterResources.MachineDeployments[0].Spec.Replicas).ToNot(BeNil())
@JoelSpeed @CecileRobertMichon I added these lines; I think this is the right place to ensure the hygiene of the replicas property. If that property is nil, we want to know about it here.
In fact, after further investigation, the strict nil checks against Replicas aren't required as long as we use the Gomega Equal matcher, which includes its own nil checks and returns a specific error when the value is nil.
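As a hedged illustration of that point, here's a hypothetical standalone test (not from the PR) showing why Equal on a `*int32` replicas field doesn't need a separate nil guard:

```go
package example_test

import (
	"testing"

	. "github.com/onsi/gomega"
)

// Equal compares via reflect.DeepEqual, so two *int32 pointers to equal
// values match. Had one side been nil, the assertion would fail with a
// descriptive "expected ... to equal ..." message instead of panicking on
// a nil dereference — which is why the extra ToNot(BeNil()) is optional
// when the assertion itself uses Equal.
func TestReplicasEqual(t *testing.T) {
	g := NewWithT(t)

	want := int32(3)
	got := int32(3)
	g.Expect(&got).To(Equal(&want))
}
```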
@@ -70,6 +70,7 @@ func GetMachineDeploymentsByCluster(ctx context.Context, input GetMachineDeploym

deployments := make([]*clusterv1.MachineDeployment, len(deploymentList.Items))
for i := range deploymentList.Items {
Expect(deploymentList.Items[i].Spec.Replicas).ToNot(BeNil())
@JoelSpeed @CecileRobertMichon more Replicas non-nil hygiene
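A hypothetical completion of the loop in the diff above (the append line and the import path are assumptions based on the API version in use at the time, not the merged source):

```go
package framework

import (
	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4" // assumed API version
)

// collectDeployments asserts replica hygiene while collecting, so callers
// can safely dereference Spec.Replicas later.
func collectDeployments(deploymentList *clusterv1.MachineDeploymentList) []*clusterv1.MachineDeployment {
	deployments := make([]*clusterv1.MachineDeployment, len(deploymentList.Items))
	for i := range deploymentList.Items {
		// Fail fast with a clear Gomega message instead of a later nil-pointer panic.
		Expect(deploymentList.Items[i].Spec.Replicas).ToNot(BeNil())
		deployments[i] = &deploymentList.Items[i]
	}
	return deployments
}
```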
/retest
/lgtm
MachineDeployment: clusterResources.MachineDeployments[0],
Replicas: 3,
WaitForMachineDeployments: input.E2EConfig.GetIntervals(specName, "wait-worker-nodes"),
})
Would adding an assertion for the new number of replicas make sense here?
ScaleAndWaitMachineDeployment does that for us in this case. The earlier assertion is to get a validated baseline before going through the scaling flow(s).
Gotcha, I see the implicit assertion now. During my earlier pass I was looking for an explicit assertion in ScaleAndWaitMD, hence the comment. Makes sense; feel free to resolve.
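For readers following along, here is a rough sketch of the implicit assertion being discussed — not the actual framework code; the helper name and intervals plumbing are assumptions:

```go
package example

import (
	"context"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4" // assumed API version
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForReplicas sketches the implicit assertion inside
// ScaleAndWaitMachineDeployment: after .spec.replicas is patched, block
// until the ready replica count converges on the target, failing the spec
// on timeout.
func waitForReplicas(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment, want int32, intervals ...interface{}) {
	Eventually(func() (int32, error) {
		latest := &clusterv1.MachineDeployment{}
		if err := c.Get(ctx, client.ObjectKeyFromObject(md), latest); err != nil {
			return 0, err
		}
		return latest.Status.ReadyReplicas, nil
	}, intervals...).Should(Equal(want), "MachineDeployment did not reach the desired replica count")
}
```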
MachineDeployment: clusterResources.MachineDeployments[0],
Replicas: 1,
WaitForMachineDeployments: input.E2EConfig.GetIntervals(specName, "wait-worker-nodes"),
})
Same as above
ditto
Force-pushed from cc262b6 to 35f72cd
/test pull-cluster-api-e2e-full-main
/retest
Force-pushed from 35f72cd to fb3e760
/test pull-cluster-api-e2e-full-main
Force-pushed from fb3e760 to dac8d41
/retest
/test pull-cluster-api-e2e-full-main
/test pull-cluster-api-e2e-full-main
/test pull-cluster-api-e2e-full-main
@fabriziopandini this is ready for final review
@jackfrancis Thanks for adding a new test to the CAPI E2E test suite!
/lgtm
)

// MachineDeploymentScaleSpecInput is the input for MachineDeploymentScaleSpec.
type MachineDeploymentScaleSpecInput struct {
Future work: expose the number of worker nodes we scale up to, so people reusing this spec on providers can decide to have more (or fewer) workers according to their needs. As soon as we have HollowMachines in CAPD, I plan to increase this for CAPI.
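One possible shape for that future work (the field name and default are assumptions, not the merged API):

```go
package example

// MachineDeploymentScaleSpecInput sketch: let callers of the spec choose
// the scale-up target instead of hard-coding 3.
type MachineDeploymentScaleSpecInput struct {
	// ... existing fields ...

	// WorkerMachineCount is the number of replicas to scale up to.
	// If nil, the spec falls back to a default.
	WorkerMachineCount *int64
}

// workerCountOrDefault resolves the target replica count, keeping today's
// behavior (3 workers) when the caller doesn't set the field.
func workerCountOrDefault(input MachineDeploymentScaleSpecInput) int64 {
	if input.WorkerMachineCount != nil {
		return *input.WorkerMachineCount
	}
	return 3
}
```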
sgtm!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: CecileRobertMichon

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
What this PR does / why we need it:
This PR adds a new E2E test to validate MachineDeployment scale scenarios.
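Pieced together from the snippets quoted in the review above, the spec's flow looks roughly like this — a sketch, not the merged source; the ClusterProxy and Cluster fields (and the variables provided by the spec's setup: ctx, input, clusterResources, specName) are assumptions:

```go
// 1. Create the workload cluster and wait for its MachineDeployments
//    (populating clusterResources), then assert the replicas baseline.
Expect(clusterResources.MachineDeployments[0].Spec.Replicas).ToNot(BeNil())

// 2. Scale the first MachineDeployment up to 3 workers; the helper waits
//    for convergence, which is the implicit assertion discussed above.
framework.ScaleAndWaitMachineDeployment(ctx, framework.ScaleAndWaitMachineDeploymentInput{
	ClusterProxy:              input.BootstrapClusterProxy, // assumed field
	Cluster:                   clusterResources.Cluster,    // assumed field
	MachineDeployment:         clusterResources.MachineDeployments[0],
	Replicas:                  3,
	WaitForMachineDeployments: input.E2EConfig.GetIntervals(specName, "wait-worker-nodes"),
})

// 3. Scale back down to 1 and wait again.
framework.ScaleAndWaitMachineDeployment(ctx, framework.ScaleAndWaitMachineDeploymentInput{
	ClusterProxy:              input.BootstrapClusterProxy, // assumed field
	Cluster:                   clusterResources.Cluster,    // assumed field
	MachineDeployment:         clusterResources.MachineDeployments[0],
	Replicas:                  1,
	WaitForMachineDeployments: input.E2EConfig.GetIntervals(specName, "wait-worker-nodes"),
})
```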
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #