Make scalesetvms delete async #3799
Conversation
Force-pushed from b1adac3 to cea845e
Force-pushed from 5073e01 to 725db25
Codecov Report
Patch coverage:
Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3799      +/-   ##
==========================================
+ Coverage   54.77%   56.27%   +1.50%
==========================================
  Files         187      191       +4
  Lines       19098    19383     +285
==========================================
+ Hits        10460    10908     +448
+ Misses       8070     7846     -224
- Partials      568      629      +61

☔ View full report in Codecov by Sentry.
Force-pushed from 725db25 to 8e1ef08
@CecileRobertMichon @mboersma Should be ready for review, PTAL!
/assign
Force-pushed from 8e1ef08 to eafe6b7
Hey @Jont828, is this PR still blocked? If not, I'll be happy to review it!
@willie-yao I think I got confused; I meant that this PR is blocked because I need reviews. Just changed the label to "needs review"!
This looks great! Just some comments, mostly for my own understanding.
@willie-yao @mboersma @CecileRobertMichon Can we LGTM or is there anything else I should change?
/test ls
@CecileRobertMichon: The specified target(s) for
The following commands are available to trigger optional jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-provider-azure-apiversion-upgrade
// GetResultIfDone returns the result of the long-running operation tracked by
// future once the operation has completed.
func (ac *azureClient) GetResultIfDone(ctx context.Context, future *infrav1.Future) (compute.VirtualMachineScaleSetVM, error) {
	ctx, _, spanDone := tele.StartSpanWithLogger(ctx, "scalesetvms.azureClient.GetResultIfDone")
	defer spanDone()

// CreateOrUpdateAsync is a dummy implementation to fulfill the async.Reconciler interface.
You shouldn't need to do this; you should be able to do async.New(scope, nil, client) when creating the service in scalesetvms.go so this client doesn't need to implement the Creator interface. Check out the disks client for a similar example.
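A minimal sketch of that wiring, assuming the scalesetvms constructor follows the same shape as the disks service; the Service, ScaleSetVMScope, and newClient names here are illustrative, not the code in this PR:

```go
package scalesetvms

import (
	"sigs.k8s.io/cluster-api-provider-azure/azure/services/async"
)

// New creates a scalesetvms service. Passing nil for the slot whose operations
// this service never performs means the client does not have to carry a dummy
// implementation just to satisfy that interface.
func New(scope ScaleSetVMScope) *Service {
	client := newClient(scope)
	return &Service{
		Scope:      scope,
		client:     client,
		Reconciler: async.New(scope, nil, client),
	}
}
```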
If I understand correctly, in the disks service we don't attempt to call CreateOrUpdateResource(), so we can pass in a nil to the Creator interface. In this service, we want to get the resource if it exists, and so I'm trying to leverage the CreateOrUpdateResource() in the async interface to fetch the resource and handle the not found/transient errors. Alternatively, I could try to construct the client as well, but I felt that it would be clunky to declare a Reconciler and a client for each type of VM.
Now that I'm thinking about this more, I wonder if it might make more sense to add a Getter (https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/main/azure/services/async/interfaces.go#L41) in the Service and simply call getter.Get() in the Reconcile func instead of using CreateOrUpdateResource in a way it's not meant to be used.
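A rough sketch of that alternative, assuming the Getter linked above exposes a Get(ctx, spec) call; the getter field and the scope helpers (ScaleSetVMSpec, SetVMSSVM) are placeholders, not the code in this PR, and imports such as context, pkg/errors, and the repo's azure and tele packages are omitted:

```go
// Reconcile fetches the scale set VM via a Getter instead of routing the read
// through CreateOrUpdateResource.
func (s *Service) Reconcile(ctx context.Context) error {
	ctx, log, done := tele.StartSpanWithLogger(ctx, "scalesetvms.Service.Reconcile")
	defer done()

	spec := s.Scope.ScaleSetVMSpec() // hypothetical helper returning an azure.ResourceSpecGetter
	result, err := s.getter.Get(ctx, spec)
	if err != nil {
		if azure.ResourceNotFound(err) {
			// The VM may not exist yet (e.g. still provisioning); treat as non-terminal.
			log.V(4).Info("scale set VM not found", "resource", spec.ResourceName())
			return nil
		}
		return errors.Wrap(err, "failed to get scale set VM")
	}
	return s.Scope.SetVMSSVM(result) // hypothetical: surface the fetched VM on the scope
}
```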
My thinking was to be able to take advantage of the error handling in this block of CreateOrUpdateResource() so we don't have to duplicate that logic. Maybe we could add a Get() function to the Reconciler iface to give us the error handling?
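Sketching that idea; this is hypothetical, not the interface as it exists in the repo, and the existing method signatures are assumed to mirror azure/services/async/interfaces.go:

```go
// Reconciler with a hypothetical Get that reuses the same not-found/transient
// error handling that CreateOrUpdateResource applies today.
type Reconciler interface {
	CreateOrUpdateResource(ctx context.Context, spec azure.ResourceSpecGetter, serviceName string) (result interface{}, err error)
	DeleteResource(ctx context.Context, spec azure.ResourceSpecGetter, serviceName string) error

	// Get would fetch the resource described by spec, normalizing not-found
	// and transient errors consistently with the methods above.
	Get(ctx context.Context, spec azure.ResourceSpecGetter, serviceName string) (result interface{}, err error)
}
```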
@mboersma what do you think? I'm concerned this won't work the same way with the async poller framework when we try to migrate scaleset VMs to SDK v2.
Actually, I think this will act the same way in asyncpoller; the CreateOrUpdateResource code isn't really changed. I think we can merge this as-is and it won't present problems for refactoring SDKv2 on top of it.
Force-pushed from eafe6b7 to c4e5448
/test pull-cluster-api-provider-azure-apiversion-upgrade
/retest
Force-pushed from c4e5448 to ff61b9a
/test pull-cluster-api-provider-azure-apiversion-upgrade
/lgtm
/assign @mboersma
LGTM label has been added. Git tree hash: 84618d271f474d4cb77fae91906576ac300dc9bf
/lgtm
/approve
I also wanted to see that VMSS Flex still works, and it looks like -e2e-optional passed. 👍🏻
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mboersma. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
What type of PR is this?
/kind feature
What this PR does / why we need it: Makes the scalesetvms service delete resources asynchronously, as part of an effort to make all services async. See #1610 and #1541.
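For context, a minimal sketch of what an async Delete path looks like in this codebase's service pattern; the ScaleSetVMSpec helper and serviceName constant are assumptions for illustration, not the PR's actual diff:

```go
// Delete defers to the shared async Reconciler: it starts the long-running
// Azure delete, records a Future in status, and returns instead of blocking
// until the operation completes; the controller requeues and checks the
// stored Future on subsequent reconciles.
func (s *Service) Delete(ctx context.Context) error {
	ctx, _, done := tele.StartSpanWithLogger(ctx, "scalesetvms.Service.Delete")
	defer done()

	spec := s.Scope.ScaleSetVMSpec() // hypothetical helper returning an azure.ResourceSpecGetter
	// serviceName is assumed to be the package-level constant, e.g. "scalesetvms".
	return s.DeleteResource(ctx, spec, serviceName)
}
```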
Which issue(s) this PR fixes (optional, in fixes #<issue_number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #2720
Special notes for your reviewer:
TODOs:
Release note: