🌱 Adding controller runtime create #9736
Conversation
Signed-off-by: muhammad adil ghaffar <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/test pull-cluster-api-e2e-full-main
@@ -386,7 +386,7 @@ func ClusterctlUpgradeSpec(ctx context.Context, inputGetter func() ClusterctlUpg
	Expect(workloadClusterTemplate).ToNot(BeNil(), "Failed to get the cluster template")

	log.Logf("Applying the cluster template yaml to the cluster")
Suggested change:
- log.Logf("Applying the cluster template yaml to the cluster")
+ log.Logf("Creating the cluster template yaml to the cluster")
@@ -104,7 +104,7 @@ func ApplyAutoscalerToWorkloadCluster(ctx context.Context, input ApplyAutoscaler
		},
	})
	Expect(err).ToNot(HaveOccurred(), "failed to parse %s", workloadYamlTemplate)
-	Expect(input.WorkloadClusterProxy.Apply(ctx, workloadYaml)).To(Succeed(), "failed to apply %s", workloadYamlTemplate)
+	Expect(input.WorkloadClusterProxy.Create(ctx, workloadYaml)).To(Succeed(), "failed to apply %s", workloadYamlTemplate)
Suggested change:
- Expect(input.WorkloadClusterProxy.Create(ctx, workloadYaml)).To(Succeed(), "failed to apply %s", workloadYamlTemplate)
+ Expect(input.WorkloadClusterProxy.Create(ctx, workloadYaml)).To(Succeed(), "failed to create %s", workloadYamlTemplate)
/area e2e-testing
@fabriziopandini Before I start reviewing this PR, do we have consensus in general that it would be a good thing to use the CR client instead of os.Exec + kubectl apply?

I think it's a good thing to get rid of our dependency on a kubectl binary and use CR as a library instead. I think using create is a reasonable approach if we only use apply today when we want to create new resources. There is no apply in the CR client (we could think about patch/update, but if we don't need it, it's probably not worth it).

Something I didn't think about before: potentially we want to keep the test coverage of actually using the kubectl binary like our users do. One difference is e.g. that kubectl apply adds the last-applied-configuration annotation.
@@ -253,6 +259,27 @@ func (p *clusterProxy) Apply(ctx context.Context, resources []byte, args ...stri
	return exec.KubectlApply(ctx, p.kubeconfigPath, resources, args...)
}

+ // Create creates using the controller-runtime client.
+ func (p *clusterProxy) Create(ctx context.Context, resources []byte) error {
Why do we need a Create func? Why not use GetClient().Create?

I'm a bit concerned about silently ignoring IsAlreadyExists errors at this level (it could lead to all sorts of surprises for the caller of this func if its YAML is not deployed).

I went through the linked issues and I think the main thing we wanted to achieve was better error output. So instead of some unreadable ExitError (https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1722000996239937536) we wanted to get the actual output. I've opened a PR to improve that: #9737
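A Create helper over raw YAML bytes has to split the multi-document manifest before decoding each chunk and calling the controller-runtime client's Create on it. Below is a minimal, stdlib-only sketch of just that splitting step; the splitYAMLDocuments name is hypothetical, and a real implementation would additionally decode each chunk into an unstructured object:

```go
package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocuments splits a multi-document YAML byte slice on
// "---" separator lines and drops empty documents. (Hypothetical
// helper; the PR's Create would decode each chunk and create the
// resulting object via the controller-runtime client.)
func splitYAMLDocuments(resources []byte) [][]byte {
	var docs [][]byte
	for _, doc := range strings.Split(string(resources), "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		docs = append(docs, []byte(doc))
	}
	return docs
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: a\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: b\n")
	for i, d := range splitYAMLDocuments(manifest) {
		fmt.Printf("doc %d: %d bytes\n", i, len(d))
	}
}
```

Note this naive split is exactly why the reviewer suggests GetClient().Create directly when the caller already holds typed objects: it avoids re-parsing YAML in test code altogether.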
if apierrors.IsAlreadyExists(err) {
	continue
}
kubectl create fails when the resource already exists, while apply informs (through logging) that the resource already exists, and updates it in place.

What problem are we trying to solve here?
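The semantic difference described above can be sketched with a toy in-memory store (all names here are hypothetical illustrations; real kubectl talks to the API server, and apply additionally does a three-way merge):

```go
package main

import (
	"errors"
	"fmt"
)

var errAlreadyExists = errors.New("already exists")

// store is a toy stand-in for the cluster's object store.
type store map[string]string

// create fails if the object exists, mirroring `kubectl create`.
func (s store) create(name, spec string) error {
	if _, ok := s[name]; ok {
		return errAlreadyExists
	}
	s[name] = spec
	return nil
}

// apply creates or updates in place, mirroring `kubectl apply`.
func (s store) apply(name, spec string) {
	s[name] = spec
}

func main() {
	s := store{}
	fmt.Println(s.create("cm/foo", "v1")) // first create succeeds
	fmt.Println(s.create("cm/foo", "v2")) // second create errors
	s.apply("cm/foo", "v2")               // apply updates in place
	fmt.Println(s["cm/foo"])
}
```

Swallowing the already-exists error inside Create (as the diff above does with a bare continue) makes create behave like a half-apply: the call reports success but the existing object keeps its old spec.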
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
See: #9736 (comment)
/hold
/close because we merged #9737 which serves the same purpose
/close
@sbueringer: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What this PR does / why we need it:
Adding controller runtime Create
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes part of #9696. KubectlApply is still in use when we are applying with args.