createOrReplace methods should behave like kubectl #2454
Here is an example of running `kubectl apply` twice with the same Deployment manifest:

```
k8-resource-yamls : $ kubectl apply -f ~/work/k8-resource-yamls/hello-deployment.yaml
deployment.apps/hello-dep created
k8-resource-yamls : $ kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
hello-dep          0/2     2            0           5s
hello-kubernetes   1/1     1            1           5d1h
random-generator   0/1     1            0           21h
k8-resource-yamls : $ kubectl apply -f ~/work/k8-resource-yamls/hello-deployment.yaml
deployment.apps/hello-dep unchanged
```

When I check the Deployment on the cluster, it contains:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"hello-dep","namespace":"default"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"hello-dep"}},"template":{"metadata":{"labels":{"app":"hello-dep"}},"spec":{"containers":[{"image":"gcr.io/google-samples/hello-app:1.0","imagePullPolicy":"Always","name":"hello-dep","ports":[{"containerPort":8080}]}]}}}}
  creationTimestamp: "2020-09-09T11:17:24Z"
  generation: 1
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:progressDeadlineSeconds: {}
  ...
```

Here is an example for `createOrReplace` with a Service:

```
k8-resource-yamls : $ kubectl create -f test-service.yml
service/my-service created
k8-resource-yamls : $ kubectl apply -f test-service.yml
service/my-service unchanged
k8-resource-yamls : $ kubectl get svc my-service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
my-service   ClusterIP   10.98.201.134   <none>        80/TCP    18m
k8-resource-yamls : $ kubectl get svc my-service -o yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":9376}],"selector":{"app":"MyApp"}}}
  creationTimestamp: "2020-09-09T10:57:58Z"
```
So basically kubectl adds a JSON snapshot of the provided YAML whenever a resource is applied.

I'm not sure this is a behavior we want to implement (at least as a default behavior). If we add such an annotation it will pollute and clutter the annotations of any resource created with Kubernetes Client. I really don't see any advantage of going further with this.
What shall we do about users complaining that the replace is not getting skipped for unchanged resources, like in issue #2439?
I think that issue is based on wrong expectations (IMHO). I see many flaws there.
For me, the use case described in #2439 needs some changes. IMO, providing an implementation to fix this will bring far more problems (see #2445 plus all the other resources which failed too) and will only help solve an issue that can probably be fixed by a much simpler approach.
Hi @saparaj

The easiest way would be to provide a comparison method yourself, since you know which fields are bound to be changed (in case these are more or less always the same).

I really don't understand the purpose that lies beneath loading resources from a YAML file and then applying them several times. Could you elaborate a little bit more on what you are doing? I really think there must be something simpler that can be done.

If this procedure is something you really need, and you need it in an abstract way, an approach would be to load the local list, retrieve each individual resource from the server, and finally merge the server resources with those loaded from your file. There are dozens of libraries that will do this for you; you can even use Jackson's merging facilities.

Again, I would insist on doing something much simpler. In your case the only problem is the port value changing in your Service (since it's randomly selected by K8s). I would simply go by trying to get the Service from the server and updating the port value before reapplying the list.
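For illustration, a minimal sketch of that last suggestion with the fabric8 client; the file name, namespace, and the choice of `clusterIP` as the server-assigned field to carry over are assumptions for the example:

```java
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.io.FileInputStream;

public class ReapplyServiceExample {
  public static void main(String[] args) throws Exception {
    try (KubernetesClient client = new DefaultKubernetesClient();
         FileInputStream yaml = new FileInputStream("test-service.yml")) {
      // Parse the local definition from the YAML file
      Service local = client.services().load(yaml).get();
      // Fetch the live object, if it already exists on the cluster
      Service remote = client.services().inNamespace("default")
          .withName(local.getMetadata().getName()).get();
      if (remote != null) {
        // Carry over the server-assigned value so the replace does not fight the cluster
        local.getSpec().setClusterIP(remote.getSpec().getClusterIP());
      }
      client.services().inNamespace("default").createOrReplace(local);
    }
  }
}
```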
Hi @manusa, what is the current logic of the `createOrReplace` method, then? I was assuming it would be similar to `kubectl apply`. Are there any low-level APIs in the fabric8 client which we can leverage to write a compare method ourselves? Thanks.
You can see the current behavior in the following lines (lines 408 to 420 at 65d3282).

In a nutshell, as the method name suggests, the object is either created or replaced (always).

The way apply works in kubectl is by serializing the object into an annotation of the same object prior to persisting it in the cluster. Then, when upserting the object, it compares the local version with the serialized annotation in the server. You could mimic this behavior yourself.

I think all of this came up from #2439 (comment). Maybe you could provide a repo link or some code where this is being used, so we can provide better and more suited alternatives for your use case.
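A rough sketch of what mimicking that annotation-based comparison could look like on top of the client; the annotation key and helper class are hypothetical, not an existing client API:

```java
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.utils.Serialization;

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class ApplyLikeKubectl {

  // Illustrative annotation key, standing in for kubectl.kubernetes.io/last-applied-configuration
  private static final String LAST_APPLIED = "example.com/last-applied-configuration";

  /** Upserts the Service only when the local definition differs from the last applied snapshot. */
  public static void apply(KubernetesClient client, String namespace, Service local) {
    // Snapshot of the local definition, taken before the annotation is added (as kubectl does)
    String snapshot = Serialization.asJson(local);

    Service remote = client.services().inNamespace(namespace)
        .withName(local.getMetadata().getName()).get();
    if (remote != null && remote.getMetadata().getAnnotations() != null
        && Objects.equals(snapshot, remote.getMetadata().getAnnotations().get(LAST_APPLIED))) {
      return; // unchanged: no write request is sent
    }

    // Stamp the snapshot onto the resource and upsert it
    Map<String, String> annotations = new HashMap<>(
        local.getMetadata().getAnnotations() == null ? Map.of() : local.getMetadata().getAnnotations());
    annotations.put(LAST_APPLIED, snapshot);
    local.getMetadata().setAnnotations(annotations);
    client.services().inNamespace(namespace).createOrReplace(local);
  }
}
```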
@manusa Thank you for the detailed explanation. The use case I am trying to achieve is using the fabric8io client to deploy a manifest to a cluster, which can happen n number of times, and the YAML manifest contains multiple resources. I hit issue #2439 when I tried to execute createOrReplace() twice for the same YAML and only the Deployment's Docker image path was updated. I found this old issue related to the fabric8io plugin (fabric8io/fabric8-maven-plugin#894), so I thought it might be similar in the client as well.
From Gitter:

@saparaj: Is there a way to invoke server-side apply in the fabric8io client? https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/

@manusa: Hi, this seems definitely the way to go with the createOrReplace issue. For reference, this is the request kubectl performs for a server-side apply:

```
I0108 07:08:58.326421 1889240 round_trippers.go:425] curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/apply-patch+yaml" -H "User-Agent: kubectl/v1.20.1 (linux/amd64) kubernetes/c4d7527" 'https://192.168.49.2:8443/apis/apps/v1/namespaces/default/deployments/nginx-deployment?fieldManager=kubectl&force=false'
I0108 07:08:58.334878 1889240 round_trippers.go:445] PATCH https://192.168.49.2:8443/apis/apps/v1/namespaces/default/deployments/nginx-deployment?fieldManager=kubectl&force=false 201 Created in 8 milliseconds
I0108 07:08:58.334896 1889240 round_trippers.go:451] Response Headers:
I0108 07:08:58.334905 1889240 round_trippers.go:454] Content-Type: application/json
I0108 07:08:58.334929 1889240 round_trippers.go:454] X-Kubernetes-Pf-Flowschema-Uid: 3a01a150-e4f9-4778-8b6d-7d8c0601e80a
I0108 07:08:58.334936 1889240 round_trippers.go:454] X-Kubernetes-Pf-Prioritylevel-Uid: 951d22f7-a734-4eb1-8213-01a4cc56ea13
I0108 07:08:58.334943 1889240 round_trippers.go:454] Content-Length: 1412
I0108 07:08:58.334949 1889240 round_trippers.go:454] Date: Fri, 08 Jan 2021 06:08:58 GMT
I0108 07:08:58.334955 1889240 round_trippers.go:454] Cache-Control: no-cache, private
I0108 07:08:58.334989 1889240 request.go:1107] Response Body: {"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"nginx-deployment","namespace":"default","uid":"119ab268-eda2-46ea-9b95-be0d628d1c86","resourceVersion":"799375","generation":1,"creationTimestamp":"2021-01-08T06:08:58Z","labels":{"app":"nginx"},"managedFields":[{"manager":"kubectl","operation":"Apply","apiVersion":"apps/v1","time":"2021-01-08T06:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{"f:app":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{"f:app":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"nginx\"}":{".":{},"f:image":{},"f:name":{},"f:ports":{"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{}}}}}}}}}}]},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"name":"nginx","image":"nginx:1.7.9","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"25%","maxSurge":"25%"}},"revisionHistoryLimit":10,"progressDeadlineSeconds":600},"status":{}}
deployment.apps/nginx-deployment serverside-applied
```
@manusa As per the documentation, server-side apply is supported in API server version 1.16.0 or greater. Can we have this added to the fabric8io client instead of making createOrReplace behave like kubectl apply?
How should we proceed on this? Seems like Server-Side Apply is just a plain PATCH request with `Content-Type: application/apply-patch+yaml`.
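Under that assumption, a bare-bones sketch of such a PATCH using only the JDK 11 HTTP client; the server URL, token handling, field manager name, and manifest path are placeholders, and TLS trust configuration is omitted for brevity:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class ServerSideApplyPatch {
  public static void main(String[] args) throws Exception {
    // Placeholders: point these at your cluster and manifest
    String apiServer = "https://192.168.49.2:8443";
    String path = "/apis/apps/v1/namespaces/default/deployments/nginx-deployment";
    String token = System.getenv("K8S_TOKEN");
    String manifest = Files.readString(Path.of("nginx-deployment.yaml"));

    // Server-side apply: PATCH the full manifest with the apply-patch content type
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(apiServer + path + "?fieldManager=my-client&force=false"))
        .header("Content-Type", "application/apply-patch+yaml")
        .header("Accept", "application/json")
        .header("Authorization", "Bearer " + token)
        .method("PATCH", HttpRequest.BodyPublishers.ofString(manifest))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + "\n" + response.body());
  }
}
```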
The Server-Side Apply needs more than just a plain PATCH. I'm not fully clear on all the details yet, but if the request body still carries `metadata.managedFields`, the API server rejects it:

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "metadata.managedFields must be nil",
  "reason": "BadRequest",
  "code": 400
}
```

Actually, both the apply and the update operations must be considered, so the apply operation is what must set the field manager.
Not quite sure how it would fit in the DSL, but something like this probably needs to be added:

```java
...
  .serverSideApply(String fieldManager)
  .createOrReplace(T resource)
```
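Whatever shape the DSL takes, the 400 response above suggests the resource needs its `managedFields` cleared before being sent as the apply patch. A tiny sketch with the fabric8 model (an assumption about the eventual implementation, not current client behavior):

```java
import io.fabric8.kubernetes.api.model.HasMetadata;

class ServerSideApplySupport {
  // Strip server-managed bookkeeping; the API server rejects apply patches
  // whose body still carries metadata.managedFields (the 400 shown above)
  static void prepareForServerSideApply(HasMetadata resource) {
    resource.getMetadata().setManagedFields(null);
    // resourceVersion is usually omitted as well, unless conflict detection is wanted
    resource.getMetadata().setResourceVersion(null);
  }
}
```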
See the related Stack Overflow Q&A at https://stackoverflow.com/questions/75887561/fabric8-kubernetesclient-equivalent-of-kubectl-apply-f
Description

`kubectl` only replaces server resources in case the local resource has changed and is different from the one deployed on the cluster. In case the local resource is unchanged, no REST APIs will be hit.

In scope of #2372 we tried to mimic the behavior of `kubectl`, but the implemented methods in `ResourceComparison` are incomplete and lead to #2445.

We need to check how `kubectl` resolves whether the local resource is different or, on the other hand, unchanged compared to the one in the cluster.

Challenges:

- Comparing the `metadata` and `spec` fields is clearly not enough (see the sketch below)