CreateOrReplace not fully equivalent with apply
#3896
Comments
@manusa @rohanKanojia I'm not sure what the rationale behind not supporting apply was in the past. From what I can guess, the logic to do this looks approximately like:
Obviously there's a lot of refinement needed, and it's questionable whether we should reuse the same apply annotation, but it seems possible that we can offer a sensible alternative pretty quickly. What reasons have come up previously for why we haven't pursued a solution? |
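The apply logic alluded to above can be pictured as a three-way merge driven by the kubectl.kubernetes.io/last-applied-configuration annotation. The following is an illustrative Python sketch, not the fabric8 API; the function names and the simplified dict-only merge (no strategic-merge list handling) are assumptions:

```python
import json

LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def three_way_merge(last_applied, live, desired):
    """Simplified three-way merge over plain dicts.

    Fields dropped from the manifest since the last apply are deleted;
    fields set in the manifest win; everything else keeps its live value.
    """
    result = dict(live)
    # Delete fields that were applied previously but are gone from the manifest.
    for key in last_applied:
        if key not in desired:
            result.pop(key, None)
    # Fields present in the desired manifest win over the live state.
    for key, value in desired.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            sub_last = last_applied.get(key)
            result[key] = three_way_merge(
                sub_last if isinstance(sub_last, dict) else {},
                result[key],
                value,
            )
        else:
            result[key] = value
    return result

def apply_manifest(live, desired):
    """Merge a desired manifest into the live object, kubectl-apply style."""
    annotations = live.get("metadata", {}).get("annotations", {})
    last_applied = json.loads(annotations.get(LAST_APPLIED, "{}"))
    merged = three_way_merge(last_applied, live, desired)
    # Remember what was applied so the next apply can detect removed fields.
    merged.setdefault("metadata", {}).setdefault("annotations", {})[LAST_APPLIED] = json.dumps(desired)
    return merged
```

Note how a field the manifest never sets (e.g. an HPA-managed replica count) keeps its live value, which is exactly the behavior createOrReplace lacks.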
Linking to #3334 as well for server side apply support. More than likely we don't even have to add the last-applied-configuration annotation and can just rely on issuing patches as application/apply-patch+yaml |
See #3334 (comment) for 6.0 server side apply support. Additional apply support will be considered for 6.x. |
Yes. I'll follow the mentioned one. Tyvm |
@sunix @rohanKanojia it would probably be good to still have an issue for a higher-level apply or serverSideApply method. That will be easier for users to find. |
This issue has been automatically marked as stale because it has not had any activity for 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions! |
Is this fixed with #3937? |
No, see #3896 (comment). It would be nice to have a DSL method to perform the Server Side Apply logic, instead of manually configuring the patch. |
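For context, a serverSideApply DSL method would essentially wrap the raw request that Server Side Apply requires. The helper below is a hypothetical Python sketch of that request shape; only the PATCH verb, the application/apply-patch+yaml content type, and the fieldManager/force parameters come from the Kubernetes API documentation, and the path is hardcoded to apps/v1 for brevity:

```python
# Hypothetical sketch of the raw request behind a serverSideApply() call.
# The helper name and return structure are illustrative, not a real API.
def server_side_apply_request(namespace, plural, name, manifest_yaml,
                              field_manager="fabric8", force=False):
    query = f"fieldManager={field_manager}"
    if force:
        # force=true lets this field manager take over conflicting fields.
        query += "&force=true"
    return {
        "method": "PATCH",
        "path": f"/apis/apps/v1/namespaces/{namespace}/{plural}/{name}?{query}",
        "headers": {"Content-Type": "application/apply-patch+yaml"},
        "body": manifest_yaml,
    }
```

A DSL method would hide exactly these details (content type, field manager, conflict forcing) behind a single call, which is what makes it easier to find and use than manually configuring the patch.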
Describe the bug
Client Version: 5.12.1
K8s version: 1.19
The createOrReplace function is not equivalent to kubectl apply; it instead behaves more like kubectl create (as expressed in the code comments). Despite being an old comment, this is what we're experiencing (kubernetes/kubernetes#25238 (comment)).
What is currently happening: when using kubectl apply manually, it behaves as expected, doing the proper rolling update (max surge 25%, max unavailable 0). When using the client, it behaves like a create (or replace) operation, forcing the Deployment object back to replicas 1 (the Kubernetes default) and scaling the ReplicaSet back to 1. This immediately terminates all remaining pods, breaking what would otherwise be a smooth deployment.
This completely invalidates the use of the fabric8 client for applying manifests in a safe way.
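The difference described above comes down to how the two operations treat fields the manifest omits. A toy Python sketch (not fabric8 code; the dicts and the defaulting are deliberately simplified):

```python
# Toy illustration of why a replace clobbers an HPA-managed replica count
# while an apply-style merge preserves it.
DEFAULTS = {"replicas": 1}  # what the API server defaults an omitted field to

def replace_object(live, manifest):
    # A replace submits the whole object; the live state is ignored and
    # omitted fields are re-defaulted by the server.
    return {**DEFAULTS, **manifest}

def apply_merge(live, manifest):
    # An apply only patches the fields the manifest actually sets.
    return {**live, **manifest}

live = {"replicas": 10, "image": "app:v1"}   # the HPA has scaled this to 10
manifest = {"image": "app:v2"}               # the manifest omits replicas

# replace_object(live, manifest) drops replicas back to 1 (pods killed);
# apply_merge(live, manifest) keeps replicas at 10 (smooth rollout).
```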
Other refs:
kubernetes/kubernetes#25238 (comment)
https://www.linkedin.com/pulse/kubectl-createreplace-vs-apply-which-one-use-samrat-sen/
Fabric8 Kubernetes Client version
5.12.1@latest
Steps to reproduce
1.1 An HPA with 10 min replicas (10 just makes the behaviour easy to observe)
1.2 A Deployment with no replicas specified and a rolling-update strategy (max surge 25%, max unavailable 0)
2. Apply the manifests with createOrReplace, then apply them again
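The setup in steps 1.1 and 1.2 might look roughly like the following manifests (all names, images, and the maxReplicas value are placeholders; autoscaling/v2beta2 matches the K8s 1.19 mentioned above):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example
  minReplicas: 10
  maxReplicas: 20
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  # no `replicas` field: the count is left to the HPA
  selector:
    matchLabels:
      app: example
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: example:v1
```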
Expected behavior
The second apply should show a smooth transition from the old pods to the new ones.
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
other (please specify in additional context)
Environment
Amazon