Server-Side Apply in 1.26 set wrong fields ownership in managedFields #1337
It could be related to this new field migration logic:
/triage accepted
Thanks @leoluz for your report. It seems you are trying to use an unsupported configuration: multiple appliers on an object without all of the appliers using server-side apply. Although this flow was not prevented before 1.26, fields owned by client-side apply could not easily be moved to server-side apply, and users would become stuck. I think we can improve the messaging to the user here, since the warning does not tell you enough about what went wrong; but, to me, it is logical to ask the user in this case to make a conscious decision about the ownership of the older fields. Users should follow the documented upgrade path before using server-side-apply features on a resource previously managed through client-side apply, so they can think about who should own the old fields in the new paradigm: https://kubernetes.io/docs/reference/using-api/server-side-apply/#upgrading-from-client-side-apply-to-server-side-apply

In your case, since you don't want […]. If the object is already created and it is impossible to avoid using client-side apply, then I tested the general command below, which will move ownership of the old fields to a separate field manager so that your single-field apply works as you expect. After doing this, you should always use kubectl apply --server-side:

kubectl apply -n default view-last-applied svc kubectl-1-26-ssa | kubectl apply --server-side -f -
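To make the suggested migration concrete, here is a lightly annotated sketch of that command, assuming the Service name (kubectl-1-26-ssa) and namespace (default) mentioned in this thread; the resource, name, namespace, and the trailing manifest filename are placeholders to adapt for your own objects:

```sh
# Re-apply the last client-side-applied configuration with server-side apply,
# so the fields previously owned by client-side apply move to a server-side
# field manager (the default "kubectl" manager here).
kubectl apply view-last-applied svc kubectl-1-26-ssa -n default \
  | kubectl apply --server-side -f -

# From this point on, manage the object only with server-side apply, e.g.
# (service.yaml is a placeholder for your manifest):
kubectl apply --server-side -f service.yaml
```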
Closing this as expected behavior. Please reopen if you still think this is a bug. /close
@alexzielenski: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think this is the key piece of information that I missed even reading that doc several times in the past. Experimenting with previous
What happened
Argo CD e2e tests started to fail after upgrading to kubectl 1.26. Investigating the issue, it was verified that, with the latest kubectl version, additional unexpected fields in managedFields are associated with a manager that applied only a subset of fields.

What you expected to happen
With kubectl 1.26, when server-side applying, a manager should own only the fields it updates, as in the previous behaviour with kubectl 1.25.
How to reproduce it
With kubectl 1.25.3
Apply a simple service:
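The manifest itself was not preserved in this copy of the issue; a minimal sketch of this step, assuming the Service name and namespace that appear in the maintainer's command above (kubectl-1-26-ssa in default) and an otherwise hypothetical spec:

```sh
# Create/apply a simple Service with plain (client-side) kubectl apply,
# which records ownership under the client-side-apply field manager.
cat <<'EOF' | kubectl apply -n default -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-26-ssa
spec:
  selector:
    app: kubectl-1-26-ssa
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF
```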
Add additional fields with a different manager
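The exact patch was also not preserved; a plausible sketch, assuming a small subset of fields (here a single, hypothetical label) applied server-side under the ssa-test manager named later in the report:

```sh
# Server-side apply of a partial manifest under a different field manager.
# Only the fields listed here should end up owned by "ssa-test".
cat <<'EOF' | kubectl apply -n default --server-side --field-manager=ssa-test -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-26-ssa
  labels:
    ssa-test: "true"
EOF
```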
Inspect the managed fields:
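The issue's exact inspection command is not shown here; one way to do it:

```sh
# --show-managed-fields is needed because recent kubectl versions hide
# metadata.managedFields from -o yaml/json output by default.
kubectl get svc kubectl-1-26-ssa -n default -o yaml --show-managed-fields
```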
Result:
As we can see, as expected, the manager ssa-test only has ownership of the fields it mutated.

With kubectl 1.26.0
Apply the same service:
Add additional fields with a different manager:
At this point there is a new (unexpected?) warning:
Inspect the managed fields:
Result:
Now, with kubectl 1.26.0, the ssa-test manager has full ownership of the service fields, when I believe it shouldn't. Am I missing something?

Environment:
Kubernetes versions tested, running on Docker Desktop:
Client versions:
Server version: