Server-Side Apply in 1.26 sets wrong field ownership in managedFields #1337

Closed

leoluz opened this issue Dec 16, 2022 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. triage/accepted Indicates an issue or PR is ready to be actively worked on. wg/api-expression Categorizes an issue or PR as relevant to WG API Expression.

Comments

@leoluz

leoluz commented Dec 16, 2022

What happened

Argo CD e2e tests started to fail after upgrading to kubectl 1.26. While investigating the issue, we verified that with the latest kubectl version, managedFields associates additional, unexpected fields with a manager that applied only a subset of fields.

What you expected to happen

With kubectl 1.26, when server-side applying, a manager should own only the fields it updates, as was the behaviour with kubectl 1.25.

How to reproduce it

With kubectl 1.25.3

Apply a simple service:

cat <<EOF | kubectl -n default apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-25-ssa
  annotations:
    some-annotation-key: some-annotation-value
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
EOF

Add additional fields with a different manager:

cat <<EOF | kubectl apply -n default --server-side --field-manager=ssa-test --validate=false --force-conflicts -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-25-ssa
  annotations:
    ssa-annotation-key: ssa-annotation-value
spec:
  type: NodePort
EOF

Inspect the managed fields:

kubectl -n default get svc kubectl-1-25-ssa -oyaml --show-managed-fields

Result:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"some-annotation-key":"some-annotation-value"},"name":"kubectl-1-25-ssa","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"targetPort":80}]}}
    some-annotation-key: some-annotation-value
    ssa-annotation-key: ssa-annotation-value
  creationTimestamp: "2022-12-16T15:17:29Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:ssa-annotation-key: {}
      f:spec:
        f:type: {}
    manager: ssa-test
    operation: Apply
    time: "2022-12-16T15:17:48Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:some-annotation-key: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:sessionAffinity: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-12-16T15:17:29Z"
  name: kubectl-1-25-ssa
  namespace: default
  resourceVersion: "1767360"
  uid: 8f55f697-adcb-41c0-b6dd-27d36069b2b2
spec:
  clusterIP: 10.99.53.88
  clusterIPs:
  - 10.99.53.88
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32719
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

As we can see, as expected, the ssa-test manager only owns the fields it mutated (a quick way to filter its managedFields entry is sketched right after this list):

  • f:ssa-annotation-key
  • f:type
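
As a side note, one quick way to look at a single manager's entry is to filter the JSON output; this is just a convenience sketch and assumes jq is installed (jq is not part of the report itself):

kubectl -n default get svc kubectl-1-25-ssa -o json \
  | jq '.metadata.managedFields[] | select(.manager == "ssa-test")'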

With kubectl 1.26.0

Apply the same service:

cat <<EOF | kubectl -n default apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-26-ssa
  annotations:
    some-annotation-key: some-annotation-value
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
EOF

Add additional fields with a different manager:

cat <<EOF | kubectl apply -n default --server-side --field-manager=ssa-test --validate=false --force-conflicts -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-26-ssa
  annotations:
    ssa-annotation-key: ssa-annotation-value
spec:
  type: NodePort
EOF

At this point there is a new (unexpected?) warning:

Warning: failed to re-apply configuration after performing Server-Side Apply migration. This is non-fatal and will be retried next time you apply. Error: Service "kubectl-1-26-ssa" is invalid: spec.ports: Required value

Inspect the managed fields:

kubectl -n default get svc kubectl-1-26-ssa -oyaml --show-managed-fields

Result:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"some-annotation-key":"some-annotation-value"},"name":"kubectl-1-26-ssa","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"targetPort":80}]}}
    some-annotation-key: some-annotation-value
    ssa-annotation-key: ssa-annotation-value
  creationTimestamp: "2022-12-16T15:23:21Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:some-annotation-key: {}
          f:ssa-annotation-key: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: ssa-test
    operation: Apply
    time: "2022-12-16T15:23:28Z"
  name: kubectl-1-26-ssa
  namespace: default
  resourceVersion: "1767635"
  uid: acfdf010-9e62-4a5c-846e-9b15ff20262e
spec:
  clusterIP: 10.98.140.69
  clusterIPs:
  - 10.98.140.69
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 30333
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Now, with kubectl 1.26.0, the ssa-test manager has full ownership of the service's fields, which I believe it shouldn't. Am I missing something?

Environment:

Kubernetes versions tested, running on Docker Desktop:

Client versions:

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"darwin/amd64"}

Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:43Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"darwin/amd64"}

Server version:

Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:27:13Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"linux/amd64"}

@leoluz leoluz added the kind/bug Categorizes issue or PR as related to a bug. label Dec 16, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Dec 16, 2022
@leoluz
Author

leoluz commented Dec 16, 2022

It could be related to this new field-migration logic:
alexzielenski/kubernetes@ac09100
cc @alexzielenski

@apelisse
Member

/triage accepted
/wg api-expression

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. wg/api-expression Categorizes an issue or PR as relevant to WG API Expression. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 16, 2022
@alexzielenski
Contributor

alexzielenski commented Dec 16, 2022

Thanks @leoluz for your report.

It seems you are trying to use an unsupported configuration - usage of multiple appliers on an object without all the appliers using server-side-apply. Although this flow was not prevented before 1.26, fields owned by client-side-apply could not easily be moved to server-side-apply, and users would become stuck.

I think we can improve the messaging to the user on this, since the warning does not say enough about what went wrong; but to me it is logical to ask the user in this case to make a conscious decision about the ownership of the older fields.

Users should follow the documented upgrade path before using server-side-apply features on a resource previously managed through client-side-apply, so they can think about who should own the old fields in the new paradigm: https://kubernetes.io/docs/reference/using-api/server-side-apply/#upgrading-from-client-side-apply-to-server-side-apply

In your case, since you don't want ssa-test to own the fields, I suggest you change the initial apply to use --server-side from the beginning. The warning is telling you that ssa-test is now responsible for ports, but ports was not found in the patch.
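
For example, the initial apply from the reproduction, switched to server-side (a sketch only; the manifest is copied from your reproduction and just the flag changes):

cat <<EOF | kubectl -n default apply --server-side -f -
apiVersion: v1
kind: Service
metadata:
  name: kubectl-1-26-ssa
  annotations:
    some-annotation-key: some-annotation-value
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
EOF

With that, the later ssa-test apply should only take ownership of the fields it submits, because no client-side-apply migration is triggered.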

If the object is already created and it is impossible to avoid client-side-apply for the initial creation, then I tested the general command below, which moves ownership of the old fields to a separate field manager so that your single-field apply works as you expect. After doing this you should always use --server-side to manage the object.

kubectl apply -n default view-last-applied svc kubectl-1-26-ssa | kubectl apply --server-side -f -
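
After running this, re-running the inspection command from the reproduction lets you confirm how ownership ended up split across managers:

kubectl -n default get svc kubectl-1-26-ssa -oyaml --show-managed-fields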

@alexzielenski
Contributor

Closing this as expected behavior. Please reopen if you still think this is a bug.

/close

@k8s-ci-robot
Contributor

@alexzielenski: Closing this issue.

In response to this:

Closing this as expected behavior. Please reopen if you still think this is a bug.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@leoluz
Author

leoluz commented Dec 16, 2022

It seems you are trying to use an unsupported configuration - usage of multiple appliers on an object without all the appliers using server-side-apply.

I think this is the key piece of information that I missed, even after reading that doc several times in the past. Experimenting with previous kubectl versions, my understanding was that both SSA and CSA could coexist while manipulating the same object.
Thanks for the clarification!
