
Conversion webhook incompatible with Flux GitOps management of nodepool manifests #6867

Closed
dschaaff opened this issue Aug 23, 2024 · 15 comments

@dschaaff
Contributor

dschaaff commented Aug 23, 2024

Description

Observed Behavior:
When Flux is managing the NodePool manifests, the following is observed. I am starting from version 0.37.2.

Update the controller to v1.0.1 and apply the updated IAM policy, then update the EC2NodeClass and NodePool manifests in git to the v1 API version.

Flux fails to apply the NodePool manifests with the following error:

karpenter-nodepools             main@sha1:1eb82518      False           False   NodePool/default dry-run failed: failed to prune fields: failed add back owned items: failed to convert pruned object at version karpenter.sh/v1: conversion webhook for karpenter.sh/v1beta1, Kind=NodePool failed: Post "https://karpenter.karpenter.svc:8443/conversion/karpenter.sh?timeout=30s": EOF

This can be reproduced by doing a server-side dry run of the same manifest with kubectl:

$ kubectl apply --server-side=true -f flux/infra/karpenter-nodepools/default.yaml --dry-run=server --field-manager kustomize-
Error from server: failed to prune fields: failed add back owned items: failed to convert pruned object at version karpenter.sh/v1: conversion webhook for karpenter.sh/v1beta1, Kind=NodePool failed: Post "https://karpenter.karpenter.svc:8443/conversion/karpenter.sh?timeout=30s": EOF

The Karpenter controller logs the following error when this occurs:

{
  "level": "ERROR",
  "time": "2024-08-23T20:03:38.118Z",
  "logger": "webhook",
  "message": "http: panic serving 10.24.217.58:49368: runtime error: invalid memory address or nil pointer dereference\ngoroutine 10304 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1903 +0xb0\npanic({0x2225100?, 0x4734a10?})\n\truntime/panic.go:770 +0x124\nsigs.k8s.io/karpenter/pkg/apis/v1.(*NodeClaimTemplate).convertFrom(0x40034c9310, {0x2f1bb28, 0x4004dfbe60}, 0x4000f6d708)\n\tsigs.k8s.io/[email protected]/pkg/apis/v1/nodepool_conversion.go:181 +0x188\nsigs.k8s.io/karpenter/pkg/apis/v1.(*NodePoolSpec).convertFrom(0x40034c9310, {0x2f1bb28, 0x4004dfbe60}, 0x4000f6d708)\n\tsigs.k8s.io/[email protected]/pkg/apis/v1/nodepool_conversion.go:145 +0xe8\nsigs.k8s.io/karpenter/pkg/apis/v1.(*NodePool).ConvertFrom(0x40034c9208, {0x2f1bb28?, 0x4004dfbe60?}, {0x2efd390?, 0x4000f6d600})\n\tsigs.k8s.io/[email protected]/pkg/apis/v1/nodepool_conversion.go:121 +0x124\nknative.dev/pkg/webhook/resourcesemantics/conversion.(*reconciler).convert(0x400098d8c0,{0x2f1bb28, 0x4004dfbc50}, {{0x400627e000, 0x297, 0x2c0}, {0x0, 0x0}}, {0x400498e880, 0xf})\n\tknative.dev/[email protected]/webhook/resourcesemantics/conversion/conversion.go:137 +0x119c\nknative.dev/pkg/webhook/resourcesemantics/conversion.(*reconciler).Convert(0x400098d8c0, {0x2f1bb28?, 0x4004dfbad0?}, 0x4001f5ab40)\n\tknative.dev/[email protected]/webhook/resourcesemantics/conversion/conversion.go:57 +0x174\nknative.dev/pkg/webhook.New.conversionHandler.func5({0x2f0c658, 0x4000bbf7a0}, 0x40087699e0)\n\tknative.dev/[email protected]/webhook/conversion.go:66 +0x24c\nnet/http.HandlerFunc.ServeHTTP(0x40000b7400?, {0x2f0c658?, 0x4000bbf7a0?}, 0x1d01d10?)\n\tnet/http/server.go:2171 +0x38\nnet/http.(*ServeMux).ServeHTTP(0x4004dfb800?, {0x2f0c658, 0x4000bbf7a0}, 0x40087699e0)\n\tnet/http/server.go:2688 +0x1a4\nknative.dev/pkg/webhook.(*Webhook).ServeHTTP(0x40000b7380, {0x2f0c658, 0x4000bbf7a0}, 0x40087699e0)\n\tknative.dev/[email protected]/webhook/webhook.go:310 +0xc4\nknative.dev/pkg/network/handlers.(*Drainer).ServeHTTP(0x400063a930, {0x2f0c658, 0x4000bbf7a0}, 0x40087699e0)\n\tknative.dev/[email protected]/network/handlers/drain.go:113 +0x158\nnet/http.serverHandler.ServeHTTP({0x2ef71d0?}, {0x2f0c658?, 0x4000bbf7a0?}, 0x6?)\n\tnet/http/server.go:3142 +0xbc\nnet/http.(*conn).serve(0x40036539e0, {0x2f1bb28, 0x4000a5ec60})\n\tnet/http/server.go:2044 +0x508\ncreated by net/http.(*Server).Serve in goroutine 287\n\tnet/http/server.go:3290 +0x3f0\n",
  "commit": "62a726c"
}

Expected Behavior:

GitOps tooling like Flux should successfully apply the NodePool manifest from git.

Reproduction Steps (Please include YAML):

Versions:

  • Chart Version: 1.0.1

  • Kubernetes Version (kubectl version): 1.30 (EKS)

  • Flux Version: 2.3.0

  • Manage Karpenter's installation with Flux; I am installing into the karpenter namespace.

  • Update from version 0.37.2 to 1.0.1 using the HelmReleases below:

---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: karpenter-crd
  namespace: flux-system
spec:
  chart:
    spec:
      chart: karpenter-crd
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: karpenter
      version: 1.0.1
  install:
    createNamespace: true
  upgrade:
    remediation:
      retries: 1
  interval: 10m0s
  releaseName: karpenter-crd
  targetNamespace: karpenter
  storageNamespace: karpenter
  values:
    webhook:
      enabled: true
      serviceNamespace: karpenter
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: karpenter
  namespace: flux-system
spec:
  chart:
    spec:
      chart: karpenter
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: karpenter
      version: 1.0.1
  install:
    createNamespace: true
    crds: Skip # handled by karpenter-crd chart
  upgrade:
    crds: Skip # handled by karpenter-crd chart
    remediation:
      retries: 1
  interval: 10m0s
  releaseName: karpenter
  targetNamespace: karpenter
  storageNamespace: karpenter
  values:
    webhook:
      enabled: true
...
  • Modify the NodePool manifest to API version v1
--- a/flux/infra/karpenter-nodepools/default.yaml
+++ b/flux/infra/karpenter-nodepools/default.yaml
@@ -1,4 +1,4 @@
-apiVersion: karpenter.sh/v1
+apiVersion: karpenter.sh/v1beta1
 kind: NodePool
 metadata:
   name: default
@@ -14,10 +14,10 @@ spec:
   template:
     # metadata: {}
     spec:
-      expireAfter: 336h0m0s
+      # expireAfter: 336h0m0s
       nodeClassRef:
-        group: karpenter.k8s.aws
-        kind: EC2NodeClass
+        # group: karpenter.k8s.aws
+        # kind: EC2NodeClass
         name: bottlerocket
       requirements:
         - key: karpenter.sh/capacity-type
diff --git a/flux/infra/karpenter/karpenter-helm.yaml b/flux/infra/karpenter/karpenter-helm.yaml
index 7bb956f..28457e3 100644
--- a/flux/infra/karpenter/karpenter-helm.yaml
+++ b/flux/infra/karpenter/karpenter-helm.yaml
@@ -12,7 +12,7 @@ spec:
       sourceRef:
         kind: HelmRepository
         name: karpenter
-      version: 1.0.1
+      version: 0.37.2
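
For reference, here is the v1 shape of this NodePool consolidated from the values above, as a minimal sketch; the requirement operator/values and the omitted sections (disruption, limits) are illustrative assumptions, not taken from this issue:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      # expireAfter lives under template.spec in v1
      expireAfter: 336h0m0s
      nodeClassRef:
        # group and kind are required alongside name in v1
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: bottlerocket
      requirements:
        # operator/values are illustrative; keep whatever the existing manifest uses
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]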

Workaround

In order to proceed, I had to clear the managed fields on the NodePool objects.

Here are the managed fields prior to wiping them out:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  annotations:
    compatibility.karpenter.sh/v1beta1-nodeclass-reference: '{"name":"bottlerocket"}'
    karpenter.sh/nodepool-hash: "3179949643060066316"
    karpenter.sh/nodepool-hash-version: v3
  creationTimestamp: "2024-08-23T00:07:33Z"
  generation: 1
  labels:
    kustomize.toolkit.fluxcd.io/name: karpenter-nodepools
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  managedFields:
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        f:disruption:
          f:consolidationPolicy: {}
        f:limits:
          f:cpu: {}
          f:memory: {}
        f:template:
          f:metadata: {}
          f:spec:
            f:nodeClassRef:
              f:name: {}
            f:requirements: {}
    manager: kustomize-controller
    operation: Apply
    time: "2024-08-23T00:07:33Z"
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations: {}
    manager: karpenter
    operation: Update
    time: "2024-08-23T00:07:33Z"
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:resources:
          .: {}
          f:cpu: {}
          f:ephemeral-storage: {}
          f:memory: {}
          f:pods: {}
    manager: karpenter
    operation: Update
    subresource: status
    time: "2024-08-23T17:50:17Z"
  - apiVersion: karpenter.sh/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:karpenter.sh/nodepool-hash: {}
          f:karpenter.sh/nodepool-hash-version: {}
    manager: karpenter
    operation: Update
    time: "2024-08-23T21:14:17Z"
  - apiVersion: karpenter.sh/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions: {}
        f:resources:
          f:hugepages-1Gi: {}
          f:hugepages-2Mi: {}
          f:hugepages-32Mi: {}
          f:hugepages-64Ki: {}
          f:nodes: {}
    manager: karpenter
    operation: Update
    subresource: status
    time: "2024-08-23T21:14:22Z"
  name: default
  resourceVersion: "456500391"
  uid: b999755c-b788-4a58-baec-4476a7eb03f7

I then ran the following commands to clear the managed fields:

$ kubectl proxy &
$ curl -X PATCH -H 'Content-Type: application/merge-patch+json' -d '{"metadata":{"managedFields": [{}]}}' http://127.0.0.1:8001/apis/karpenter.sh/v1beta1/nodepools/default

Flux was then able to successfully reconcile the manifest after the removal of the managed fields.

For comparison, here is what the managed fields look like after the reconciliation:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  annotations:
    compatibility.karpenter.sh/v1beta1-nodeclass-reference: '{"name":"bottlerocket"}'
    karpenter.sh/nodepool-hash: "11983763225509541388"
    karpenter.sh/nodepool-hash-version: v3
  creationTimestamp: "2024-08-23T00:07:33Z"
  generation: 2
  labels:
    kustomize.toolkit.fluxcd.io/name: karpenter-nodepools
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  managedFields:
  - apiVersion: karpenter.sh/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        f:disruption:
          f:consolidationPolicy: {}
        f:limits:
          f:cpu: {}
          f:memory: {}
        f:template:
          f:spec:
            f:expireAfter: {}
            f:nodeClassRef:
              f:group: {}
              f:kind: {}
              f:name: {}
            f:requirements: {}
    manager: kustomize-controller
    operation: Apply
    time: "2024-08-23T21:18:48Z"
  - apiVersion: karpenter.sh/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:compatibility.karpenter.sh/v1beta1-nodeclass-reference: {}
          f:karpenter.sh/nodepool-hash-version: {}
        f:labels:
          .: {}
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        .: {}
        f:disruption:
          .: {}
          f:budgets: {}
          f:consolidateAfter: {}
          f:consolidationPolicy: {}
        f:limits:
          .: {}
          f:memory: {}
        f:template:
          .: {}
          f:metadata: {}
          f:spec:
            .: {}
            f:nodeClassRef:
              .: {}
              f:group: {}
              f:kind: {}
              f:name: {}
            f:requirements: {}
    manager: before-first-apply
    operation: Update
    time: "2024-08-23T21:18:48Z"
  - apiVersion: karpenter.sh/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:karpenter.sh/nodepool-hash: {}
    manager: karpenter
    operation: Update
    time: "2024-08-23T21:18:48Z"
  - apiVersion: karpenter.sh/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:resources:
          f:cpu: {}
          f:ephemeral-storage: {}
          f:memory: {}
          f:nodes: {}
          f:pods: {}
    manager: karpenter
    operation: Update
    subresource: status
    time: "2024-08-23T21:19:34Z"
dschaaff added the bug and needs-triage labels Aug 23, 2024
@dstrants

We had a very similar issue after upgrading from 0.36.3 to 0.36.4; it looks related to #6849. The namespace somehow ended up as kube-system even though .Release.Namespace is karpenter, and nothing else changed apart from the patch version bump.

@booleanbetrayal

@dstrants - This is due to the karpenter chart packaging static CRDs that hardcode the namespace and service name for the conversion webhook. You might want to follow #6847.

@cmartintlh

> We had a very similar issue after upgrading from 0.36.3 to 0.36.4; it looks related to #6849. The namespace somehow ended up as kube-system even though .Release.Namespace is karpenter, and nothing else changed apart from the patch version bump.

From what I can tell, these are not related issues. The webhook endpoint in the author's error is correctly referencing the karpenter namespace in which they deployed their controller, and the Karpenter controller is logging a failure for the conversion.

I'm hitting the same issue in my cluster, which is managed through FluxCD using the karpenter and karpenter-crd Helm charts. I've also tried updating my NodePool manifest to match exactly what was converted on the cluster, to no avail.

@Giaco9NN

Hi,
I'm affected by the same issue. I use the official Helm charts for both karpenter and karpenter-crd, and I updated both of them following the migration guide from 0.37.1 to 1.0.0. Our setup is based on Terraform: we use the helm_release resource to manage Karpenter and the custom resource definitions, and we manage node classes and node pools with the kubernetes_manifest resource. When updating any parameter of the node pool, I received the following error from Terraform:
failed to prune fields: failed add back owned items: failed to convert pruned object at version karpenter.sh/v1: conversion webhook for karpenter.sh/v1beta1, Kind=NodePool failed: Post "https://karpenter.kube-system.svc:8443/conversion/karpenter.sh?timeout=30s": EOF

Looking at the logs of the Karpenter pods, I see the very same error pointed out in the issue description.

When running the following:

kubectl get nodepool default -o yaml > nodepool.yaml
<Make any valid change to nodepool.yaml>
kubectl apply -f nodepool.yaml

Everything works as expected, but nothing is printed in the Karpenter pod logs.

@dstrants

@booleanbetrayal and @cmartintlh, you are both right; #6847 is more relevant to what we are facing. Thanks for pointing it out 🙏🏼

@engedaam
Contributor

engedaam commented Sep 5, 2024

From looking at the calls made by the API server to the conversion webhooks during a server-side apply, I see that an empty NodePool spec is being sent to the conversion webhook. This results in a nil pointer dereference on the nodeClassRef when we try to convert that component. We should be able to mitigate this on the webhook side by checking the object first: https://github.com/kubernetes-sigs/karpenter/blob/b69e975128ac9a511542f9f1d245a6d4c3f91112/pkg/apis/v1/nodepool_conversion.go#L179

@Giaco9NN

Giaco9NN commented Sep 6, 2024

@engedaam thank you for your reply. At the moment we have paused the upgrade to Karpenter v1.0.0 in all our environments; only a test environment is on that version now. Can we assume that once v1.1.0 is available and the conversion webhook is removed, we can upgrade successfully to v1.1.0 in two steps? We plan to upgrade the environments that are currently on v0.37.1 by following these steps:

  1. Upgrade to v1.0.0. In this step, all resources will be converted by the webhook as expected, but Terraform will not be able to manage them.
  2. Upgrade to v1.1.0. In this step the webhook no longer exists, so Terraform should be able to complete the apply operation.

Does it make sense?

@engedaam
Contributor

engedaam commented Sep 9, 2024

@Giaco9NN Yeah, that makes sense. A good workaround here is also to use client-side apply. Are you able to use that with Terraform? Flux offers the ability to use client-side apply, which should help mitigate this issue: https://fluxcd.io/flux/faq/#why-are-kubectl-edits-rolled-back-by-flux
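
For Flux users who want to try this, one option appears to be the kustomize-controller per-object apply annotation added to the existing NodePool manifest. A minimal sketch, assuming the kustomize.toolkit.fluxcd.io/ssa annotation is supported by your Flux version; note that it relaxes Flux's field ownership via an SSA merge rather than performing a true client-side apply, and whether it avoids the failing dry-run conversion call in this issue would still need to be confirmed:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
  annotations:
    # Assumption: tells kustomize-controller not to force ownership of fields
    # held by other field managers when applying this object.
    kustomize.toolkit.fluxcd.io/ssa: Merge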

engedaam self-assigned this Sep 9, 2024
@Sagiil

Sagiil commented Sep 10, 2024

Same behavior with Argo CD. I upgraded the Karpenter CRD and controller successfully, and only then converted the manifest files to API v1 with the new locations for kubelet and expireAfter. When Argo tried to sync, I got the following error:

"failed to prune fields: failed add back owned items: failed to convert pruned object at version karpenter.sh/v1: conversion webhook for karpenter.sh/v1beta1, Kind=NodePool failed: Post "https://karpenter.karpenter.svc:8443/conversion/karpenter.sh?timeout=30s": EOF"

Has anyone using Argo CD solved this?

@dcherniv

@Sagiil you should consider following #6847; there's a long thread there covering various issues with the upgrade to the v1 manifests.

@Giaco9NN

Giaco9NN commented Sep 13, 2024

@engedaam I tried to leverage the field_manager functionality offered by the Terraform provider. This is the code:

field_manager {
  # Override field-ownership conflicts instead of erroring on them
  force_conflicts = true
  # "before-first-apply" is the manager name the API server assigns to fields
  # that existed before the object's first server-side apply; reusing it lets
  # Terraform adopt those fields
  name            = "before-first-apply"
}

This way I can modify the node pool. I'd like to find a way to remove this field_manager setting because it's unclear to me exactly what it does. I think the only way is to wait for version v1.1.0.

@engedaam
Contributor

So we added a fix to the nodeClassRef conversion so that it no longer panics on a nil pointer dereference and gracefully handles partial objects sent by the API server during server-side apply. The fix will be included in an upcoming patch release: kubernetes-sigs/karpenter#1669. Until then, for those who can, please try this snapshot to confirm the fix:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 021119463062.dkr.ecr.us-east-1.amazonaws.com

helm upgrade --install karpenter oci://021119463062.dkr.ecr.us-east-1.amazonaws.com/karpenter/snapshot/karpenter --version "0-fdf403caa0dce1c77bbbc0bda6863be017d118a7" --namespace "kube-system" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait

@bdwyertech
Contributor

Cool. I presume this will be backported to a 1.0.x? I think 1.1 removed the conversion webhooks.

engedaam added the triage/solved label and removed the needs-triage label Sep 18, 2024
@engedaam
Contributor

> Cool. I presume this will be backported to a 1.0.x? I think 1.1 removed the conversion webhooks.

Yes, we plan on backporting the fix to all versions that contain the conversion webhooks.

@engedaam
Contributor

Closing out this issue, as the fix was merged: kubernetes-sigs/karpenter#1669
