
Could not schedule pod - incompatible with provisioner - no new nodes added #2899

Closed
haarchri opened this issue Nov 21, 2022 · 5 comments
Labels
question Issues that are support related questions

Comments

@haarchri

Version

Karpenter Version: 0.16.3

Kubernetes Version: Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.14-eks-fb459a0", GitCommit:"b07006b2e59857b13fe5057a956e86225f0e82b7", GitTreeState:"clean", BuildDate:"2022-10-24T20:32:54Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Expected Behavior

Karpenter will scale up and add new nodes to the cluster.

Actual Behavior

Karpenter does not scale up and no new node is added to the cluster.

Steps to Reproduce the Problem

Resource Specs and Logs

The example-services-tenant provisioner spec looks like this:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: example-services-tenant
spec:
  labels:
    domain.node.example.cloud/example-services-tenant: "true"
  limits:
    resources:
      cpu: "1024"
      memory: 2Ti
  providerRef:
    name: bottlerocket-is-system
  requirements:
  - key: node.kubernetes.io/instance-type
    operator: In
    values:
    - m5d.xlarge
    - m5d.x2large
    - m5d.4xlarge
  - key: topology.kubernetes.io/zone
    operator: In
    values:
    - eu-central-1a
    - eu-central-1b
    - eu-central-1c
  - key: kubernetes.io/arch
    operator: In
    values:
    - amd64
  - key: karpenter.sh/capacity-type
    operator: In
    values:
    - on-demand
  ttlSecondsAfterEmpty: 30
  ttlSecondsUntilExpired: 2592000
status:
  resources:
    attachable-volumes-aws-ebs: "200"
    cpu: "80"
    ephemeral-storage: 2880558096Ki
    github.com/fuse: 40k
    memory: 321997112Ki
    pods: "1168"
2022-11-21T14:42:16.898Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.example.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.example.cloud/system-ingress=true:NoExecute; incompatible with provisioner "example-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements domain.node.example.cloud/example-services-dev-tenant In [true], karpenter.sh/provisioner-name In [example-services-tenant], kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], node.kubernetes.io/instance-type In [m5d.4xlarge m5d.x2large m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c]	{"commit": "5d4ae35-dirty", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-5sddyhex-project-970-concurrent-0v8cxc"}
2022-11-21T14:42:16.898Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.example.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.example.cloud/system-ingress=true:NoExecute; incompatible with provisioner "example-services-tenant", no instance type satisfied resources {"cpu":"5","github.com/fuse":"3","memory":"6644Mi","pods":"1"} and requirements kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], node.kubernetes.io/instance-type In [m5d.4xlarge m5d.x2large m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], domain.node.example.cloud/example-services-dev-tenant In [true], karpenter.sh/provisioner-name In [example-services-tenant]	{"commit": "5d4ae35-dirty", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-f9fs-aet-project-27-concurrent-0n6jmg"}

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@haarchri haarchri added the bug Something isn't working label Nov 21, 2022
@bwagner5
Contributor

I'm not sure if this is the issue with the workload you are scheduling, but there's a typo in one of your instance types:

  - key: node.kubernetes.io/instance-type
    operator: In
    values:
    - m5d.xlarge
    - m5d.x2large
    - m5d.4xlarge
m5d.x2large -> m5d.2xlarge
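
For reference, the requirements entry with the instance type corrected would read:

  - key: node.kubernetes.io/instance-type
    operator: In
    values:
    - m5d.xlarge
    - m5d.2xlarge
    - m5d.4xlarge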

Are you able to post the Deployment spec that is not resulting in a new node?

@bwagner5 bwagner5 added question Issues that are support related questions and removed bug Something isn't working labels Nov 21, 2022
@haarchri
Author

haarchri commented Dec 4, 2022

We are now running v0.19.3 with the same issue: a lot of our GitLab jobs are not scheduled. Let me grep some manifests to dig deeper and find the root cause.
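
One way to pull the manifests of the pending pods for inspection (a minimal sketch; <pod-name> is a placeholder):

kubectl get pods -A --field-selector=status.phase=Pending -o name
kubectl -n k4-alpha-dev-gitlab-runner-gitlab-runner get pod <pod-name> -o yaml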

@haarchri
Author

haarchri commented Dec 4, 2022

@bwagner5 thanks, we fixed the typo in the provisioner and bumped Karpenter to v0.19.3, but we still have the problem. This is one example pod:

We removed env-specific config, but in general it looks like this:

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/psp: eks.privileged
      pod-cleanup.gitlab.com/ttl: 2h
      policies.kyverno.io/last-applied-patches: |
        k4-alpha-dev-gitlab-runner--podman-fuse-svc.k4-alpha-dev-gitlab-runner--podman-fuse-svc.kyverno.io: replaced
          /spec/containers/1/resources/requests/cpu
        k4-alpha-dev-gitlab-runner-podman-fuse-build.k4-alpha-dev-gitlab-runner--podman-fuse-build.kyverno.io: removed
          /spec/containers/1/command
    creationTimestamp: "2022-12-04T22:16:40Z"
    generateName: runner-wyassimf-project-712-concurrent-0
    labels:
      pod: runner-wyassimf-project-712-concurrent-0
    name: runner-wyassimf-project-712-concurrent-0k7q9m
    namespace: k4-alpha-dev-gitlab-runner-gitlab-runner
    resourceVersion: "335033541"
    uid: c7034fa1-5786-4fa1-8245-37277add52d2
  spec:
    affinity: {}
    containers:
    - command:
      - sh
      - -c
      - "if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [
        -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec
        /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh
        \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ];
        then\n\texec /bin/sh \nelif [ -x /busybox/sh ]; then\n\texec /busybox/sh \nelse\n\techo
        shell not found\n\texit 1\nfi\n\n"
      image: example.com/platform/ci-tools:6.1.0
      imagePullPolicy: IfNotPresent
      name: build
      resources:
        limits:
          cpu: "2"
          github.com/fuse: "1"
          memory: 4Gi
        requests:
          cpu: 500m
          github.com/fuse: "1"
          memory: 2Gi
      securityContext:
        capabilities:
          add:
          - SETFCAP
          - NET_ADMIN
          - NET_RAW
          - SYS_ADMIN
          - MKNOD
          - SYS_CHROOT
        privileged: true
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /scripts-712-8744199
        name: scripts
      - mountPath: /logs-712-8744199
        name: logs
      - mountPath: /builds
        name: repo
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-hxb84
        readOnly: true
      - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
        name: aws-iam-token
        readOnly: true
    - command:
      - sh
      - -c
      - "if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [
        -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec
        /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh
        \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ];
        then\n\texec /bin/sh \nelif [ -x /busybox/sh ]; then\n\texec /busybox/sh \nelse\n\techo
        shell not found\n\texit 1\nfi\n\n"
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: eu-central-1
      - name: AWS_REGION
        value: eu-central-1
      - name: AWS_ROLE_ARN
        value: arn:aws:iam::123456789101:role/k4-alpha-dev-gitlab-runner-gitlab-runner
      - name: AWS_WEB_IDENTITY_TOKEN_FILE
        value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
      image: registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-7178588d
      imagePullPolicy: IfNotPresent
      name: helper
      resources:
        limits:
          cpu: 900m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 500Mi
      securityContext:
        capabilities:
          add:
          - NET_RAW
          - SYS_ADMIN
          - MKNOD
          - SYS_CHROOT
          - SETFCAP
          - NET_ADMIN
        privileged: true
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /scripts-712-8744199
        name: scripts
      - mountPath: /logs-712-8744199
        name: logs
      - mountPath: /builds
        name: repo
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-hxb84
        readOnly: true
      - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
        name: aws-iam-token
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostAliases:
    - hostnames:
      - 123456789101.dkr.ecr.eu-central-1.amazonaws.com-devops-tools-tooling-docker-images-podman-runner
      - podman
      ip: 127.0.0.1
    imagePullSecrets:
    - name: docker-io-image-pull-secret
    - name: runner-wyassimf-project-712-concurrent-0kbrl5
    initContainers:
    - command:
      - sh
      - -c
      - touch /logs-712-8744199/output.log && (chmod 777 /logs-712-8744199/output.log
        || exit 0)
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: eu-central-1
      - name: AWS_REGION
        value: eu-central-1
      - name: AWS_ROLE_ARN
        value: arn:aws:iam::123456789101:role/k4-alpha-dev-gitlab-runner-gitlab-runner
      - name: AWS_WEB_IDENTITY_TOKEN_FILE
        value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
      image: registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-7178588d
      imagePullPolicy: IfNotPresent
      name: init-permissions
      resources:
        limits:
          cpu: 900m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 500Mi
      securityContext:
        capabilities:
          add:
          - MKNOD
          - SYS_CHROOT
          - SETFCAP
          - NET_ADMIN
          - NET_RAW
          - SYS_ADMIN
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /scripts-712-8744199
        name: scripts
      - mountPath: /logs-712-8744199
        name: logs
      - mountPath: /builds
        name: repo
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-hxb84
        readOnly: true
      - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
        name: aws-iam-token
        readOnly: true
    nodeName: ip-100-64-4-62.eu-central-1.compute.internal
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Never
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: k4-alpha-dev-gitlab-runner-gitlab-runner-gitlab-runner
    serviceAccountName: k4-alpha-dev-gitlab-runner-gitlab-runner-gitlab-runner
    terminationGracePeriodSeconds: 0
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: github.com/fuse
      operator: Exists
    volumes:
    - name: aws-iam-token
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            audience: sts.amazonaws.com
            expirationSeconds: 86400
            path: token
    - emptyDir: {}
      name: repo
    - configMap:
        defaultMode: 511
        name: runner-wyassimf-project-712-concurrent-0-scriptsn99fk
        optional: false
      name: scripts
    - emptyDir: {}
      name: logs
    - name: kube-api-access-hxb84
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@akesser

akesser commented Dec 5, 2022

Here is another example of a pod manifest that leads to the same error:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    pod-cleanup.gitlab.com/ttl: 2h
    policies.kyverno.io/last-applied-patches: |
      k4-alpha-dev-gitlab-runner--podman-fuse-svc.k4-alpha-dev-gitlab-runner--podman-fuse-svc.kyverno.io: replaced
        /spec/containers/1/image
      k4-alpha-dev-gitlab-runner-podman-fuse-build.k4-alpha-dev-gitlab-runner--podman-fuse-build.kyverno.io: removed
        /spec/containers/1/command
  creationTimestamp: "2022-12-05T09:57:55Z"
  generateName: runner-wyassimf-project-25-concurrent-3
  labels:
    pod: runner-wyassimf-project-25-concurrent-3
  name: runner-wyassimf-project-25-concurrent-37jkbf
  namespace: k4-alpha-dev-gitlab-runner-gitlab-runner
  resourceVersion: "335850895"
  uid: 82bc87e7-2620-4487-bc17-1e96a50dfeb0
spec:
  affinity: {}
  containers:
  - command:
    - sh
    - -c
    - "if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x
      /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec
      /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif
      [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec
      /bin/sh \nelif [ -x /busybox/sh ]; then\n\texec /busybox/sh \nelse\n\techo shell
      not found\n\texit 1\nfi\n\n"
    image: registry.dev.sh/banking-platform/tooling-docker-images/x-ray-tools:22.04.25
    imagePullPolicy: IfNotPresent
    name: build
    resources:
      limits:
        cpu: "2"
        github.com/fuse: "1"
        memory: 4Gi
      requests:
        cpu: 500m
        github.com/fuse: "1"
        memory: 2Gi
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - SYS_ADMIN
        - MKNOD
        - SYS_CHROOT
        - SETFCAP
      privileged: true
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /scripts-25-8747526
      name: scripts
    - mountPath: /logs-25-8747526
      name: logs
    - mountPath: /builds
      name: repo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-xv7rp
      readOnly: true
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true
  - command:
    - sh
    - -c
    - "if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x
      /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec
      /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif
      [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec
      /bin/sh \nelif [ -x /busybox/sh ]; then\n\texec /busybox/sh \nelse\n\techo shell
      not found\n\texit 1\nfi\n\n"
    image: registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-7178588d
    imagePullPolicy: IfNotPresent
    name: helper
    resources:
      limits:
        cpu: 900m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN
        - MKNOD
        - SYS_CHROOT
        - SETFCAP
        - NET_ADMIN
        - NET_RAW
      privileged: true
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /scripts-25-8747526
      name: scripts
    - mountPath: /logs-25-8747526
      name: logs
    - mountPath: /builds
      name: repo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-xv7rp
      readOnly: true
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostAliases:
  - hostnames:
    - 12345.dkr.ecr.eu-central-1.amazonaws.com-devops-tools-tooling-docker-images-podman-runner
    - podman
    ip: 127.0.0.1
  imagePullSecrets:
  - name: docker-io-image-pull-secret
  - name: runner-wyassimf-project-25-concurrent-3wfv9b
  initContainers:
  - command:
    - sh
    - -c
    - touch /logs-25-8747526/output.log && (chmod 777 /logs-25-8747526/output.log
      || exit 0)
    image: registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-7178588d
    imagePullPolicy: IfNotPresent
    name: init-permissions
    resources:
      limits:
        cpu: 900m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    securityContext:
      capabilities:
        add:
        - MKNOD
        - SYS_CHROOT
        - SETFCAP
        - NET_ADMIN
        - NET_RAW
        - SYS_ADMIN
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /scripts-25-8747526
      name: scripts
    - mountPath: /logs-25-8747526
      name: logs
    - mountPath: /builds
      name: repo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-xv7rp
      readOnly: true
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true
  nodeName: ip-100-64-36-105.eu-central-1.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: k4-alpha-dev-gitlab-runner-gitlab-runner-gitlab-runner
  serviceAccountName: k4-alpha-dev-gitlab-runner-gitlab-runner-gitlab-runner
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: github.com/fuse
    operator: Exists
  volumes:
  - name: aws-iam-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token
  - emptyDir: {}
    name: repo
  - configMap:
      defaultMode: 511
      name: runner-wyassimf-project-25-concurrent-3-scriptsxflxq
      optional: false
    name: scripts
  - emptyDir: {}
    name: logs
  - name: kube-api-access-xv7rp
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace

and the corresponding log from Karpenter:

2022-12-05T09:58:42.859Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.859Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.859Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.867Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.867Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.868Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.874Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.875Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.875Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.880Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.880Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.881Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.886Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.886Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.886Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.891Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.891Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.891Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.896Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.896Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.896Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.901Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.901Z	DEBUG	controller.provisioning	144 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.902Z	DEBUG	controller.provisioning	14 out of 465 instance types were excluded because they would breach provisioner limits	{"commit": "27a51c0"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"7","github.com/fuse":"4","memory":"8692Mi","pods":"1"} and requirements kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant], node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c]	{"commit": "27a51c0", "pod": "k4-alpha-dev2-gitlab-runner-gitlab-runner/runner-ng54vhki-project-25-concurrent-5xpbgk"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"7","github.com/fuse":"4","memory":"8692Mi","pods":"1"} and requirements node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant]	{"commit": "27a51c0", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-wyassimf-project-25-concurrent-4j6s8g"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant]	{"commit": "27a51c0", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-wyassimf-project-25-concurrent-0zrlmw"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant], node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], kubernetes.io/arch In [amd64]	{"commit": "27a51c0", "pod": "k4-alpha-dev2-gitlab-runner-gitlab-runner/runner-ng54vhki-project-25-concurrent-3r5kvl"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant], node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c]	{"commit": "27a51c0", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-wyassimf-project-25-concurrent-124x97"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant], node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge]	{"commit": "27a51c0", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-wyassimf-project-25-concurrent-2zxlqc"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant], node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], kubernetes.io/arch In [amd64]	{"commit": "27a51c0", "pod": "k4-alpha-dev2-gitlab-runner-gitlab-runner/runner-ng54vhki-project-25-concurrent-425rs2"}
2022-12-05T09:58:42.908Z	ERROR	controller.provisioning	Could not schedule pod, incompatible with provisioner "core-system", did not tolerate domain.node.cloud/system-core=true:NoExecute; incompatible with provisioner "ingress-system", did not tolerate domain.node.cloud/system-ingress=true:NoExecute; incompatible with provisioner "internal-services-tenant", no instance type satisfied resources {"cpu":"3","github.com/fuse":"2","memory":"4596Mi","pods":"1"} and requirements node.kubernetes.io/instance-type In [m5d.2xlarge m5d.4xlarge m5d.xlarge], topology.kubernetes.io/zone In [eu-central-1a eu-central-1b eu-central-1c], kubernetes.io/arch In [amd64], karpenter.sh/capacity-type In [on-demand], kubernetes.io/os In [linux], domain.node.cloud/internal-services-dev-tenant In [true], karpenter.sh/provisioner-name In [internal-services-tenant]	{"commit": "27a51c0", "pod": "k4-alpha-dev-gitlab-runner-gitlab-runner/runner-wyassimf-project-25-concurrent-37jkbf"}

@tzneal
Contributor

tzneal commented Dec 5, 2022

There is a github.com/fuse resource on your pod. Karpenter isn't aware of that resource and doesn't know which instance types would report that resource if it launched them, so it doesn't launch any instances.

See issue kubernetes-sigs/karpenter#751
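
For context: extended resources like github.com/fuse are normally advertised on a node only after it joins the cluster, either by a device plugin or by patching the node's status. A minimal sketch of the status-patch route (assuming kubectl proxy is running on localhost:8001; <node-name> and the value 40 are placeholders) illustrates why Karpenter cannot anticipate this capacity when deciding which instance type to launch:

# Advertise 40 units of the github.com/fuse extended resource on an existing
# node ("/" in the resource name is escaped as "~1" in the JSON Pointer path).
# The capacity only exists once this patch (or a device plugin) has run on a
# live node, so it is invisible to Karpenter at provisioning time.
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/github.com~1fuse", "value": "40"}]' \
  http://localhost:8001/api/v1/nodes/<node-name>/status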
