
Regression with PVCs and node affinity #2796

Closed
jpds opened this issue Jun 9, 2020 · 4 comments · Fixed by #2768
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

jpds commented Jun 9, 2020

Expected Behavior

I have a pipeline that works fine on 0.12.1 but fails on 0.13.0.

Actual Behavior

The pod for the pipeline fails to schedule with:

Events:
  Type     Reason             Age                From                Message
  ----     ------             ----               ----                -------
  Normal   NotTriggerScaleUp  72s                cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match node selector
  Warning  FailedScheduling   70s (x4 over 75s)  default-scheduler   0/1 nodes are available: 1 node(s) didn't match node selector.

Steps to Reproduce the Problem

  1. Deploy 0.13.0:
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.13.0/release.yaml
  2. Apply this pipeline:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test-task
spec:
  steps:
    - name: test
      image: alpine:3.11.6
      script: |
        #!/bin/sh
        set -e
        echo "$(date -R): Test"
  workspaces:
  - name: test-dir
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: volume-pipeline
spec:
  tasks:
    - name: test
      taskRef:
        name: test-task
      workspaces:
        - name: test-dir
          workspace: volume-workspace
  workspaces:
  - name: volume-workspace
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: volume-pipelinerun
spec:
  pipelineRef:
    name: volume-pipeline
  podTemplate:
    nodeSelector:
      cloud.google.com/gke-nodepool: test-pool
    volumes:
    - name: volume-storage
  timeout: 2h
  workspaces:
    - name: volume-workspace
      volumeClaimTemplate:
        metadata:
          name: mypvc
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
  3. Pod fails to start with the above error.

  4. Remove 0.13.0.

  5. Install 0.12.1.

  6. Apply the above pipeline.

  7. Autoscaling succeeds:

Events:
  Type     Reason            Age                From                Message
  ----     ------            ----               ----                -------
  Warning  FailedScheduling  11s (x3 over 15s)  default-scheduler   0/1 nodes are available: 1 node(s) didn't match node selector.
  Normal   TriggeredScaleUp  11s                cluster-autoscaler  pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/project-jklasdfk334/zones/us-west1-a/instanceGroups/gke-test-regression-test-pool-48f4fc0a-grp 0->1 (max: 1)}]

Additional Info

  • Kubernetes version: 1.16.8-gke.15

  • Tekton Pipeline version: 0.13.0

jlpettersson (Member) commented Jun 9, 2020

1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod affinity rules

Hi, yes, this is a known bug.

Unfortunately, the combined length of the PipelineRun name and the Workspace name can only be up to 34 characters; in your case it is 35. You can use a shorter PipelineRun name or Workspace name, or disable the affinity assistant.
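Disabling the affinity assistant is typically done through Tekton's feature-flags ConfigMap. A hedged sketch of the workaround mentioned above; the flag name and namespace are assumed from a standard Tekton install, so verify them against your release before applying:

```shell
# Sketch: disable the affinity assistant via Tekton's feature-flags ConfigMap.
# The flag name and the tekton-pipelines namespace are assumptions based on a
# standard install; check your deployed ConfigMap before running this for real.
PATCH='{"data":{"disable-affinity-assistant":"true"}}'
# Print the command instead of running it, so this sketch is safe to dry-run:
echo "kubectl patch configmap feature-flags -n tekton-pipelines --type merge -p '${PATCH}'"
```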

The bug will be fixed by #2768 in 0.13.1.

This is a duplicate of #2766.
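The 34-character limit described above can be checked before creating a PipelineRun. A minimal sketch; the assumption that the two names are joined by a one-character separator is mine, chosen because it matches the "35 chars" quoted above for this reproduction, and the helper names are hypothetical:

```python
# Hypothetical pre-flight check for the name-length limit described above.
# Assumption: the affinity assistant builds an identifier from the PipelineRun
# name and the Workspace name joined by one separator character, which matches
# the "35 chars" quoted above for this reproduction.
MAX_COMBINED_LEN = 34  # limit quoted in the comment above (Tekton 0.13.0)

def combined_name_length(pipelinerun_name: str, workspace_name: str) -> int:
    """Length of the two names plus an assumed one-character separator."""
    return len(pipelinerun_name) + 1 + len(workspace_name)

def names_fit(pipelinerun_name: str, workspace_name: str) -> bool:
    """True if the names stay within the quoted 34-character limit."""
    return combined_name_length(pipelinerun_name, workspace_name) <= MAX_COMBINED_LEN

# Names from the reproduction above:
print(combined_name_length("volume-pipelinerun", "volume-workspace"))  # 35
print(names_fit("volume-pipelinerun", "volume-workspace"))             # False
```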

vdemeester (Member):

/kind bug

@tekton-robot tekton-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 10, 2020
@vdemeester vdemeester added this to the Pipelines 0.13.1 🐱 milestone Jun 10, 2020
bobcatfish (Collaborator):

Closing as a duplicate of #2766; please re-open if there is a new issue here that we missed.

aeweidne commented Jul 10, 2020

I am experiencing this on 0.14.0. Never mind, I am hitting #2829 instead.
