
The podgroup is not deleted after the pod created by Deployment is deleted #1853

Closed
hansongChina opened this issue Nov 24, 2021 · 9 comments
Labels
help wanted - Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug - Categorizes issue or PR as related to a bug.
lifecycle/stale - Denotes an issue or PR has remained open with no activity and has become stale.
priority/important-soon - Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@hansongChina

What happened:
When a change to a Deployment triggers the creation of a new ReplicaSet and the Pod created by the old ReplicaSet is deleted, the PodGroup of the old Pod is not deleted.

What you expected to happen:

The PodGroup of the old Pod is deleted.

How to reproduce it (as minimally and precisely as possible):

  • step1: Add the default volcano scheduler to the Pod spec using an admission webhook, for example (a fuller sketch follows this snippet):

patches = append(patches, patchOperation{
    Op:    "add",
    Path:  "/spec/schedulerName",
    Value: "volcano",
})
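
For context, here is a minimal, self-contained sketch of where such a patch could live in a mutating admission webhook. The patchOperation type and the mutatePod handler are illustrative assumptions, not Volcano's actual webhook code; only the JSON patch for /spec/schedulerName matches the snippet above.

package webhook

import (
    "encoding/json"

    admissionv1 "k8s.io/api/admission/v1"
    corev1 "k8s.io/api/core/v1"
)

// patchOperation is a single JSON-patch entry (illustrative definition,
// mirroring the snippet above).
type patchOperation struct {
    Op    string      `json:"op"`
    Path  string      `json:"path"`
    Value interface{} `json:"value,omitempty"`
}

// mutatePod builds an AdmissionResponse that forces the Pod to be scheduled
// by volcano. Hypothetical handler, shown only to put the patch in context.
func mutatePod(req *admissionv1.AdmissionRequest) (*admissionv1.AdmissionResponse, error) {
    var pod corev1.Pod
    if err := json.Unmarshal(req.Object.Raw, &pod); err != nil {
        return nil, err
    }

    var patches []patchOperation
    if pod.Spec.SchedulerName != "volcano" {
        patches = append(patches, patchOperation{
            Op:    "add",
            Path:  "/spec/schedulerName",
            Value: "volcano",
        })
    }

    patchBytes, err := json.Marshal(patches)
    if err != nil {
        return nil, err
    }
    patchType := admissionv1.PatchTypeJSONPatch
    return &admissionv1.AdmissionResponse{
        UID:       req.UID,
        Allowed:   true,
        Patch:     patchBytes,
        PatchType: &patchType,
    }, nil
}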

  • step2: Create a Deployment, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
kubectl apply -f nginx-deployment.yaml


kubectl edit pod nginx-deployment-6b474476c4-v6k4x


kubectl edit pg  podgroup-6f08b2ae-296e-43a1-825c-84e483c3ea89


  • step3: Modify image: nginx:1.14.2 in the Deployment to trigger the creation of a new ReplicaSet and a new Pod. The old Pod is deleted, but the PodGroup associated with the old Pod still exists in the Inqueue state (a possible cleanup direction is sketched after this step).
kubectl edit pg  podgroup-6f08b2ae-296e-43a1-825c-84e483c3ea89

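
Not a statement of the actual fix, just a minimal sketch of one possible cleanup direction, assuming Volcano binds Pods to their PodGroup through the scheduling.k8s.io/group-name annotation and publishes its generated clientset at volcano.sh/apis/pkg/client/clientset/versioned (both are assumptions to verify against the Volcano version in use): when a Pod is deleted, check whether any remaining Pod still references the same PodGroup, and delete the PodGroup if nothing does. Names and error handling are simplified.

package pgcleanup

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"

    vcclient "volcano.sh/apis/pkg/client/clientset/versioned"
)

// groupNameAnnotation is the Pod annotation assumed to name the owning PodGroup.
const groupNameAnnotation = "scheduling.k8s.io/group-name"

// cleanupOrphanedPodGroup deletes the PodGroup referenced by a just-deleted Pod
// when no other Pod in the namespace still points at it (hypothetical helper).
func cleanupOrphanedPodGroup(ctx context.Context, kubeClient kubernetes.Interface,
    vcClient vcclient.Interface, deleted *corev1.Pod) error {

    pgName := deleted.Annotations[groupNameAnnotation]
    if pgName == "" {
        return nil // the Pod was not managed through a PodGroup
    }

    pods, err := kubeClient.CoreV1().Pods(deleted.Namespace).List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for i := range pods.Items {
        p := &pods.Items[i]
        if p.UID != deleted.UID && p.Annotations[groupNameAnnotation] == pgName {
            return nil // the PodGroup is still in use by another Pod
        }
    }

    // No remaining Pod references the PodGroup; delete it so it does not
    // stay in the Inqueue state forever.
    return vcClient.SchedulingV1beta1().PodGroups(deleted.Namespace).
        Delete(ctx, pgName, metav1.DeleteOptions{})
}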

Anything else we need to know?:

A workload whose kind is Workflow runs into the same problem.

Environment:

  • Volcano Version: v1.4.0

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:56:40Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:47:43Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
    (provided as screenshots)

  • OS (e.g. from /etc/os-release):
    NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:7"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"

    CENTOS_MANTISBT_PROJECT="CentOS-7"
    CENTOS_MANTISBT_PROJECT_VERSION="7"
    REDHAT_SUPPORT_PRODUCT="centos"
    REDHAT_SUPPORT_PRODUCT_VERSION="7"

@hansongChina hansongChina added the kind/bug Categorizes issue or PR as related to a bug. label Nov 24, 2021
@hansongChina
Author

A workload whose kind is Workflow runs into the same problem.

@Thor-wl Thor-wl added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 24, 2021
@Thor-wl Thor-wl added this to the v1.5 milestone Nov 24, 2021
@Thor-wl Thor-wl added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Nov 25, 2021
@k82cn
Member

k82cn commented Jan 7, 2022

For a Deployment, we should create the PodGroup for the Deployment instead of the ReplicaSet :)
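
A minimal sketch of that idea, assuming the PodGroup controller has a client-go clientset at hand: walk the Pod's controller references up from the ReplicaSet to the Deployment and use the Deployment as the PodGroup's owner, so the group follows the Deployment's lifecycle instead of being tied to a ReplicaSet that a rolling update leaves behind. Function and variable names are illustrative, not Volcano's actual controller code.

package pgowner

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// podGroupOwner returns the OwnerReference a PodGroup should carry for the
// given Pod: the Deployment when the Pod is managed by a ReplicaSet that
// itself belongs to a Deployment, otherwise the Pod's direct controller.
func podGroupOwner(ctx context.Context, client kubernetes.Interface,
    pod *corev1.Pod) (*metav1.OwnerReference, error) {

    ctrl := metav1.GetControllerOf(pod)
    if ctrl == nil {
        return nil, nil // bare Pod: the PodGroup can reference the Pod itself
    }
    if ctrl.Kind != "ReplicaSet" {
        return ctrl, nil
    }

    rs, err := client.AppsV1().ReplicaSets(pod.Namespace).Get(ctx, ctrl.Name, metav1.GetOptions{})
    if err != nil {
        return nil, err
    }
    rsCtrl := metav1.GetControllerOf(rs)
    if rsCtrl == nil || rsCtrl.Kind != "Deployment" {
        return ctrl, nil // ReplicaSet without a Deployment: keep the ReplicaSet
    }

    // Point the PodGroup at the Deployment so ReplicaSets rotated out by a
    // rolling update do not leave orphaned PodGroups behind.
    isController := true
    return &metav1.OwnerReference{
        APIVersion: appsv1.SchemeGroupVersion.String(),
        Kind:       "Deployment",
        Name:       rsCtrl.Name,
        UID:        rsCtrl.UID,
        Controller: &isController,
    }, nil
}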

@D0m021ng

A normal Pod runs into the same problem.

@Thor-wl
Contributor

Thor-wl commented Jan 21, 2022

Thanks for all the feedback. This fix will be merged into v1.5.0.
/assign @lucming

@volcano-sh-bot
Contributor

@Thor-wl: GitHub didn't allow me to assign the following users: lucming.

Note that only volcano-sh members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

Thanks for all the feedback. This fix will be merged into v1.5.0.
/assign @lucming

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Thor-wl
Contributor

Thor-wl commented Jan 21, 2022

A normal Pod runs into the same problem.

Can you give the reproduction steps? I've had a try, but it works fine on my side.

@stale

stale bot commented Apr 24, 2022

Hello 👋 Looks like there was no activity on this issue for last 90 days.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there will be no activity for 60 days, this issue will be closed (we can always reopen an issue if we need!).

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2022
@k82cn k82cn modified the milestones: v1.5, v1.6 May 7, 2022
@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 7, 2022
@stale

stale bot commented Aug 10, 2022

Hello 👋 Looks like there was no activity on this issue for last 90 days.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there will be no activity for 60 days, this issue will be closed (we can always reopen an issue if we need!).

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 10, 2022
@stale

stale bot commented Oct 14, 2022

Closing for now as there was no activity for last 60 days after marked as stale, let us know if you need this to be reopened! 🤗

@stale stale bot closed this as completed Oct 14, 2022