
macvtap.network.kubevirt.io/ do not get removed when uninstalling macvtap #121

Open
tstirmllnl opened this issue Aug 14, 2024 · 2 comments
Labels
kind/bug lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tstirmllnl

What happened:
macvtap.network.kubevirt.io/ resources from previous runs stay behind on the node and are not cleared when macvtap is removed.

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                                  Requests    Limits
  --------                                  --------    ------
  cpu                                       702m (1%)   770m (1%)
  memory                                    815Mi (0%)  320Mi (0%)
  ephemeral-storage                         0 (0%)      0 (0%)
  hugepages-1Gi                             0 (0%)      0 (0%)
  hugepages-2Mi                             0 (0%)      0 (0%)
  devices.kubevirt.io/kvm                   0           0
  macvtap.network.kubevirt.io/dataplane     0           0
  macvtap.network.kubevirt.io/dataplanea    0           0
  macvtap.network.kubevirt.io/dataplaneab   0           0

What you expected to happen:
macvtap.network.kubevirt.io/ resources should no longer show up when running kubectl describe on a node after macvtap is uninstalled.

How to reproduce it (as minimally and precisely as possible):

  1. Install macvtap.
  2. Run kubectl describe node <node> to see the list of macvtap.network.kubevirt.io/ resources.
  3. Remove macvtap from the cluster.
  4. Run kubectl describe node <node> again; the macvtap.network.kubevirt.io/ resources are still listed.
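Until the uninstall cleans these up automatically, the stale entries can be removed by hand. A minimal workaround sketch, following the standard Kubernetes procedure for deleting an extended resource from a node's status (the node name, proxy port, and the `dataplane` resource name are placeholders taken from the output above, not part of any official macvtap cleanup):

```shell
# Open a local proxy to the API server (assumes kubectl is configured for the cluster).
kubectl proxy --port=8001 &

# Remove one stale extended resource from the node's capacity via JSON-Patch.
# Per RFC 6901, the '/' inside the resource name must be escaped as '~1'.
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "remove", "path": "/status/capacity/macvtap.network.kubevirt.io~1dataplane"}]' \
  http://localhost:8001/api/v1/nodes/<node>/status
```

Repeat the PATCH for each leftover resource name shown by kubectl describe node.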


Environment:

  • KubeVirt version (use virtctl version): 1.1.0
  • Kubernetes version (use kubectl version): 1.23.9
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: Baremetal
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-1160.11.1.el7.x86_64
  • Other Tools: Multus Thick client(4.0.2)
@kubevirt-bot
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 12, 2024
@kubevirt-bot
Collaborator

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 12, 2024