
unable to disable preinstalled bridge CNI(s) #15797

Closed

simondrake opened this issue Feb 7, 2023 · 9 comments

Labels: kind/bug, lifecycle/rotten

simondrake commented Feb 7, 2023

What Happened?

Due to a reboot, I had to destroy and recreate minikube. Without changing any of the settings, I keep getting the following error, and no amount of Googling so far has helped me determine what the problem is.

E0207 14:18:49.957299    7777 start.go:415] unable to disable preinstalled bridge CNI(s): failed to configure non-podman bridge cni configs in "/etc/cni/net.d": sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;: exit status 1
stdout:

stderr:
find: ‘/etc/cni/net.d’: No such file or directory

❌  Exiting due to RUNTIME_ENABLE: error copying tempfile /tmp/minikube4087305689 to dst /etc/docker/daemon.json: sudo cp -a /tmp/minikube4087305689 /etc/docker/daemon.json: exit status 1
stdout:

stderr:
cp: cannot create regular file '/etc/docker/daemon.json': No such file or directory

I'm running minikube in a Multipass VM, but the Multipass config also hasn't been changed.

Is this a known issue / am I doing something stupid here?

Attach the log file

* 
* ==> Audit <==
* |---------|--------------------------------------------------------------|----------|--------|---------|---------------------|----------|
| Command |                             Args                             | Profile  |  User  | Version |     Start Time      | End Time |
|---------|--------------------------------------------------------------|----------|--------|---------|---------------------|----------|
| start   | --extra-config=apiserver.service-node-port-range=30000-39999 | minikube | ubuntu | v1.29.0 | 07 Feb 23 14:07 GMT |          |
|         | --kubernetes-version 1.20.15 --driver=none                   |          |        |         |                     |          |
|---------|--------------------------------------------------------------|----------|--------|---------|---------------------|----------|

* 
* ==> Last Start <==
* Log file created at: 2023/02/07 14:07:59
Running on machine: minikube
Binary: Built with gc go1.19.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0207 14:07:59.431556    7356 out.go:296] Setting OutFile to fd 1 ...
I0207 14:07:59.431671    7356 out.go:348] isatty.IsTerminal(1) = true
I0207 14:07:59.431674    7356 out.go:309] Setting ErrFile to fd 2...
I0207 14:07:59.431676    7356 out.go:348] isatty.IsTerminal(2) = true
I0207 14:07:59.431748    7356 root.go:334] Updating PATH: /home/ubuntu/.minikube/bin
W0207 14:07:59.431811    7356 root.go:311] Error reading config file at /home/ubuntu/.minikube/config/config.json: open /home/ubuntu/.minikube/config/config.json: no such file or directory
I0207 14:07:59.431997    7356 out.go:303] Setting JSON to false
I0207 14:07:59.432577    7356 start.go:125] hostinfo: {"hostname":"minikube","uptime":73,"bootTime":1675778806,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-137-generic","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"344cc164-64c1-4eb9-84a1-aeecaf0f912d"}
I0207 14:07:59.432612    7356 start.go:135] virtualization:  guest
I0207 14:07:59.435041    7356 out.go:177] 😄  minikube v1.29.0 on Ubuntu 20.04 (arm64)
W0207 14:07:59.436271    7356 preload.go:295] Failed to list preload files: open /home/ubuntu/.minikube/cache/preloaded-tarball: no such file or directory
I0207 14:07:59.436315    7356 notify.go:220] Checking for updates...
I0207 14:07:59.436324    7356 driver.go:365] Setting default libvirt URI to qemu:///system
I0207 14:07:59.440900    7356 out.go:177] ✨  Using the none driver based on user configuration
I0207 14:07:59.442671    7356 start.go:296] selected driver: none
I0207 14:07:59.442680    7356 start.go:857] validating driver "none" against <nil>
I0207 14:07:59.442687    7356 start.go:868] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0207 14:07:59.442709    7356 start.go:1617] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
I0207 14:07:59.442800    7356 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
I0207 14:07:59.443024    7356 start_flags.go:386] Using suggested 2400MB memory alloc based on sys=9936MB, container=0MB
I0207 14:07:59.443103    7356 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
I0207 14:07:59.443117    7356 cni.go:84] Creating CNI manager for ""
I0207 14:07:59.443122    7356 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0207 14:07:59.443127    7356 start_flags.go:319] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2400 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.15 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:service-node-port-range Value:30000-39999} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/ubuntu:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0207 14:07:59.445700    7356 out.go:177] 👍  Starting control plane node minikube in cluster minikube
I0207 14:07:59.447563    7356 profile.go:148] Saving config to /home/ubuntu/.minikube/profiles/minikube/config.json ...
I0207 14:07:59.447640    7356 lock.go:35] WriteFile acquiring /home/ubuntu/.minikube/profiles/minikube/config.json: {Name:mk2516460eb1eb21ae2c39167b7eabd80c6fb4eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0207 14:07:59.447846    7356 cache.go:193] Successfully downloaded all kic artifacts
I0207 14:07:59.447871    7356 start.go:364] acquiring machines lock for minikube: {Name:mk827fcceb822ddd434756f789a161c4dd798db6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0207 14:07:59.447909    7356 start.go:368] acquired machines lock for "minikube" in 32.717µs
I0207 14:07:59.447922    7356 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2400 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.15 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:service-node-port-range Value:30000-39999} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.20.15 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/ubuntu:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m01 IP: Port:8443 KubernetesVersion:v1.20.15 ContainerRuntime:docker ControlPlane:true Worker:true}
I0207 14:07:59.447952    7356 start.go:125] createHost starting for "m01" (driver="none")
I0207 14:07:59.449605    7356 out.go:177] 🤹  Running on localhost (CPUs=6, Memory=9936MB, Disk=99048MB) ...
I0207 14:07:59.450847    7356 exec_runner.go:51] Run: systemctl --version
I0207 14:07:59.451938    7356 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0207 14:07:59.451963    7356 client.go:168] LocalClient.Create starting
I0207 14:07:59.452019    7356 main.go:141] libmachine: Creating CA: /home/ubuntu/.minikube/certs/ca.pem
I0207 14:07:59.536787    7356 main.go:141] libmachine: Creating client certificate: /home/ubuntu/.minikube/certs/cert.pem
I0207 14:07:59.720294    7356 client.go:171] LocalClient.Create took 268.321291ms
I0207 14:07:59.720322    7356 start.go:167] duration metric: libmachine.API.Create for "minikube" took 268.38418ms
I0207 14:07:59.720328    7356 start.go:300] post-start starting for "minikube" (driver="none")
I0207 14:07:59.720333    7356 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0207 14:07:59.720355    7356 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0207 14:07:59.723670    7356 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0207 14:07:59.723682    7356 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0207 14:07:59.723686    7356 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0207 14:07:59.726061    7356 out.go:177] ℹ️  OS release is Ubuntu 20.04.5 LTS
I0207 14:07:59.727607    7356 filesync.go:126] Scanning /home/ubuntu/.minikube/addons for local assets ...
I0207 14:07:59.727633    7356 filesync.go:126] Scanning /home/ubuntu/.minikube/files for local assets ...
I0207 14:07:59.727642    7356 start.go:303] post-start completed in 7.312154ms
I0207 14:07:59.728042    7356 profile.go:148] Saving config to /home/ubuntu/.minikube/profiles/minikube/config.json ...
I0207 14:07:59.728106    7356 start.go:128] duration metric: createHost completed in 280.151348ms
I0207 14:07:59.728109    7356 start.go:83] releasing machines lock for "minikube", held for 280.196793ms
I0207 14:07:59.728267    7356 exec_runner.go:51] Run: cat /version.json
I0207 14:07:59.728523    7356 exec_runner.go:51] Run: curl -sS -m 2 https://k8s.gcr.io/
W0207 14:07:59.728845    7356 start.go:396] Unable to open version.json: cat /version.json: exit status 1
stdout:

stderr:
cat: /version.json: No such file or directory
I0207 14:07:59.728869    7356 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0207 14:07:59.729903    7356 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0207 14:07:59.729932    7356 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
E0207 14:07:59.733709    7356 start.go:415] unable to disable preinstalled bridge CNI(s): failed to configure non-podman bridge cni configs in "/etc/cni/net.d": sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;: exit status 1
stdout:

stderr:
find: ‘/etc/cni/net.d’: No such file or directory
I0207 14:07:59.733759    7356 start.go:483] detecting cgroup driver to use...
I0207 14:07:59.733779    7356 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0207 14:07:59.733838    7356 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0207 14:07:59.741250    7356 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0207 14:07:59.745169    7356 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0207 14:07:59.748590    7356 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0207 14:07:59.748612    7356 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0207 14:07:59.751707    7356 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0207 14:07:59.754850    7356 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0207 14:07:59.758033    7356 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0207 14:07:59.761288    7356 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0207 14:07:59.764027    7356 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0207 14:07:59.767081    7356 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0207 14:07:59.771753    7356 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0207 14:07:59.774404    7356 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0207 14:07:59.955673    7356 exec_runner.go:51] Run: sudo systemctl restart containerd
I0207 14:07:59.995675    7356 start.go:483] detecting cgroup driver to use...
I0207 14:07:59.995700    7356 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0207 14:07:59.995785    7356 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0207 14:08:00.007723    7356 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0207 14:08:00.181902    7356 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0207 14:08:00.357515    7356 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0207 14:08:00.357538    7356 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (144 bytes)
I0207 14:08:00.357619    7356 exec_runner.go:51] Run: sudo cp -a /tmp/minikube27319471 /etc/docker/daemon.json
I0207 14:08:00.363661    7356 out.go:177] 
W0207 14:08:00.364901    7356 out.go:239] ❌  Exiting due to RUNTIME_ENABLE: error copying tempfile /tmp/minikube27319471 to dst /etc/docker/daemon.json: sudo cp -a /tmp/minikube27319471 /etc/docker/daemon.json: exit status 1
stdout:

stderr:
cp: cannot create regular file '/etc/docker/daemon.json': No such file or directory

W0207 14:08:00.364929    7356 out.go:239] 
W0207 14:08:00.365644    7356 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0207 14:08:00.368451    7356 out.go:177] 

Operating System

macOS (Default)

Driver

N/A

Cirrus-8691 commented Feb 14, 2023

See #4172.
Just start minikube with a previous version of Kubernetes. Sample:

$ sudo minikube start --kubernetes-version=v1.25.6 --driver=none

simondrake (Author) commented

This was seen using version 1.20.15 and even 1.18, so dropping to a previous version doesn't fix it.

What did fix it was just making the directory and touching the file.
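
For reference, that amounts to something like the following (the exact paths are inferred from the two stderr messages above; the original comment doesn't spell them out):

$ sudo mkdir -p /etc/cni/net.d          # directory the failing find command expects
$ sudo mkdir -p /etc/docker             # parent directory for daemon.json
$ sudo touch /etc/docker/daemon.json    # file the failing cp wanted to create

After that, minikube start can write its Docker configuration and the CNI cleanup step has a directory to scan.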

afbjorklund added the kind/bug label Mar 31, 2023
afbjorklund (Collaborator) commented

What did fix it was just making the directory and touching the file.

Minikube should survive /etc/cni/net.d being empty or nonexistent.
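
Something like the following guard would express that expectation (a shell sketch of my own, not minikube's actual Go code):

# Hypothetical pre-check: a missing /etc/cni/net.d means there is
# nothing to disable, so skip the bridge-CNI rewrite instead of failing.
if [ ! -d /etc/cni/net.d ]; then
    echo "no /etc/cni/net.d; nothing to disable"
    exit 0
fi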

prakash962930 commented

ubuntu@ip-172-31-41-176:~$ minikube start --driver=none

  • minikube v1.30.1 on Ubuntu 22.04 (xen/amd64)
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Updating the running none "minikube" bare metal machine ...
  • OS release is Ubuntu 22.04.2 LTS
    E0408 01:38:52.709498 136141 start.go:413] unable to disable preinstalled bridge CNI(s): failed to disable all bridge cni configs in "/etc/cni/net.d": sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name bridge -or -name podman ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;: exit status 1
    stdout:

stderr:
find: ‘/etc/cni/net.d’: No such file or directory

  • Preparing Kubernetes v1.26.3 on Docker 23.0.3 ...
    • kubelet.resolv-conf=/run/systemd/resolve/resolv.conf

AleFraMa commented

Hi @prakash962930,

Did you solve this error?

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jan 23, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 22, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned Mar 23, 2024