HTTP proxy settings not supported for containerd #15596

Closed
cvila84 opened this issue Jan 5, 2023 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@cvila84

cvila84 commented Jan 5, 2023

What Happened?

During our tests to replace Docker with containerd as the container runtime, we can't start a minikube VM behind an HTTP proxy. It seems the proxy settings are not propagated in this case:

switch c.KubernetesConfig.ContainerRuntime {
case "crio", "cri-o":
	return setCrioOptions(p)
case "containerd":
	return nil
default:
	_, err := p.GenerateDockerOptions(engine.DefaultPort)
	return err
}

Kubernetes images cannot be pulled during kubeadm init because the direct connection times out:

I0105 17:08:41.263482   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.6: output: time="2023-01-05T16:08:41Z" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to resolve reference \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to do request: Head \"https://k8s.gcr.io/v2/coredns/coredns/manifests/v1.8.6\": dial tcp 64.233.167.82:443: i/o timeout"
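
For comparison, the default branch in the switch above appears to propagate the proxy settings into the engine's systemd unit via GenerateDockerOptions, while the containerd case returns nil without writing anything. Below is a minimal, self-contained sketch of what such a step could render for containerd (standalone Go, not minikube's actual code; the function name, drop-in path and environment variable names are assumptions based on the conventional systemd/containerd setup):

// Standalone sketch (not minikube code): render the systemd drop-in that
// would let containerd reach registries through the host's HTTP proxy.
package main

import (
	"fmt"
	"os"
)

// containerdProxyDropIn builds the contents of a hypothetical
// /etc/systemd/system/containerd.service.d/http-proxy.conf.
func containerdProxyDropIn(httpProxy, httpsProxy, noProxy string) string {
	return fmt.Sprintf(`[Service]
Environment="HTTP_PROXY=%s"
Environment="HTTPS_PROXY=%s"
Environment="NO_PROXY=%s"
`, httpProxy, httpsProxy, noProxy)
}

func main() {
	// Use the same variables minikube already detects on the host.
	fmt.Print(containerdProxyDropIn(
		os.Getenv("HTTP_PROXY"),
		os.Getenv("HTTPS_PROXY"),
		os.Getenv("NO_PROXY"),
	))
	// Inside the VM this would be written to the drop-in path above and
	// followed by: systemctl daemon-reload && systemctl restart containerd
}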

Attach the log file

I0105 16:52:53.299154   28016 main.go:134] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0105 16:52:53.364294   28016 main.go:134] libmachine: STDOUT:
{
name="minikube"
Encryption:     disabled
groups="/"
ostype="Linux 2.6 / 3.x / 4.x / 5.x (64-bit)"
UUID="82bfa41a-d3fe-4c83-bf6e-54fca9eabb61"
CfgFile="C:\\Users\\cvila\\.minikube\\machines\\minikube\\minikube\\minikube.vbox"
SnapFldr="C:\\Users\\cvila\\.minikube\\machines\\minikube\\minikube\\Snapshots"
LogFldr="C:\\Users\\cvila\\.minikube\\machines\\minikube\\minikube\\Logs"
hardwareuuid="82bfa41a-d3fe-4c83-bf6e-54fca9eabb61"
memory=12288
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=6
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
BIOS NVRAM File="C:\\Users\\cvila\\.minikube\\machines\\minikube\\minikube\\minikube.nvram"
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
virtvmsavevmload="on"
iommu="none"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2023-01-05T15:52:20.225000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\\Users\\cvila\\.minikube\\machines\\minikube\\boot2docker.iso"
"SATA-ImageUUID-0-0"="69d81d0f-e337-426f-a578-3e69b0959477"
"SATA-tempeject-0-0"="off"
"SATA-IsEjected-0-0"="off"
"SATA-hot-pluggable-0-0"="off"
"SATA-nonrotational-0-0"="off"
"SATA-discard-0-0"="off"
"SATA-1-0"="C:\\Users\\cvila\\.minikube\\machines\\minikube\\disk.vmdk"
"SATA-ImageUUID-1-0"="340506b6-bdb5-4947-80b3-9806efa062e2"
"SATA-hot-pluggable-1-0"="off"
"SATA-nonrotational-1-0"="off"
"SATA-discard-1-0"="off"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027824671"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51504,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #2"
macaddress2="08002771EAF6"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="default"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\\\?\\c:\\Users"
VRDEActiveConnection="off"
VRDEClients==0
recording_enabled="off"
recording_screens=1
 rec_screen0
rec_screen_enabled="on"
rec_screen_id=0
rec_screen_video_enabled="on"
rec_screen_audio_enabled="off"
rec_screen_dest="File"
rec_screen_dest_filename="C:\\Users\\cvila\\.minikube\\machines\\minikube\\minikube\\minikube-screen0.webm"
rec_screen_opts="vc_enabled=true,ac_enabled=false,ac_profile=med"
rec_screen_video_res_xy="1024x768"
rec_screen_video_rate_kbps=512
rec_screen_video_fps=25
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="6.0.0 r127566"
GuestAdditionsFacility_VirtualBox Base Driver=50,1672933964758
GuestAdditionsFacility_VirtualBox System Service=50,1672933965038
GuestAdditionsFacility_Seamless Mode=0,1672933964756
GuestAdditionsFacility_Graphics Mode=0,1672933964756
}
I0105 16:52:53.364294   28016 main.go:134] libmachine: STDERR:
{
}
I0105 16:52:53.364294   28016 main.go:134] libmachine: Host-only MAC: 08002771eaf6

I0105 16:52:53.391007   28016 main.go:134] libmachine: SSH binary not found, using native Go implementation
I0105 16:52:53.391575   28016 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x8ec080] 0x8ef000 <nil>  [] 0s} 127.0.0.1 51504 <nil> <nil>}
I0105 16:52:53.391575   28016 main.go:134] libmachine: About to run SSH command:
ip addr show
I0105 16:52:53.500422   28016 main.go:134] libmachine: SSH cmd err, output: <nil>: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:82:46:71 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86392sec preferred_lft 86392sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:71:ea:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 592sec preferred_lft 592sec
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0

I0105 16:52:53.500422   28016 main.go:134] libmachine: SSH returned: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:82:46:71 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86392sec preferred_lft 86392sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:71:ea:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 592sec preferred_lft 592sec
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0

END SSH

I0105 16:52:53.504562   28016 out.go:177] 🌐  Found network options:
I0105 16:52:53.505788   28016 out.go:177]     ▪ HTTP_PROXY=http://192.168.1.3:3128
I0105 16:52:53.506822   28016 out.go:177]     ▪ HTTPS_PROXY=http://192.168.1.3:3128
I0105 16:52:53.507342   28016 out.go:177]     ▪ NO_PROXY=minikube,.minikube,.gemalto.com,.thales,.local,kubernetes.docker.internal,192.168.99.100,.192.168.99.100.nip.io,10.4.223.40,127.0.0.1,192.168.99.1
I0105 16:52:53.508423   28016 out.go:177]     ▪ http_proxy=http://192.168.1.3:3128
I0105 16:52:53.509462   28016 out.go:177]     ▪ https_proxy=http://192.168.1.3:3128
I0105 16:52:53.509980   28016 out.go:177]     ▪ no_proxy=minikube,.minikube,.gemalto.com,.thales,.local,kubernetes.docker.internal,192.168.99.100,.192.168.99.100.nip.io,10.4.223.40,127.0.0.1,192.168.99.1
W0105 16:52:56.512719   28016 start.go:701] dial failed (will retry): dial tcp 192.168.99.100:22: i/o timeout
I0105 16:52:56.512719   28016 retry.go:31] will retry after 1.104660288s: dial tcp 192.168.99.100:22: i/o timeout
W0105 16:53:00.618480   28016 start.go:701] dial failed (will retry): dial tcp 192.168.99.100:22: i/o timeout
I0105 16:53:00.618480   28016 retry.go:31] will retry after 2.160763633s: dial tcp 192.168.99.100:22: i/o timeout
I0105 16:53:02.786728   28016 ssh_runner.go:195] Run: curl -x http://192.168.1.3:3128 -sS -m 2 https://k8s.gcr.io/
I0105 16:53:02.786728   28016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51504 SSHKeyPath:C:\Users\cvila\.minikube\machines\minikube\id_rsa Username:docker}
I0105 16:53:02.809487   28016 ssh_runner.go:195] Run: systemctl --version
I0105 16:53:02.809991   28016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51504 SSHKeyPath:C:\Users\cvila\.minikube\machines\minikube\id_rsa Username:docker}
I0105 16:53:02.922743   28016 preload.go:132] Checking if preload exists for k8s version v1.24.8 and runtime containerd
I0105 16:53:02.949974   28016 ssh_runner.go:195] Run: sudo crictl images --output json
I0105 16:53:06.989733   28016 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.0397559s)
I0105 16:53:06.989733   28016 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.8". assuming images are not preloaded.
I0105 16:53:07.017538   28016 ssh_runner.go:195] Run: which lz4
I0105 16:53:07.048435   28016 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0105 16:53:07.052357   28016 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0105 16:53:07.052357   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.8-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (459137654 bytes)
I0105 16:53:15.178513   28016 containerd.go:496] Took 8.157458 seconds to copy over tarball
I0105 16:53:15.207699   28016 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0105 16:53:19.633870   28016 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.4261676s)
I0105 16:53:19.633870   28016 containerd.go:503] Took 4.455353 seconds t extract the tarball
I0105 16:53:19.633870   28016 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0105 16:53:19.714474   28016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0105 16:53:19.848797   28016 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0105 16:53:19.894781   28016 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0105 16:53:21.129126   28016 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (1.2343442s)
I0105 16:53:21.158191   28016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0105 16:53:21.170947   28016 docker.go:189] disabling docker service ...
I0105 16:53:21.199426   28016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0105 16:53:21.241118   28016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0105 16:53:21.279762   28016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0105 16:53:21.415146   28016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0105 16:53:21.553446   28016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0105 16:53:21.567520   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0105 16:53:21.583858   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0105 16:53:21.594608   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0105 16:53:21.606479   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0105 16:53:21.617089   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
I0105 16:53:21.627288   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/certs.d/10.10.0.0/16 && printf %s "c2VydmVyID0gImh0dHA6Ly8xMC4xMC4wLjAvMTYiCgpbaG9zdC4iaHR0cDovLzEwLjEwLjAuMC8xNiJdCiAgc2tpcF92ZXJpZnkgPSB0cnVlCg==" | base64 -d | sudo tee /etc/containerd/certs.d/10.10.0.0/16/hosts.toml"
I0105 16:53:21.674684   28016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0105 16:53:21.687171   28016 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0105 16:53:21.714799   28016 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0105 16:53:21.759532   28016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0105 16:53:21.796727   28016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0105 16:53:21.938680   28016 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0105 16:53:21.964600   28016 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I0105 16:53:21.992236   28016 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0105 16:53:21.996936   28016 retry.go:31] will retry after 1.164560053s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0105 16:53:23.189217   28016 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0105 16:53:23.193904   28016 start.go:472] Will wait 60s for crictl version
I0105 16:53:23.220355   28016 ssh_runner.go:195] Run: sudo crictl version
I0105 16:53:23.256763   28016 start.go:481] Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.8
RuntimeApiVersion:  v1alpha2
I0105 16:53:23.283710   28016 ssh_runner.go:195] Run: containerd --version
I0105 16:53:23.347963   28016 ssh_runner.go:195] Run: containerd --version
I0105 16:53:23.388149   28016 out.go:177] 📦  Preparing Kubernetes v1.24.8 on containerd 1.6.8...
I0105 16:53:23.391232   28016 out.go:177]     ▪ env http_proxy=http://192.168.1.3:3128
I0105 16:53:23.395947   28016 out.go:177]     ▪ env https_proxy=http://192.168.1.3:3128
I0105 16:53:23.400674   28016 out.go:177]     ▪ env no_proxy=minikube,.minikube,.gemalto.com,.thales,.local,kubernetes.docker.internal,192.168.99.100,.192.168.99.100.nip.io,10.4.223.40,127.0.0.1,192.168.99.1
I0105 16:53:23.404895   28016 out.go:177]     ▪ env HTTP_PROXY=http://192.168.1.3:3128
I0105 16:53:23.409009   28016 out.go:177]     ▪ env HTTPS_PROXY=http://192.168.1.3:3128
I0105 16:53:23.412166   28016 out.go:177]     ▪ env NO_PROXY=minikube,.minikube,.gemalto.com,.thales,.local,kubernetes.docker.internal,192.168.99.100,.192.168.99.100.nip.io,10.4.223.40,127.0.0.1,192.168.99.1
I0105 16:53:23.607741   28016 ssh_runner.go:195] Run: grep 192.168.99.1	host.minikube.internal$ /etc/hosts
I0105 16:53:23.611926   28016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.99.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0105 16:53:23.624372   28016 preload.go:132] Checking if preload exists for k8s version v1.24.8 and runtime containerd
I0105 16:53:23.652266   28016 ssh_runner.go:195] Run: sudo crictl images --output json
I0105 16:53:23.686325   28016 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/etcd:3.5.3-0". assuming images are not preloaded.
I0105 16:53:23.713909   28016 ssh_runner.go:195] Run: which lz4
I0105 16:53:23.747737   28016 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0105 16:53:23.752258   28016 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0105 16:53:23.752258   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.8-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (459137654 bytes)
I0105 16:53:31.707632   28016 containerd.go:496] Took 7.986879 seconds to copy over tarball
I0105 16:53:31.736141   28016 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0105 16:53:34.990644   28016 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.2545007s)
I0105 16:53:34.990644   28016 containerd.go:503] Took 3.283009 seconds t extract the tarball
I0105 16:53:34.990644   28016 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0105 16:53:35.074086   28016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0105 16:53:35.216580   28016 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0105 16:53:35.275066   28016 ssh_runner.go:195] Run: sudo crictl images --output json
I0105 16:53:36.328990   28016 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.0539227s)
I0105 16:53:36.329511   28016 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/etcd:3.5.3-0". assuming images are not preloaded.
I0105 16:53:36.329511   28016 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.8 k8s.gcr.io/kube-controller-manager:v1.24.8 k8s.gcr.io/kube-scheduler:v1.24.8 k8s.gcr.io/kube-proxy:v1.24.8 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0105 16:53:36.329511   28016 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.8
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.8
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.8
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0105 16:53:36.329511   28016 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.8
I0105 16:53:36.331076   28016 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.8: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/kube-apiserver:v1.24.8/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331076   28016 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.8: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/kube-proxy:v1.24.8/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331076   28016 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/gcr.io/k8s-minikube/storage-provisioner:v5/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331076   28016 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.8: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/kube-controller-manager:v1.24.8/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331076   28016 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/etcd:3.5.3-0/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331310   28016 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.8: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/kube-scheduler:v1.24.8/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331310   28016 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/pause:3.7/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:36.331310   28016 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/images/k8s.gcr.io/coredns/coredns:v1.8.6/json": open //./pipe/docker_engine: The system cannot find the file specified.
I0105 16:53:37.050205   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.8"
I0105 16:53:37.075811   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.8"
I0105 16:53:37.344181   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.8"
I0105 16:53:37.401632   28016 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.8" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.8" does not exist at hash "a49578203a3c297abb7fd4a545308c2f93c08f614e9e520ed8b1ef334f31289b" in container runtime
I0105 16:53:37.401632   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.24.8 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.24.8
I0105 16:53:37.401632   28016 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.8
I0105 16:53:37.434315   28016 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.8" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.8" does not exist at hash "c7cbaca6e63b40f119d6dcdb42f4b7ec966f2ec93b84d5d78d339c840678cee5" in container runtime
I0105 16:53:37.434315   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.24.8 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.24.8
I0105 16:53:37.434315   28016 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.8
I0105 16:53:37.437243   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I0105 16:53:37.444075   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:37.477953   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:37.621294   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.8"
I0105 16:53:37.628838   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0105 16:53:37.633752   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I0105 16:53:37.743891   28016 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.8" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.8" does not exist at hash "9e2bfc195de6b78c9e30d5e8d11ec84b1ea60632fb894c37547b0c3f46527786" in container runtime
I0105 16:53:37.743891   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.24.8 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.24.8
I0105 16:53:37.743891   28016 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.8
I0105 16:53:37.782251   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:37.822481   28016 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I0105 16:53:37.822481   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.3-0 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.3-0
I0105 16:53:37.822481   28016 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I0105 16:53:37.828970   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I0105 16:53:37.864158   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.8
I0105 16:53:37.864678   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.8
I0105 16:53:37.868448   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:38.014966   28016 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.8" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.8" does not exist at hash "9efa6dff568f60b9ca0a8843c498c8cfca47e62257720012e96869649928add1" in container runtime
I0105 16:53:38.014966   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.24.8 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.24.8
I0105 16:53:38.014966   28016 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.8
I0105 16:53:38.024784   28016 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0105 16:53:38.024784   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\cvila\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
I0105 16:53:38.024784   28016 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0105 16:53:38.048774   28016 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I0105 16:53:38.048774   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.7 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.7
I0105 16:53:38.048774   28016 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I0105 16:53:38.051919   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:38.060344   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:38.084361   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:38.084361   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.8
I0105 16:53:38.115163   28016 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0105 16:53:38.115163   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.24.8
I0105 16:53:38.115163   28016 localpath.go:146] windows sanitize: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
I0105 16:53:38.115163   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.24.8
I0105 16:53:38.115163   28016 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I0105 16:53:38.150083   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.24.8
I0105 16:53:38.158163   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.8
I0105 16:53:38.159200   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I0105 16:53:38.160761   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0105 16:53:38.161280   28016 ssh_runner.go:195] Run: which crictl
I0105 16:53:38.184273   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I0105 16:53:38.257367   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.24.8
I0105 16:53:38.257367   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.3-0
I0105 16:53:38.265007   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
I0105 16:53:38.265007   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.7
I0105 16:53:38.291908   28016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I0105 16:53:38.344257   28016 cache_images.go:286] Loading image from: C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
I0105 16:53:38.344257   28016 cache_images.go:92] LoadImages completed in 2.0147449s
W0105 16:53:38.344257   28016 out.go:239] ❌  Unable to load cached images: loading cached images: CreateFile C:\Users\cvila\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.24.8: The system cannot find the path specified.
I0105 16:53:38.372058   28016 ssh_runner.go:195] Run: sudo crictl info
I0105 16:53:38.408088   28016 cni.go:95] Creating CNI manager for ""
I0105 16:53:38.408088   28016 cni.go:165] "virtualbox" driver + containerd runtime found, recommending bridge
I0105 16:53:38.408088   28016 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0105 16:53:38.408088   28016 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.99.100 APIServerPort:8443 KubernetesVersion:v1.24.8 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:minikube.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.99.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.99.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0105 16:53:38.408088   28016 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.99.100
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.99.100
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.99.100"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.8
networking:
  dnsDomain: minikube.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "minikube.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0105 16:53:38.408088   28016 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.8/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=minikube --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.99.100 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.24.8 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:minikube.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0105 16:53:38.435282   28016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.8
I0105 16:53:38.444930   28016 binaries.go:44] Found k8s binaries, skipping transfer
I0105 16:53:38.471562   28016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0105 16:53:38.480861   28016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (503 bytes)
I0105 16:53:38.497502   28016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0105 16:53:38.514077   28016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
I0105 16:53:38.558514   28016 ssh_runner.go:195] Run: grep 192.168.99.100	control-plane.minikube.internal$ /etc/hosts
I0105 16:53:38.562366   28016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.99.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0105 16:53:38.574196   28016 certs.go:54] Setting up C:\Users\cvila\.minikube\profiles\minikube for IP: 192.168.99.100
I0105 16:53:38.574708   28016 certs.go:182] skipping minikubeCA CA generation: C:\Users\cvila\.minikube\ca.key
I0105 16:53:38.574708   28016 certs.go:182] skipping proxyClientCA CA generation: C:\Users\cvila\.minikube\proxy-client-ca.key
I0105 16:53:38.576500   28016 certs.go:302] generating minikube-user signed cert: C:\Users\cvila\.minikube\profiles\minikube\client.key
I0105 16:53:38.576500   28016 crypto.go:68] Generating cert C:\Users\cvila\.minikube\profiles\minikube\client.crt with IP's: []
I0105 16:53:38.743578   28016 crypto.go:156] Writing cert to C:\Users\cvila\.minikube\profiles\minikube\client.crt ...
I0105 16:53:38.743578   28016 lock.go:35] WriteFile acquiring C:\Users\cvila\.minikube\profiles\minikube\client.crt: {Name:mkf83a0e0119fafce76321dbe902f785118ee06b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0105 16:53:38.744554   28016 crypto.go:164] Writing key to C:\Users\cvila\.minikube\profiles\minikube\client.key ...
I0105 16:53:38.744554   28016 lock.go:35] WriteFile acquiring C:\Users\cvila\.minikube\profiles\minikube\client.key: {Name:mk22f9aa9ecbb5d5338e3cb3b2f04cdfade9da73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0105 16:53:38.745499   28016 certs.go:302] generating minikube signed cert: C:\Users\cvila\.minikube\profiles\minikube\apiserver.key.914d9d32
I0105 16:53:38.745499   28016 crypto.go:68] Generating cert C:\Users\cvila\.minikube\profiles\minikube\apiserver.crt.914d9d32 with IP's: [192.168.99.100 10.96.0.1 127.0.0.1 10.0.0.1]
I0105 16:53:39.008529   28016 crypto.go:156] Writing cert to C:\Users\cvila\.minikube\profiles\minikube\apiserver.crt.914d9d32 ...
I0105 16:53:39.008529   28016 lock.go:35] WriteFile acquiring C:\Users\cvila\.minikube\profiles\minikube\apiserver.crt.914d9d32: {Name:mkd8f5969d084dc9b1886f960f0f94ba09937ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0105 16:53:39.008529   28016 crypto.go:164] Writing key to C:\Users\cvila\.minikube\profiles\minikube\apiserver.key.914d9d32 ...
I0105 16:53:39.008529   28016 lock.go:35] WriteFile acquiring C:\Users\cvila\.minikube\profiles\minikube\apiserver.key.914d9d32: {Name:mk748d143377067a8d7204d435aad2717f1b64b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0105 16:53:39.009498   28016 certs.go:320] copying C:\Users\cvila\.minikube\profiles\minikube\apiserver.crt.914d9d32 -> C:\Users\cvila\.minikube\profiles\minikube\apiserver.crt
I0105 16:53:39.015354   28016 certs.go:324] copying C:\Users\cvila\.minikube\profiles\minikube\apiserver.key.914d9d32 -> C:\Users\cvila\.minikube\profiles\minikube\apiserver.key
I0105 16:53:39.016328   28016 certs.go:302] generating aggregator signed cert: C:\Users\cvila\.minikube\profiles\minikube\proxy-client.key
I0105 16:53:39.017531   28016 crypto.go:68] Generating cert C:\Users\cvila\.minikube\profiles\minikube\proxy-client.crt with IP's: []
I0105 16:53:39.317083   28016 crypto.go:156] Writing cert to C:\Users\cvila\.minikube\profiles\minikube\proxy-client.crt ...
I0105 16:53:39.317083   28016 lock.go:35] WriteFile acquiring C:\Users\cvila\.minikube\profiles\minikube\proxy-client.crt: {Name:mk2e4d785409a9569059f20f34dbd178ad49edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0105 16:53:39.318049   28016 crypto.go:164] Writing key to C:\Users\cvila\.minikube\profiles\minikube\proxy-client.key ...
I0105 16:53:39.318049   28016 lock.go:35] WriteFile acquiring C:\Users\cvila\.minikube\profiles\minikube\proxy-client.key: {Name:mkd13d15018ee9cb0b8c13b2098633ea196aed44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0105 16:53:39.325857   28016 certs.go:388] found cert: C:\Users\cvila\.minikube\certs\C:\Users\cvila\.minikube\certs\ca-key.pem (1675 bytes)
I0105 16:53:39.326834   28016 certs.go:388] found cert: C:\Users\cvila\.minikube\certs\C:\Users\cvila\.minikube\certs\ca.pem (1074 bytes)
I0105 16:53:39.326834   28016 certs.go:388] found cert: C:\Users\cvila\.minikube\certs\C:\Users\cvila\.minikube\certs\cert.pem (1119 bytes)
I0105 16:53:39.326834   28016 certs.go:388] found cert: C:\Users\cvila\.minikube\certs\C:\Users\cvila\.minikube\certs\key.pem (1675 bytes)
I0105 16:53:39.328101   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0105 16:53:39.355577   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0105 16:53:39.381525   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0105 16:53:39.407659   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0105 16:53:39.434785   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0105 16:53:39.460895   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0105 16:53:39.487569   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0105 16:53:39.515635   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0105 16:53:39.538034   28016 ssh_runner.go:362] scp C:\Users\cvila\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0105 16:53:39.562603   28016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0105 16:53:39.605478   28016 ssh_runner.go:195] Run: openssl version
I0105 16:53:39.637205   28016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0105 16:53:39.681555   28016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0105 16:53:39.686626   28016 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  2 09:58 /usr/share/ca-certificates/minikubeCA.pem
I0105 16:53:39.715436   28016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0105 16:53:39.748610   28016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0105 16:53:39.763344   28016 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:12288 CPUs:6 DiskSize:20480 VMDriver: Driver:virtualbox HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[http_proxy=http://192.168.1.3:3128 https_proxy=http://192.168.1.3:3128 no_proxy=minikube,.minikube,.gemalto.com,.thales,.local,kubernetes.docker.internal,192.168.99.100,.192.168.99.100.nip.io,10.4.223.40,127.0.0.1,192.168.99.1] ContainerVolumeMounts:[] InsecureRegistry:[10.10.0.0/16] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:false HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.8 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:minikube.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.99.100 Port:8443 KubernetesVersion:v1.24.8 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\cvila:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0105 16:53:39.763344   28016 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0105 16:53:39.790754   28016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0105 16:53:39.826875   28016 cri.go:87] found id: ""
I0105 16:53:39.854165   28016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0105 16:53:39.890931   28016 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0105 16:53:39.929198   28016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0105 16:53:39.938625   28016 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0105 16:53:39.938625   28016 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0105 16:53:39.991879   28016 kubeadm.go:317] W0105 15:53:39.990558    1157 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0105 16:53:40.135584   28016 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0105 17:08:41.258755   28016 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I0105 17:08:41.259732   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.8: output: time="2023-01-05T15:56:10Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.24.8\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.24.8\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.24.8\": dial tcp 64.233.166.82:443: i/o timeout"
I0105 17:08:41.259732   28016 kubeadm.go:317] , error: exit status 1
I0105 17:08:41.260871   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.8: output: time="2023-01-05T15:58:40Z" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"k8s.gcr.io/kube-controller-manager:v1.24.8\": failed to resolve reference \"k8s.gcr.io/kube-controller-manager:v1.24.8\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-controller-manager/manifests/v1.24.8\": dial tcp 64.233.166.82:443: i/o timeout"
I0105 17:08:41.260871   28016 kubeadm.go:317] , error: exit status 1
I0105 17:08:41.261396   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.8: output: time="2023-01-05T16:01:10Z" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.24.8\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.24.8\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.24.8\": dial tcp 64.233.166.82:443: i/o timeout"
I0105 17:08:41.261396   28016 kubeadm.go:317] , error: exit status 1
I0105 17:08:41.261917   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.24.8: output: time="2023-01-05T16:03:40Z" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.24.8\": failed to resolve reference \"k8s.gcr.io/kube-proxy:v1.24.8\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-proxy/manifests/v1.24.8\": dial tcp 64.233.166.82:443: i/o timeout"
I0105 17:08:41.261917   28016 kubeadm.go:317] , error: exit status 1
I0105 17:08:41.262441   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.7: output: time="2023-01-05T16:06:11Z" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"k8s.gcr.io/pause:3.7\": failed to resolve reference \"k8s.gcr.io/pause:3.7\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.7\": dial tcp 64.233.166.82:443: i/o timeout"
I0105 17:08:41.262441   28016 kubeadm.go:317] , error: exit status 1
I0105 17:08:41.263482   28016 kubeadm.go:317] 	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.6: output: time="2023-01-05T16:08:41Z" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to resolve reference \"k8s.gcr.io/coredns/coredns:v1.8.6\": failed to do request: Head \"https://k8s.gcr.io/v2/coredns/coredns/manifests/v1.8.6\": dial tcp 64.233.167.82:443: i/o timeout"
I0105 17:08:41.263482   28016 kubeadm.go:317] , error: exit status 1
I0105 17:08:41.263482   28016 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0105 17:08:41.263482   28016 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I0105 17:08:41.263482   28016 kubeadm.go:317] [init] Using Kubernetes version: v1.24.8
I0105 17:08:41.264003   28016 kubeadm.go:317] [preflight] Running pre-flight checks
I0105 17:08:41.264003   28016 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0105 17:08:41.264003   28016 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0105 17:08:41.264003   28016 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0105 17:08:41.264521   28016 out.go:239] 💢  initialization failed, will try again: kubeadm init timed out in 10 minutes
I0105 17:08:41.264521   28016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.8:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0105 17:08:41.913119   28016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0105 17:08:41.954644   28016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0105 17:08:41.964037   28016 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0105 17:08:41.964037   28016 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0105 17:08:42.026525   28016 kubeadm.go:317] W0105 16:08:42.025309    1648 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0105 17:08:42.177966   28016 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Operating System

Windows

Driver

VirtualBox

@cvila84
Author

cvila84 commented Jan 15, 2023

@afbjorklund hello! Do you need any other information to consider this as an issue? Thanks!

@cvila84
Author

cvila84 commented Mar 2, 2023

For clarity, HTTP proxies work well with Docker as the container runtime, but not with containerd (this issue).

IMO, this could become a wider problem, as almost everybody will move to containerd sooner or later (because of the deprecation).
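
In the meantime, a possible manual workaround (an untested sketch, assuming the standard systemd layout inside the minikube ISO and reusing the proxy values from the log above) would be to hand the proxy to containerd yourself via a systemd drop-in in the VM:

minikube ssh
sudo mkdir -p /etc/systemd/system/containerd.service.d
sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.1.3:3128"
Environment="HTTPS_PROXY=http://192.168.1.3:3128"
Environment="NO_PROXY=minikube,.minikube,.gemalto.com,.thales,.local,kubernetes.docker.internal,192.168.99.100,.192.168.99.100.nip.io,10.4.223.40,127.0.0.1,192.168.99.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart containerd

After the restart, the kubeadm preflight image pulls should go through the proxy instead of timing out on the direct connection.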

@afbjorklund, what do you think?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on May 31, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 30, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 19, 2024