minikube addons enable ingress #16316

Closed
github-yanger opened this issue Apr 13, 2023 · 4 comments
Labels
l/zh-CN Issues in or relating to Chinese lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@github-yanger

Command required to reproduce the issue: minikube addons enable ingress

Full output of the failed command:

  • ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
    You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    • Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
    • Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
    • Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
  • Verifying ingress addon...

X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
*
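
Note that the addon manifests were applied successfully; the failure is the verification step timing out while the ingress-nginx pods never become Ready. As a first diagnostic step (not part of the original report), the pod state and recent events in the ingress-nginx namespace can be inspected, for example:

  kubectl get pods -n ingress-nginx
  kubectl describe pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
  kubectl get events -n ingress-nginx --sort-by=.lastTimestamp

A pod stuck in Pending usually points to an image pull problem or a scheduling constraint, both of which show up in the describe output and the event list.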

Output of the minikube logs command:


I0413 22:31:18.945490 8956 out.go:177]
Log file created at: 2023/04/13 23:23:17
Running on machine: Yang
Binary: Built with gc go1.20.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0413 23:23:17.459440 22912 out.go:296] Setting OutFile to fd 92 ...
I0413 23:23:17.463446 22912 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0413 23:23:17.463446 22912 out.go:309] Setting ErrFile to fd 96...
I0413 23:23:17.463446 22912 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0413 23:23:17.474778 22912 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
I0413 23:23:17.478408 22912 config.go:182] Loaded profile config "minikube": Driver=virtualbox, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0413 23:23:17.478408 22912 addons.go:66] Setting ingress=true in profile "minikube"
I0413 23:23:17.478408 22912 addons.go:228] Setting addon ingress=true in "minikube"
I0413 23:23:17.478408 22912 host.go:66] Checking if "minikube" exists ...
I0413 23:23:17.478923 22912 main.go:141] libmachine: COMMAND: D:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0413 23:23:17.513071 22912 main.go:141] libmachine: STDOUT:
{
name="minikube"
Encryption: disabled
groups="/"
ostype="Linux 2.6 / 3.x / 4.x / 5.x (64-bit)"
UUID="3c55bd63-f5e0-4fba-9ef1-c693cd9a8d31"
CfgFile="C:\Users\30445\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\30445\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\30445\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="3c55bd63-f5e0-4fba-9ef1-c693cd9a8d31"
memory=4000
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
BIOS NVRAM File="C:\Users\30445\.minikube\machines\minikube\minikube\minikube.nvram"
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
virtvmsavevmload="on"
iommu="none"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2023-04-13T15:03:33.080000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\30445\.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="4d1b7562-bed8-46ea-b4bc-ab053334f438"
"SATA-tempeject-0-0"="off"
"SATA-IsEjected-0-0"="off"
"SATA-hot-pluggable-0-0"="off"
"SATA-nonrotational-0-0"="off"
"SATA-discard-0-0"="off"
"SATA-1-0"="C:\Users\30445\.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="f9ec7a30-820c-434d-ae9c-e0387dd3e999"
"SATA-hot-pluggable-1-0"="off"
"SATA-nonrotational-1-0"="off"
"SATA-discard-1-0"="off"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027920E36"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,12342,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #2"
macaddress2="0800272E69B7"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="default"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
recording_enabled="off"
recording_screens=1
rec_screen0
rec_screen_enabled="on"
rec_screen_id=0
rec_screen_video_enabled="on"
rec_screen_audio_enabled="off"
rec_screen_dest="File"
rec_screen_dest_filename="C:\Users\30445\.minikube\machines\minikube\minikube\minikube-screen0.webm"
rec_screen_opts="vc_enabled=true,ac_enabled=false,ac_profile=med"
rec_screen_video_res_xy="1024x768"
rec_screen_video_rate_kbps=512
rec_screen_video_fps=25
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="6.0.0 r127566"
GuestAdditionsFacility_VirtualBox Base Driver=50,1681398240474
GuestAdditionsFacility_VirtualBox System Service=50,1681398241050
GuestAdditionsFacility_Seamless Mode=0,1681398240472
GuestAdditionsFacility_Graphics Mode=0,1681398240472
}
I0413 23:23:17.513071 22912 main.go:141] libmachine: STDERR:
{
}
I0413 23:23:17.515953 22912 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
I0413 23:23:17.521139 22912 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
I0413 23:23:17.527288 22912 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
I0413 23:23:17.530145 22912 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0413 23:23:17.530145 22912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16145 bytes)
I0413 23:23:17.530145 22912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:12342 SSHKeyPath:C:\Users\30445\.minikube\machines\minikube\id_rsa Username:docker}
I0413 23:23:17.629516 22912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0413 23:23:19.298808 22912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.6692921s)
I0413 23:23:19.298808 22912 addons.go:464] Verifying addon ingress=true in "minikube"
I0413 23:23:19.301500 22912 out.go:177] * Verifying ingress addon...
I0413 23:23:19.306004 22912 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0413 23:29:18.833434 22912 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: []
I0413 23:29:19.333689 22912 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: []
I0413 23:29:19.340191 22912 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: []
I0413 23:29:19.340191 22912 kapi.go:107] duration metric: took 6m0.0341144s to wait for app.kubernetes.io/name=ingress-nginx ...
I0413 23:29:19.345962 22912 out.go:177]
W0413 23:29:19.349221 22912 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
W0413 23:29:19.349221 22912 out.go:239] *
W0413 23:29:19.355000 22912 out.go:239]
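
The log shows the wait starting at 23:23:19 and giving up at 23:29:19, i.e. the full six-minute verification timeout with the pods still Pending. One plausible cause on machines that cannot reliably reach registry.k8s.io is that the controller images never finish pulling inside the VM. A possible workaround sketch, assuming the Pending state is caused by a blocked or slow image pull (these commands are not from the original report; the docker runtime is confirmed by the profile config above):

  minikube ssh -- docker pull registry.k8s.io/ingress-nginx/controller:v1.7.0
  minikube ssh -- docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
  minikube addons enable ingress

If the pulls themselves fail, pre-loading the images from the host with minikube image load, or configuring a reachable registry mirror, would be the next thing to try.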

Operating system version used: Windows 11 Home (Chinese edition)
Version: 21H2
Installed on: 2022/5/16
OS build: 22000.1455
Experience: Windows Feature Experience Pack 1000.22000.1455.0

github-yanger added the l/zh-CN label (Issues in or relating to Chinese) on Apr 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jul 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Feb 18, 2024