
"Process exited with status 137 from signal KILL" during minikube start #4248

Closed
apupier opened this issue May 13, 2019 · 1 comment
Labels
co/hyperv HyperV related issues

Comments

apupier commented May 13, 2019

The exact command to reproduce the issue:

Not sure it is easily reproducible, as it was working previously, but here is the command I used:

minikube start --vm-driver hyperv --hyperv-virtual-switch "ExternalVirtualSwitch" --docker-opt userland-proxy=false
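
For reference, the external switch referenced by --hyperv-virtual-switch can be inspected with the standard Hyper-V PowerShell cmdlets (just a sketch; the network adapter name below is only an example, not taken from my setup):

# List configured Hyper-V virtual switches and confirm that the one named
# "ExternalVirtualSwitch" exists and has SwitchType "External".
Get-VMSwitch | Select-Object Name, SwitchType

# If it were missing, an external switch could be created like this:
New-VMSwitch -Name "ExternalVirtualSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true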

The full output of the command that failed:

o   minikube v1.0.1 on windows (amd64)
@   Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
$   Downloading Kubernetes v1.14.1 images in the background ...
>   Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-   "minikube" IP address is 192.168.1.24
-   Configuring Docker as the container runtime ...
    - opt userland-proxy=false
-   Version of container runtime is 18.06.3-ce
:   Waiting for image downloads to complete ...
-   Preparing Kubernetes environment ...
@   Downloading kubelet v1.14.1
@   Downloading kubeadm v1.14.1
-   Pulling images required by Kubernetes v1.14.1 ...
-   Launching Kubernetes v1.14.1 using kubeadm ...
E0513 13:12:07.075583   12960 logs.go:155] Failed to list containers for "kube-apiserver": Process exited with status 137 from signal KILL
E0513 13:12:07.106345   12960 logs.go:155] Failed to list containers for "coredns": Process exited with status 137 from signal KILL
E0513 13:12:07.169177   12960 logs.go:155] Failed to list containers for "kube-scheduler": EOF
E0513 13:12:07.169177   12960 logs.go:155] Failed to list containers for "kube-proxy": NewSession: EOF
E0513 13:12:07.171181   12960 logs.go:155] Failed to list containers for "kube-addon-manager": NewSession: EOF
E0513 13:12:07.172178   12960 logs.go:155] Failed to list containers for "kubernetes-dashboard": NewSession: EOF
E0513 13:12:07.172178   12960 logs.go:155] Failed to list containers for "storage-provisioner": NewSession: EOF

!   Error starting cluster: kubeadm init:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.1.24 127.0.0.1 ::1]
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

: Process exited with status 137 from signal KILL

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new
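
Note: exit status 137 corresponds to SIGKILL, which inside a Linux guest is often the kernel OOM killer. While the VM is still reachable, something like the following could be used to check (a sketch only; in my case the VM went offline right after, see the logs below):

# Open a shell inside the minikube VM (only possible while it is running)
minikube ssh

# Inside the VM, look for OOM-killer traces around the time of the crash
sudo dmesg | grep -i -E "out of memory|killed process"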

The output of the minikube logs command:

minikube logs

!   command runner
X   Error:         [VM_IP_NOT_FOUND] getting ssh client for bootstrapper: Error creating new ssh host from driver: Error getting ssh host name for driver: IP not found
i   Advice:        The minikube VM is offline. Please run 'minikube start' to start it again.
-   Related issues:
    - https://github.com/kubernetes/minikube/issues/3849
    - https://github.com/kubernetes/minikube/issues/3648

*   If the above advice does not help, please let us know:
-   https://github.com/kubernetes/minikube/issues/new

The operating system version:

Windows 10
minikube version: v1.0.1
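
A possible workaround sketch, assuming the KILL signal is memory related (the VM was created with the default Memory=2048MB above): delete the VM and recreate it with more memory. The 4096 value below is just an example.

# Remove the existing (now offline) minikube VM
minikube delete

# Recreate it with more memory than the 2048MB default
minikube start --vm-driver hyperv --hyperv-virtual-switch "ExternalVirtualSwitch" --docker-opt userland-proxy=false --memory 4096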

tstromberg (Contributor) commented

Duplicate of #1766

tstromberg marked this as a duplicate of #1766 on May 14, 2019
tstromberg added the co/hyperkit Hyperkit related issues and co/hyperv HyperV related issues labels, then removed the co/hyperkit Hyperkit related issues label, on May 14, 2019