
none: reusing node: detecting provisioner: Too many retries waiting for SSH to be available #4132

Closed
cduke-nokia opened this issue Apr 22, 2019 · 32 comments · Fixed by #7244
Labels:
  co/none-driver
  help wanted - Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  kind/bug - Categorizes issue or PR as related to a bug.
  priority/important-longterm - Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

cduke-nokia commented Apr 22, 2019

Environment:

minikube version: v1.0.0
OS: Ubuntu 16.04 LTS (Xenial Xerus)
VM Driver: none

What happened:
Created a VM with the none driver, stopped it, then started it again. The VM failed to start and minikube reported that it crashed.


What I expected to happen:
The VM created by the first minikube start command is started.

Output from the second minikube start command:

😄  minikube v1.0.0 on linux (amd64)
🤹  Downloading Kubernetes v1.14.0 images in the background ...
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing none VM for "minikube" ...
⌛  Waiting for SSH access ...

💣  Unable to start VM: detecting provisioner: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new


Output from 'sudo minikube start --alsologtostderr -v=8 --vm-driver=none':

⌛  Waiting for SSH access ...
Waiting for SSH to be available...
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands

To reproduce:
sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube start --vm-driver=none

Starting a stopped VM was working in minikube v0.28.

afbjorklund (Collaborator) commented Apr 23, 2019

Hmm, why is it trying to SSH to itself? That doesn't make sense.

It's not really creating a new VM; that is supplied by the user.

@tstromberg changed the title from "minikube fails to start a stopped VM with none driver" to "none: Unable to start VM: detecting provisioner: Too many retries waiting for SSH to be available" Apr 25, 2019
@tstromberg added the kind/bug label Apr 25, 2019
tstromberg (Contributor) commented:

We test this sequence in test/integration/start_stop_delete_test.go, so I'm curious what's going on here that is different from our test environment.

@tstromberg added the priority/awaiting-more-evidence label Apr 25, 2019
cduke-nokia (Author) commented:

Questioning: 'We test this sequence'
func TestStartStop in test/integration/start_stop_delete_test.go contains:

	if !strings.Contains(test.name, "docker") && usingNoneDriver(r) {
		t.Skipf("skipping %s - incompatible with none driver", test.name)
	}

The test names in the function are nocache_oldest, feature_gates_newest_cni, containerd_and_non_default_apiserver_port, and crio_ignore_preflights. None contains 'docker', which indicates that no StartStop tests are run with the none driver.

Addressing: 'why is it SSHing to itself'
Sequence of code execution, as indicated by the output:

  1. The func startHost in pkg/minikube/cluster/cluster.go logs "Restarting existing VM"
  2. The func startHost logs "Waiting for SSH access"
  3. The func startHost calls provision.DetectProvisioner
  4. The func DetectProvisioner in minikube/vendor/github.com/docker/machine/libmachine/provision/provisioner.go logs the line "Waiting for SSH to be available"
  5. The func DetectProvisioner invokes drivers.WaitForSSH
  6. The func WaitForSSH in minikube/vendor/github.com/docker/machine/libmachine/drivers/utils.go calls WaitFor with sshAvailableFunc
  7. The func sshAvailableFunc in minikube/vendor/github.com/docker/machine/libmachine/drivers/utils.go logs "Getting to WaitForSSH function"
  8. The func sshAvailableFunc calls RunSSHCommandFromDriver
  9. The func RunSSHCommandFromDriver in minikube/pkg/drivers/none/none returns fmt.Errorf("driver does not support ssh commands")
  10. The func sshAvailableFunc logs "Error getting ssh command 'exit 0' : %s"
  11. The func WaitFor returns "Maximum number of retries (%d) exceeded"
  12. The func WaitForSSH logs "Too many retries waiting for SSH to be available. Last error: %s"

The pull request for #3387 added the DetectProvisioner invocation into startHost. DetectProvisioner runs SSH commands. The none driver doesn't support SSH commands.
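
To make the failure mode concrete, here is a condensed, runnable sketch of that call chain. The names are simplified stand-ins, not the actual minikube or libmachine code: because the none driver's SSH hook unconditionally returns an error, the retry loop can never succeed and always exhausts its retry budget.

package main

import (
	"errors"
	"fmt"
	"time"
)

// runSSHCommand stands in for the none driver's RunSSHCommandFromDriver:
// it always fails, because the none driver has no SSH support.
func runSSHCommand(cmd string) (string, error) {
	return "", errors.New("driver does not support ssh commands")
}

// waitFor stands in for libmachine's drivers.WaitFor: retry f until it
// reports success or the attempt budget is spent.
func waitFor(f func() bool, maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		if f() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("Maximum number of retries (%d) exceeded", maxAttempts)
}

func main() {
	sshAvailable := func() bool {
		if _, err := runSSHCommand("exit 0"); err != nil {
			fmt.Printf("Error getting ssh command 'exit 0' : %s\n", err)
			return false
		}
		return true
	}
	// Three fast attempts keep the demo short; libmachine uses 60 retries.
	if err := waitFor(sshAvailable, 3, 10*time.Millisecond); err != nil {
		fmt.Println("Too many retries waiting for SSH to be available. Last error:", err)
	}
}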

elsbrock commented:

I am having the same issue with v1.0.0.

@tstromberg added the r/2019q2 label May 24, 2019
medyagh (Member) commented Jun 2, 2019

I can confirm this issue. For the record, I tried both with a proxy set and with no proxy; the behaviour is the same.

To reproduce:

  • Start minikube with the none driver (succeeds):
    minikube start --vm-driver none --alsologtostderr -v=8

  • Run the start command again (fails with the WaitForSSH error):
    minikube start --vm-driver none --alsologtostderr -v=8

minikube output

# out/minikube start --vm-driver none --alsologtostderr -v=8
I0601 18:54:16.906373   48577 notify.go:128] Checking for updates...
😄  minikube v1.1.0 on linux (amd64)
I0601 18:54:17.116658   48577 start.go:721] Saving config:
{
    "MachineConfig": {
        "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.1.0.iso",
        "Memory": 2048,
        "CPUs": 2,
        "DiskSize": 20000,
        "VMDriver": "none",
        "ContainerRuntime": "docker",
        "HyperkitVpnKitSock": "",
        "HyperkitVSockPorts": [],
        "XhyveDiskDriver": "ahci-hd",
        "DockerEnv": [
            "NO_PROXY=egressproxy.corp.google.com:3128"
        ],
        "InsecureRegistry": null,
        "RegistryMirror": null,
        "HostOnlyCIDR": "192.168.99.1/24",
        "HypervVirtualSwitch": "",
        "KvmNetwork": "default",
        "DockerOpt": null,
        "DisableDriverMounts": false,
        "NFSShare": [],
        "NFSSharesRoot": "/nfsshares",
        "UUID": "",
        "GPU": false,
        "Hidden": false,
        "NoVTXCheck": false
    },
    "KubernetesConfig": {
        "KubernetesVersion": "v1.14.2",
        "NodeIP": "",
        "NodePort": 8443,
        "NodeName": "minikube",
        "APIServerName": "minikubeCA",
        "APIServerNames": null,
        "APIServerIPs": null,
        "DNSDomain": "cluster.local",
        "ContainerRuntime": "docker",
        "CRISocket": "",
        "NetworkPlugin": "",
        "FeatureGates": "",
        "ServiceCIDR": "10.96.0.0/12",
        "ImageRepository": "",
        "ExtraOptions": null,
        "ShouldLoadCachedImages": true,
        "EnableDefaultCNI": false
    }
}
I0601 18:54:17.117169   48577 cluster.go:96] Skipping create...Using existing machine configuration
I0601 18:54:17.118184   48577 interface.go:360] Looking for default routes with IPv4 addresses
I0601 18:54:17.118210   48577 interface.go:365] Default route transits interface "eno1"
I0601 18:54:17.118636   48577 interface.go:174] Interface eno1 is up
I0601 18:54:17.118770   48577 interface.go:222] Interface "eno1" has 3 addresses :[172.31.120.180/23 2620:0:1002:14:7bd8:8105:f650:d2d3/64 fe80::124:34fe:f3ff:c433/64].
I0601 18:54:17.118810   48577 interface.go:189] Checking addr  172.31.120.180/23.
I0601 18:54:17.118829   48577 interface.go:196] IP found 172.31.120.180
I0601 18:54:17.118847   48577 interface.go:228] Found valid IPv4 address 172.31.120.180 for interface "eno1".
I0601 18:54:17.118865   48577 interface.go:371] Found active IP 172.31.120.180 
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I0601 18:54:17.119015   48577 none.go:231] checking for running kubelet ...
I0601 18:54:17.119050   48577 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
I0601 18:54:17.127274   48577 cluster.go:123] Machine state:  Running
🏃  Re-using the currently running none VM for "minikube" ...
I0601 18:54:17.127380   48577 cluster.go:141] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[NO_PROXY=egressproxy.corp.google.com:3128] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:}
⌛  Waiting for SSH access ...
Waiting for SSH to be available...
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands



minikube logs

# out/minikube logs
==> coredns <==
.:53
2019-06-02T01:56:57.409Z [INFO] CoreDNS-1.3.1
2019-06-02T01:56:57.409Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-02T01:56:57.409Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
2019-06-02T01:56:57.410Z [FATAL] plugin/loop: Loop (127.0.0.1:32923 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 5311714145885294730.7878336323304594964."

==> dmesg <==
[May15 10:35] acpi PNP0C14:01: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.000095] acpi PNP0C14:02: duplicate WMI GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 (first instance was on PNP0C14:00)
[  +0.000002] acpi PNP0C14:02: duplicate WMI GUID 41227C2D-80E1-423F-8B8E-87E32755A0EB (first instance was on PNP0C14:00)
[  +0.040638] usb: port power management may be unreliable
[ +19.845811] systemd[1]: Another IMA custom policy has already been loaded, ignoring: No such file or directory
[  +0.161663] systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[  +0.082227] hpuefi: loading out-of-tree module taints kernel.
[  +0.178870] nvidia: module license 'NVIDIA' taints kernel.
[  +0.000001] Disabling lock debugging due to kernel taint
[  +0.011583] ACPI Error: Needed [Buffer/String/Package], found [Integer] 00000000c107a124 (20180810/exresop-560)
[  +0.000007] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20180810/dswexec-427)
[  +0.000005] ACPI Error: Method parse/execution failed \_SB.WMIV.WVPO, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[  +0.000006] ACPI Error: Method parse/execution failed \_SB.WMIV.WMPV, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[  +0.003164] ACPI Error: Needed [Buffer/String/Package], found [Integer] 000000005166f766 (20180810/exresop-560)
[  +0.000073] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20180810/dswexec-427)
[  +0.000005] ACPI Error: Method parse/execution failed \_SB.WMIV.WVPO, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[  +0.000006] ACPI Error: Method parse/execution failed \_SB.WMIV.WMPV, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[  +0.011004] ACPI Error: Needed [Buffer/String/Package], found [Integer] 000000009da9cdf4 (20180810/exresop-560)
[  +0.000008] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20180810/dswexec-427)
[  +0.000004] ACPI Error: Method parse/execution failed \_SB.WMIV.WVPO, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[  +0.000006] ACPI Error: Method parse/execution failed \_SB.WMIV.WMPV, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[  +0.004141] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  390.87  Tue Aug 21 12:33:05 PDT 2018 (using threaded interrupts)
[  +0.001875] ACPI Error: Attempt to CreateField of length zero (20180810/dsopcode-134)
[  +0.000007] ACPI Error: Method parse/execution failed \_SB.WMIV.WVPI, AE_AML_OPERAND_VALUE (20180810/psparse-516)
[  +0.000009] ACPI Error: Method parse/execution failed \_SB.WMIV.WMPV, AE_AML_OPERAND_VALUE (20180810/psparse-516)
[  +4.332935] credkit-service (1827): Using fanotify permission checks may lead to deadlock; tainting kernel
[May15 10:40] IRQ 25: no longer affine to CPU11
[  +0.064093] IRQ 26: no longer affine to CPU6
[  +0.163807] IRQ 37: no longer affine to CPU9
[May30 17:02] tee (88566): /proc/88226/oom_adj is deprecated, please use /proc/88226/oom_score_adj instead.

==> kernel <==
 18:57:42 up 17 days,  8:22,  2 users,  load average: 0.44, 0.42, 0.56
Linux medya.sfo.corp.google.com 4.19.28-2rodete1-amd64 #1 SMP Debian 4.19.28-2rodete1 (2019-03-18 > 2018) x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-02T01:52:05+00:00 ==
INFO: Leader is medya.sfo.corp.google.com
INFO: == Kubernetes addon ensure completed at 2019-06-02T01:53:03+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-02T01:53:04+00:00 ==
INFO: Leader is medya.sfo.corp.google.com
INFO: == Kubernetes addon ensure completed at 2019-06-02T01:54:04+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-02T01:54:05+00:00 ==
INFO: Leader is medya.sfo.corp.google.com
INFO: == Kubernetes addon ensure completed at 2019-06-02T01:55:03+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-02T01:55:05+00:00 ==
INFO: Leader is medya.sfo.corp.google.com
INFO: == Kubernetes addon ensure completed at 2019-06-02T01:56:04+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-02T01:56:05+00:00 ==
INFO: Leader is medya.sfo.corp.google.com
INFO: == Kubernetes addon ensure completed at 2019-06-02T01:57:03+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-02T01:57:05+00:00 ==

==> kube-apiserver <==
I0602 01:45:54.566742       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0602 01:45:54.606919       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0602 01:45:54.646924       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0602 01:45:54.687153       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0602 01:45:54.726375       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0602 01:45:54.767114       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0602 01:45:54.806580       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0602 01:45:54.847061       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0602 01:45:54.886696       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0602 01:45:54.926537       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0602 01:45:54.967147       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0602 01:45:55.007245       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0602 01:45:55.046726       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0602 01:45:55.087239       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0602 01:45:55.133146       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0602 01:45:55.166927       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0602 01:45:55.206963       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0602 01:45:55.246782       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0602 01:45:55.286669       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0602 01:45:55.326872       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0602 01:45:55.366964       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0602 01:45:55.406977       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0602 01:45:55.446734       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0602 01:45:55.487023       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0602 01:45:55.516508       1 controller.go:606] quota admission added evaluator for: endpoints
I0602 01:45:55.526289       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0602 01:45:55.567106       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0602 01:45:55.605261       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0602 01:45:55.607284       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0602 01:45:55.646416       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0602 01:45:55.686560       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0602 01:45:55.727075       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0602 01:45:55.766688       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0602 01:45:55.806964       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0602 01:45:55.849170       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0602 01:45:55.885215       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0602 01:45:55.887022       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0602 01:45:55.927224       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0602 01:45:55.966848       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0602 01:45:56.006918       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0602 01:45:56.047312       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0602 01:45:56.087131       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0602 01:45:56.127024       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0602 01:45:56.214012       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.31.120.180]
I0602 01:45:56.736019       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0602 01:45:57.354514       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0602 01:45:57.692648       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0602 01:45:59.526437       1 controller.go:606] quota admission added evaluator for: namespaces
I0602 01:46:03.537116       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0602 01:46:03.584029       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-proxy <==
W0602 01:46:04.698819       1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0602 01:46:04.709511       1 server_others.go:146] Using iptables Proxier.
W0602 01:46:04.709598       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0602 01:46:04.709712       1 server.go:562] Version: v1.14.2
I0602 01:46:04.713351       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I0602 01:46:04.713473       1 config.go:202] Starting service config controller
I0602 01:46:04.713490       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0602 01:46:04.713546       1 config.go:102] Starting endpoints config controller
I0602 01:46:04.713565       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0602 01:46:04.813596       1 controller_utils.go:1034] Caches are synced for service config controller
I0602 01:46:04.813646       1 controller_utils.go:1034] Caches are synced for endpoints config controller

==> kube-scheduler <==
I0602 01:45:50.243229       1 serving.go:319] Generated self-signed cert in-memory
W0602 01:45:50.579367       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0602 01:45:50.579380       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0602 01:45:50.579389       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0602 01:45:50.581026       1 server.go:142] Version: v1.14.2
I0602 01:45:50.581057       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0602 01:45:50.582035       1 authorization.go:47] Authorization is disabled
W0602 01:45:50.582050       1 authentication.go:55] Authentication is disabled
I0602 01:45:50.582061       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0602 01:45:50.582559       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0602 01:45:52.777739       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0602 01:45:52.777920       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0602 01:45:52.777977       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0602 01:45:52.778024       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0602 01:45:52.778065       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0602 01:45:52.780925       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0602 01:45:52.781024       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0602 01:45:52.781092       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0602 01:45:52.781238       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0602 01:45:52.788448       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0602 01:45:53.779110       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0602 01:45:53.780150       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0602 01:45:53.783820       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0602 01:45:53.785137       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0602 01:45:53.786254       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0602 01:45:53.787364       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0602 01:45:53.788371       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0602 01:45:53.789443       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0602 01:45:53.790623       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0602 01:45:53.791782       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0602 01:45:55.683928       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0602 01:45:55.784147       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0602 01:45:55.784277       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0602 01:45:55.790729       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2019-05-16 15:46:51 PDT, end at Sat 2019-06-01 18:57:42 PDT. --
Jun 01 18:53:03 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:03.815147   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:06 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:06.815001   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:16 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:16.814922   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:20 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:20.815002   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:29 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:29.815076   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:32 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:32.814979   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:41 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:41.815150   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:45 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:45.815120   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:48 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:48.814647   39697 dns.go:120] Search Line limits were exceeded, some search paths have been omitted, the applied search line is: kube-system.svc.cluster.local svc.cluster.local cluster.local corp.google.com prod.google.com prodz.google.com
Jun 01 18:53:56 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:56.814409   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:53:56 medya.sfo.corp.google.com kubelet[39697]: E0601 18:53:56.814409   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:08 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:08.814958   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:10 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:10.814952   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:22 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:22.814898   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:23 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:23.815115   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:36 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:36.815110   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:37 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:37.815423   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:47 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:47.815419   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:54:48 medya.sfo.corp.google.com kubelet[39697]: E0601 18:54:48.814956   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:00 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:00.814906   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:02 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:02.814954   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:13 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:13.814732   39697 dns.go:120] Search Line limits were exceeded, some search paths have been omitted, the applied search line is: kube-system.svc.cluster.local svc.cluster.local cluster.local corp.google.com prod.google.com prodz.google.com
Jun 01 18:55:13 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:13.815251   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:14 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:14.814996   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:25 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:25.815091   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:26 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:26.815011   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:38 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:38.814986   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:40 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:40.814512   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:51 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:51.815110   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:55:53 medya.sfo.corp.google.com kubelet[39697]: E0601 18:55:53.815179   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:05 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:05.815134   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:06 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:06.814549   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:15 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:15.814804   39697 dns.go:120] Search Line limits were exceeded, some search paths have been omitted, the applied search line is: kube-system.svc.cluster.local svc.cluster.local cluster.local corp.google.com prod.google.com prodz.google.com
Jun 01 18:56:18 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:18.815018   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:18 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:18.815201   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:31 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:31.814914   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:32 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:32.815021   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:43 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:43.815259   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:45 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:45.343533   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:48 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:48.056538   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:58 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:58.496752   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:56:59 medya.sfo.corp.google.com kubelet[39697]: E0601 18:56:59.815121   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:04 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:04.461140   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:12 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:12.814922   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:16 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:16.814930   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:24 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:24.814982   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:28 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:28.814972   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:36 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:36.814985   39697 pod_workers.go:190] Error syncing pod 2e4964cc-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-vxpzm_kube-system(2e4964cc-84d8-11e9-8147-40b0341a9cdc)"
Jun 01 18:57:38 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:38.814699   39697 dns.go:120] Search Line limits were exceeded, some search paths have been omitted, the applied search line is: kube-system.svc.cluster.local svc.cluster.local cluster.local corp.google.com prod.google.com prodz.google.com
Jun 01 18:57:40 medya.sfo.corp.google.com kubelet[39697]: E0601 18:57:40.814896   39697 pod_workers.go:190] Error syncing pod 2e4a63b0-84d8-11e9-8147-40b0341a9cdc ("coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-fb8b8dccf-kj2hh_kube-system(2e4a63b0-84d8-11e9-8147-40b0341a9cdc)"

==> kubernetes-dashboard <==
2019/06/02 01:46:05 Starting overwatch
2019/06/02 01:46:05 Using in-cluster config to connect to apiserver
2019/06/02 01:46:05 Using service account token for csrf signing
2019/06/02 01:46:05 Successful initial request to the apiserver, version: v1.14.2
2019/06/02 01:46:05 Generating JWE encryption key
2019/06/02 01:46:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/06/02 01:46:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/06/02 01:46:05 Storing encryption key in a secret
2019/06/02 01:46:05 Creating in-cluster Heapster client
2019/06/02 01:46:05 Serving insecurely on HTTP port: 9090
2019/06/02 01:46:05 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/02 01:46:35 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
[... the same health check failure repeats every 30 seconds ...]
2019/06/02 01:57:35 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

==> storage-provisioner <==

@medyagh medyagh mentioned this issue Jun 2, 2019
@medyagh medyagh added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jun 4, 2019
medyagh pushed a commit to medyagh/minikube that referenced this issue Jun 4, 2019
making sure minikube is deleted before setup to avoid kubernetes#4132
medyagh added a commit to medyagh/minikube that referenced this issue Jun 4, 2019
making sure minikube is deleted before setup to avoid kubernetes#4132
@medyagh
Member

medyagh commented Jun 4, 2019

@cduke-nokia good find! You are right: we are skipping the none driver tests in TestStartStop.
We need to add a check here so that the none driver is not sent to upstream libmachine's DetectProvisioner.

PRs are welcome :)
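
A minimal sketch of what that check could look like (hypothetical: detectProvisionerUnlessNone is an illustrative name, not an existing minikube function, and the real call site is wherever cluster.go logs "Detecting provisioner ..."):

```go
package cluster

import (
	"github.com/docker/machine/libmachine/host"
	"github.com/docker/machine/libmachine/provision"
)

// detectProvisionerUnlessNone skips libmachine's SSH-based provisioner
// detection for the none driver, which runs directly on the local host
// and exposes no SSH daemon to probe.
func detectProvisionerUnlessNone(h *host.Host) error {
	if h.Driver.DriverName() == "none" {
		return nil // local host: nothing to provision over SSH
	}
	_, err := provision.DetectProvisioner(h.Driver)
	return err
}
```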

@medyagh medyagh added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jun 4, 2019
@medyagh medyagh added this to the v1.2.0 milestone Jun 4, 2019
@medyagh
Member

medyagh commented Jun 11, 2019

Update: I cannot replicate this issue, even on v1.0.0.
I tried as the root user and also with sudo on Debian 9.9.

I also tried on Ubuntu 16.04 LTS, which this issue was filed under, and it works there too...

I no longer have any idea how to reproduce this error, even though I hit it myself.

@cduke-nokia do you still have this issue?

@cduke-nokia
Author

Tested with minikube v1.2.0:

$ sudo minikube version
minikube version: v1.2.0
$ sudo minikube start --vm-driver=none
$ sudo minikube stop
$ sudo minikube start --vm-driver=none

The same problem occurs: 'Waiting for SSH access ...' appears.
E0712 20:59:51.727143 31388 start.go:559] StartHost: detecting provisioner: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded

@afbjorklund
Collaborator

afbjorklund commented Jul 18, 2019

Upstream docker-machine has this lovely hack, before calling DetectProvisioner:

	// TODO: Not really a fan of just checking "none" or "ci-test" here.
	if h.Driver.DriverName() == "none" || h.Driver.DriverName() == "ci-test" {
		return nil
	}

Like the OP says, the none driver has never supported a Provisioner (nor SSH).
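
(minikube vendors libmachine but, judging from the "cluster.go:158] Detecting provisioner ..." lines in the logs above, calls DetectProvisioner directly, which bypasses that guard; so the equivalent check would have to live on minikube's side, as in the sketch a few comments up.)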

@zhengcan

zhengcan commented Aug 9, 2019

I have the same issue in v1.3.0

@sdesbure

Hello, same here in v1.3.1 on Debian 10.0

@biaoma-ty

biaoma-ty commented Aug 21, 2019

Hello, same here in v1.3.1 on CentOS 7.2.

@leson

leson commented Sep 1, 2019

Hello, same here in v1.3.1 on Debian 9.

@tstromberg tstromberg added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Sep 5, 2019
@tstromberg
Contributor

tstromberg commented Sep 5, 2019

Can anyone confirm whether sudo minikube delete works around this issue?

Also, if anyone can replicate this on v1.3.1, please share the output of:

sudo minikube start <your flags> --alsologtostderr -v=8

Thank you!

@LathaSrinivasan

LathaSrinivasan commented Sep 6, 2019

@tstromberg - I tried sudo minikube delete and restarted minikube with the following command:
minikube start --vm-driver=none --alsologtostderr -v=8

FYI, I am running minikube version 1.3.1 on CentOS 7

I still get this error about the none driver not supporting SSH:

  • Waiting for the host to be provisioned ...
    I0906 20:10:03.296110 8419 cluster.go:155] configureHost: *host.Host &{ConfigVersion:3 Driver:0xc0002d9480 DriverName:none HostOptions:0xc0002d93c0 Name:minikube RawDriver:[123 10 32 32 32 32 32 32 32 32 34 73 80 65 100 100 114 101 115 115 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 77 97 99 104 105 110 101 78 97 109 101 34 58 32 34 109 105 110 105 107 117 98 101 34 44 10 32 32 32 32 32 32 32 32 34 83 83 72 85 115 101 114 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 83 83 72 80 111 114 116 34 58 32 48 44 10 32 32 32 32 32 32 32 32 34 83 83 72 75 101 121 80 97 116 104 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 83 116 111 114 101 80 97 116 104 34 58 32 34 47 114 111 111 116 47 46 109 105 110 105 107 117 98 101 34 44 10 32 32 32 32 32 32 32 32 34 83 119 97 114 109 77 97 115 116 101 114 34 58 32 102 97 108 115 101 44 10 32 32 32 32 32 32 32 32 34 83 119 97 114 109 72 111 115 116 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 83 119 97 114 109 68 105 115 99 111 118 101 114 121 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 85 82 76 34 58 32 34 34 10 32 32 32 32 125]}
    I0906 20:10:03.296187 8419 cluster.go:158] Detecting provisioner ...
    Waiting for SSH to be available...
    Getting to WaitForSSH function...
    Error getting ssh command 'exit 0' : driver does not support ssh commands
    [... these two lines repeat until the retry limit is reached ...]

@yangshenhuai

Hello, same in v1.3.1 on CentOS 7. Any workaround?

@OlivierPiron

Hello, same issue in v1.3.1, CentOS 7.6, with a fresh install.
Docker version is 19.03.2

@OlivierPiron

Output of minikube start (v1.3.1) on CentOS 7:

minikube start --vm-driver none --memory 3048 --cpus 3 --alsologtostderr --v=8
I0917 16:07:36.480096 5196 notify.go:124] Checking for updates...
I0917 16:07:36.618457 5196 start.go:224] hostinfo: {"hostname":"gvadevcont01.eri.local","uptime":13389772,"bootTime":1555339484,"procs":358,"os":"linux","platform":"oracle","platformFamily":"rhel","platformVersion":"7.6","kernelVersion":"4.1.12-124.21.1.el7uek.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"c1852642-d20c-1de4-2edb-ebf6a5a38d53"}
I0917 16:07:36.619010 5196 start.go:234] virtualization:
I0917 16:07:36.619564 5196 start.go:922] Saving config:
{
"MachineConfig": {
"KeepContext": false,
"MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.3.0.iso",
"Memory": 3048,
"CPUs": 3,
"DiskSize": 20000,
"VMDriver": "none",
"ContainerRuntime": "docker",
"HyperkitVpnKitSock": "",
"HyperkitVSockPorts": [],
"DockerEnv": [
"NO_PROXY=192.168.13.50"
],
"InsecureRegistry": null,
"RegistryMirror": null,
"HostOnlyCIDR": "192.168.99.1/24",
"HypervVirtualSwitch": "",
"KVMNetwork": "default",
"KVMQemuURI": "qemu:///system",
"KVMGPU": false,
"KVMHidden": false,
"DockerOpt": null,
"DisableDriverMounts": false,
"NFSShare": [],
"NFSSharesRoot": "/nfsshares",
"UUID": "",
"NoVTXCheck": false,
"DNSProxy": false,
"HostDNSResolver": true
},
"KubernetesConfig": {
"KubernetesVersion": "v1.15.2",
"NodeIP": "",
"NodePort": 8443,
"NodeName": "minikube",
"APIServerName": "minikubeCA",
"APIServerNames": null,
"APIServerIPs": null,
"DNSDomain": "cluster.local",
"ContainerRuntime": "docker",
"CRISocket": "",
"NetworkPlugin": "",
"FeatureGates": "",
"ServiceCIDR": "10.96.0.0/12",
"ImageRepository": "",
"ExtraOptions": null,
"ShouldLoadCachedImages": false,
"EnableDefaultCNI": false
}
}
I0917 16:07:36.619944 5196 cluster.go:98] Skipping create...Using existing machine configuration
I0917 16:07:36.620303 5196 none.go:257] checking for running kubelet ...
I0917 16:07:36.620319 5196 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
I0917 16:07:36.624921 5196 none.go:129] kubelet not running: running command: systemctl is-active --quiet service kubelet: exit status 3
I0917 16:07:36.624942 5196 cluster.go:117] Machine state: Stopped
I0917 16:07:36.626315 5196 cluster.go:135] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[NO_PROXY=192.168.13.50] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:}
I0917 16:07:36.626367 5196 cluster.go:155] configureHost: *host.Host &{ConfigVersion:3 Driver:0xc000786f40 DriverName:none HostOptions:0xc000786e80 Name:minikube RawDriver:[123 10 32 32 32 32 32 32 32 32 34 73 80 65 100 100 114 101 115 115 34 58 32 34 49 57 50 46 49 54 56 46 49 51 46 53 48 34 44 10 32 32 32 32 32 32 32 32 34 77 97 99 104 105 110 101 78 97 109 101 34 58 32 34 109 105 110 105 107 117 98 101 34 44 10 32 32 32 32 32 32 32 32 34 83 83 72 85 115 101 114 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 83 83 72 80 111 114 116 34 58 32 48 44 10 32 32 32 32 32 32 32 32 34 83 83 72 75 101 121 80 97 116 104 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 83 116 111 114 101 80 97 116 104 34 58 32 34 47 114 111 111 116 47 46 109 105 110 105 107 117 98 101 34 44 10 32 32 32 32 32 32 32 32 34 83 119 97 114 109 77 97 115 116 101 114 34 58 32 102 97 108 115 101 44 10 32 32 32 32 32 32 32 32 34 83 119 97 114 109 72 111 115 116 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 83 119 97 114 109 68 105 115 99 111 118 101 114 121 34 58 32 34 34 44 10 32 32 32 32 32 32 32 32 34 85 82 76 34 58 32 34 116 99 112 58 47 47 49 57 50 46 49 54 56 46 49 51 46 53 48 58 50 51 55 54 34 10 32 32 32 32 125]}
I0917 16:07:36.626437 5196 cluster.go:158] Detecting provisioner ...
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands
[... these two lines repeat until the retry limit is reached ...]

@LathaSrinivasan

Hi,
I worked around the issue by installing minikube with the KVM driver on CentOS 7, as described here: https://www.unixarena.com/2019/05/how-to-deploy-kubernetes-minikube-on-rhel-centos.html/

Looks like the "none" driver is still broken.

@OlivierPiron

Hi,
I'm running in a virtualized environment, so I have to use the "none" driver.
Is there a workaround to be able to run minikube?
Or do I have to revert to an older version? If so, which one, and with which version of Docker?

Thanks

@cduke-nokia
Author

To OlivierPiron: this problem affects minikube 1.0.0 and up; I did not encounter it in pre-1.0.0 versions. I can start minikube 1.0.0 through 1.3.1 with the "none" driver, but not after minikube stop. The workaround is to run minikube delete; after that, minikube start works.

In other words, this sequence fails:
sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube start --vm-driver=none

But this sequence works:
sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube delete
sudo minikube start --vm-driver=none

@tstromberg
Contributor

tstromberg commented Sep 20, 2019

Can someone confirm whether minikube v1.4 also suffers this issue? v1.4 includes an updated machine-drivers version, which may help.

I wasn't able to replicate it locally in v1.4. This works on my Debian-based machine:

sudo -E /usr/local/bin/minikube start --vm-driver=none && sudo -E /usr/local/bin/minikube stop && sudo -E /usr/local/bin/minikube start --vm-driver=none && sudo -E /usr/local/bin/minikube delete

@tstromberg tstromberg removed the r/2019q2 Issue was last reviewed 2019q2 label Sep 20, 2019
@adamh128

adamh128 commented Oct 8, 2019

I'm seeing the same issue with v1.4.0 on Oracle Linux 7.6 (CentOS 7 base).
Also, the workaround of using delete before start doesn't work for me :(

@genius24k

On v1.5.0, RHEL 7.6: same issue, and the workaround sequence also does not work.

@andrewjcouzens

This is happening for me running an Ubuntu 19.10 VM (VirtualBox 6.0.8 r130520) with minikube v1.5.2 built from the git repo.

$ minikube version
minikube version: v1.5.2
commit: b3cce694b22b0c3a16f38d6a0a2a8ca07a27a1e1

:~/src/go/minikube/out$ sudo ./minikube start --vm-driver=none --alsologtostderr -v=3
W1101 09:16:46.579283   23974 root.go:241] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
I1101 09:16:46.580762   23974 start.go:251] hostinfo: {"hostname":"acouzens-VirtualBox","uptime":19112,"bootTime":1572605894,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"19.10","kernelVersion":"5.3.0-19-generic","virtualizationSystem":"vbox","virtualizationRole":"guest","hostid":"a1010163-ea6c-4a86-90f4-ffcc9c06c1b2"}
I1101 09:16:46.581164   23974 start.go:261] virtualization: vbox guest
* minikube v1.5.2 on Ubuntu 19.10 (vbox/amd64)
I1101 09:16:46.581595   23974 start.go:547] selectDriver: flag="none", old=&{{false false https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso 2000 2 20000 none docker  [] [] [] []  default qemu:///system false false <nil> [] false [] /nfsshares  false false true} {v1.16.2  8443 minikube minikubeCA [] [] cluster.local docker     [{kubelet resolv-conf /run/systemd/resolve/resolv.conf}] true false}}

snip!

I1101 09:16:46.582799   23974 cache_images.go:90] Successfully cached all images.
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I1101 09:16:46.583044   23974 none.go:257] checking for running kubelet ...
I1101 09:16:46.583066   23974 exec_runner.go:42] (ExecRunner) Run:  systemctl is-active --quiet service kubelet
I1101 09:16:46.587993   23974 exec_runner.go:74] (ExecRunner) Non-zero exit: systemctl is-active --quiet service kubelet: exit status 3 (4.911538ms)
I1101 09:16:46.588028   23974 none.go:127] kubelet not running: check kubelet: command failed: systemctl is-active --quiet service kubelet
stdout:
stderr: : exit status 3
I1101 09:16:46.588037   23974 cluster.go:113] Machine state:  Stopped
* Starting existing none VM for "minikube" ...
I1101 09:16:46.588945   23974 cluster.go:131] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
* Waiting for the host to be provisioned ...
I1101 09:16:46.589057   23974 cluster.go:151] Detecting provisioner ...
I1101 09:16:46.589116   23974 main.go:110] libmachine: Waiting for SSH to be available...
I1101 09:16:46.589132   23974 main.go:110] libmachine: Getting to WaitForSSH function...
I1101 09:16:46.589146   23974 main.go:110] libmachine: Error getting ssh command 'exit 0' : driver does not support ssh commands
I1101 09:16:49.593746   23974 main.go:110] libmachine: Getting to WaitForSSH function...
I1101 09:16:49.593916   23974 main.go:110] libmachine: Error getting ssh command 'exit 0' : driver does not support ssh commands

This goes on until it times out with the error:

🔄  Retriable failure: detecting provisioner: Too many retries waiting for SSH to be available.  Last error: Maximum number of retries (60) exceeded
I1101 09:27:19.654918   24106 none.go:257] checking for running kubelet ...

@krsfrodaslz

> [quotes the full log from @andrewjcouzens's comment above]

Delete /root/.minikube/machines and try again.
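
For example (assuming the default root store path when running with sudo):

sudo rm -rf /root/.minikube/machines
sudo minikube start --vm-driver=none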

@Spajderix

Confirmed on an Ubuntu 19.10 VM, minikube v1.5.2.

@SimonMal84

Bug still exists in:
CentOS Linux release 7.7.1908
minikube v1.6.0-beta.1
docker 19.03.5

Start options for minikube are:
sudo minikube start --extra-config=kubelet.cgroup-driver=systemd --vm-driver=none --docker-env http_proxy=http://xxx:1111 --docker-env https_proxy=https://xxx:1111

@medyagh
Member

medyagh commented Dec 16, 2019

This issue seems to be related to this one:
#4172

@SimonMal84
I will close this one so we can track the progress in that one.

@medyagh medyagh closed this as completed Dec 16, 2019
@tstromberg tstromberg reopened this Dec 18, 2019
@tstromberg
Contributor

tstromberg commented Dec 18, 2019

Re-opening because I don't see the relationship between cgroup and this issue.

It's very strange that the provisioner is even attempting to use SSH. Help wanted!

If you see this, you may get some mileage by using minikube delete.

@tstromberg tstromberg changed the title none: Unable to start VM: detecting provisioner: Too many retries waiting for SSH to be available none: reusing node: detecting provisioner: Too many retries waiting for SSH to be available Dec 18, 2019
@sobkowiak

sobkowiak commented Mar 1, 2020

I have the same problem with minikube v1.7.3. I didn't have it with v1.6.x.

@rajeshkudaka

I was facing the same issue in v1.3.1. I did a bit of a code walkthrough of v1.3.1 and found that it was failing in configureHost, which, as far as I understand, is only required when the setup is done in a VM; I did not observe any configuration in it specific to the 'none' driver. I tried a couple of scenarios, and the fix below worked for me. I did not face any issues restarting multiple times, or in using the cluster, after applying the change.

Fix: https://github.com/kubernetes/minikube/compare/v1.3.1...rajeshkudaka:fix-4132?expand=1

I will create a PR if the change can still go into v1.3.1. Please let me know.
Thanks :)
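
For reference, that change has the same shape as the upstream docker-machine hack quoted earlier in the thread: return early from host configuration when DriverName() is "none", before anything tries to wait for SSH.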
