Cannot mount with minikube start command #13397

Closed
niklassemmler opened this issue Jan 19, 2022 · 7 comments
Labels: area/mount, kind/support, triage/needs-information

niklassemmler commented Jan 19, 2022

What Happened?

I am trying to start a minikube cluster with a mounted directory. When I SSH into minikube, the directory is empty:

❯ mkdir /tmp/my-folder
❯ touch /tmp/my-folder/some-file
❯ minikube start --mount-string="/tmp/my-folder:/data" --mount
😄  minikube v1.24.0 on Darwin 12.1 (arm64)
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=6, Memory=6144MB) ...
🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
❯ minikube ssh
docker@minikube:~$ ls /data
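
One thing worth checking here (a diagnostic sketch, not part of the original session) is whether the host path was bound into the minikube container at all. The attached log shows the container being created with --volume=/tmp/my-folder:/data, which can be confirmed from the host:

❯ docker inspect minikube --format '{{ json .Mounts }}'

If /tmp/my-folder shows up there as a bind mount but /data is still empty inside the node, the problem is likely in how Docker Desktop shares the host path rather than in minikube's flag handling; if it does not show up at all, the --mount flags were never applied.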

In contrast, when I mount the directory into an already running minikube instance, it works:

❯ minikube mount "/tmp/my-folder:/data" &
[4] 66045

~/Documents/code/minikube    base 16:24:48
❯ 📁  Mounting host path /tmp/my-folder into VM as /data ...
    ▪ Mount type:
    ▪ User ID:      docker
    ▪ Group ID:     docker
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 127.0.0.1:59362
🚀  Userspace file server: ufs starting
✅  Successfully mounted /tmp/my-folder to /data
❯
❯ minikube ssh
Last login: Wed Jan 19 15:24:36 2022 from 192.168.49.1
docker@minikube:~$ ls /data
some-file
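
The two code paths also look different from inside the node. A quick way to see which mechanism is active for /data (a sketch, not part of the original session):

❯ minikube ssh
docker@minikube:~$ mount | grep /data

With the manual minikube mount, the entry should be a 9p filesystem served from the host (matching the 9p2000.L version shown above); with start --mount on the docker driver, /data is a docker bind mount set up at container creation (visible in the attached log as --volume=/tmp/my-folder:/data).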

Background information:

  • System: macOS 12.1 with Apple M1 chip (arm64)
  • Driver: Docker Desktop 4.4.2 (73305)
  • minikube version: v1.24.0 (commit: 76b94fb)
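
As a stop-gap, the manual mount can be wrapped in a small script. This is only a hypothetical sketch of the workaround described above; the paths match this report, and the mount process has to stay alive for the mount to persist:

#!/usr/bin/env bash
# Workaround sketch: start the cluster, then mount the host dir manually,
# because --mount did not take effect here with the docker driver.
set -euo pipefail

HOST_DIR=/tmp/my-folder   # host path to share (assumed, same as in this report)
NODE_DIR=/data            # target path inside the minikube node

minikube start
minikube mount "${HOST_DIR}:${NODE_DIR}" &   # keeps running; the mount goes away when this process exits
MOUNT_PID=$!
echo "minikube mount running as PID ${MOUNT_PID}; kill it to unmount"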

Attach the log file

* 
* ==> Audit <==
* |------------|--------------------------------------------------------------------|----------|---------------|---------|-------------------------------|-------------------------------|
|  Command   |                                Args                                | Profile  |     User      | Version |          Start Time           |           End Time            |
|------------|--------------------------------------------------------------------|----------|---------------|---------|-------------------------------|-------------------------------|
| delete     |                                                                    | minikube | theuser | v1.24.0 | Wed, 19 Jan 2022 16:19:16 CET | Wed, 19 Jan 2022 16:19:20 CET |
| start      | --mount-string=/tmp/my-folder:/data                                | minikube | theuser | v1.24.0 | Wed, 19 Jan 2022 16:23:28 CET | Wed, 19 Jan 2022 16:24:10 CET |
|            | --mount                                                            |          |               |         |                               |                               |
| ssh        |                                                                    | minikube | theuser | v1.24.0 | Wed, 19 Jan 2022 16:24:36 CET | Wed, 19 Jan 2022 16:24:42 CET |
|------------|--------------------------------------------------------------------|----------|---------------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/01/19 16:23:28
Running on machine: orinoco
Binary: Built with gc go1.17.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0119 16:23:28.606395   65882 out.go:297] Setting OutFile to fd 1 ...
I0119 16:23:28.606487   65882 out.go:349] isatty.IsTerminal(1) = true
I0119 16:23:28.606488   65882 out.go:310] Setting ErrFile to fd 2...
I0119 16:23:28.606491   65882 out.go:349] isatty.IsTerminal(2) = true
I0119 16:23:28.606543   65882 root.go:313] Updating PATH: /Users/theuser/.minikube/bin
I0119 16:23:28.607309   65882 out.go:304] Setting JSON to false
I0119 16:23:28.641169   65882 start.go:112] hostinfo: {"hostname":"orinoco.fritz.box","uptime":109203,"bootTime":1642496605,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.1","kernelVersion":"21.2.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"0420659a-c116-59c8-9408-b671966a12ed"}
W0119 16:23:28.641284   65882 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0119 16:23:28.660861   65882 out.go:176] 😄  minikube v1.24.0 on Darwin 12.1 (arm64)
W0119 16:23:28.660922   65882 preload.go:294] Failed to list preload files: open /Users/theuser/.minikube/cache/preloaded-tarball: no such file or directory
I0119 16:23:28.660967   65882 notify.go:174] Checking for updates...
I0119 16:23:28.661073   65882 driver.go:343] Setting default libvirt URI to qemu:///system
I0119 16:23:28.661100   65882 global.go:111] Querying for installed drivers using PATH=/Users/theuser/.minikube/bin:/opt/homebrew/opt/util-linux/bin:/opt/homebrew/opt/[email protected]/bin:/opt/homebrew/opt/util-linux/bin:/opt/homebrew/Caskroom/miniconda/base/bin:/opt/homebrew/Caskroom/miniconda/base/condabin:/Users/theuser/.local/share/zinit/polaris/bin:/opt/homebrew/opt/[email protected]/bin:/Users/theuser/.sdkman/candidates/java/current/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
I0119 16:23:28.661194   65882 global.go:119] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/}
I0119 16:23:28.661248   65882 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0119 16:23:28.661254   65882 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0119 16:23:28.661325   65882 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0119 16:23:28.661371   65882 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0119 16:23:28.661378   65882 global.go:119] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/}
I0119 16:23:28.849135   65882 docker.go:132] docker version: linux-20.10.12
I0119 16:23:28.849315   65882 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0119 16:23:29.141365   65882 info.go:263] docker info: {ID:Q33J:YKDP:QDCP:3GGR:KZ73:EDKG:3BU2:GHG7:JKUX:JWM6:FLFE:HN3U Containers:7 ContainersRunning:0 ContainersPaused:0 ContainersStopped:7 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:48 SystemTime:2022-01-19 15:23:28.877810545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6478667776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
I0119 16:23:29.141425   65882 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0119 16:23:29.141538   65882 global.go:119] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/}
I0119 16:23:29.141548   65882 driver.go:278] not recommending "ssh" due to default: false
I0119 16:23:29.141555   65882 driver.go:313] Picked: docker
I0119 16:23:29.141557   65882 driver.go:314] Alternatives: [ssh]
I0119 16:23:29.141562   65882 driver.go:315] Rejects: [podman virtualbox vmware vmwarefusion parallels hyperkit]
I0119 16:23:29.179922   65882 out.go:176] ✨  Automatically selected the docker driver
I0119 16:23:29.179951   65882 start.go:280] selected driver: docker
I0119 16:23:29.179955   65882 start.go:762] validating driver "docker" against <nil>
I0119 16:23:29.179978   65882 start.go:773] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0119 16:23:29.180186   65882 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0119 16:23:29.287647   65882 info.go:263] docker info: {ID:Q33J:YKDP:QDCP:3GGR:KZ73:EDKG:3BU2:GHG7:JKUX:JWM6:FLFE:HN3U Containers:7 ContainersRunning:0 ContainersPaused:0 ContainersStopped:7 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:48 SystemTime:2022-01-19 15:23:29.210088004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6478667776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:<nil>}}
I0119 16:23:29.287742   65882 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
W0119 16:23:29.287818   65882 info.go:50] Unable to get CPU info: no such file or directory
W0119 16:23:29.287837   65882 start.go:925] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
I0119 16:23:29.287902   65882 start_flags.go:736] Wait components to verify : map[apiserver:true system_pods:true]
I0119 16:23:29.287910   65882 cni.go:93] Creating CNI manager for ""
I0119 16:23:29.287912   65882 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0119 16:23:29.287914   65882 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6144 CPUs:6 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/tmp/my-folder:/data] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/tmp/my-folder:/data}
I0119 16:23:29.337864   65882 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I0119 16:23:29.337919   65882 cache.go:118] Beginning downloading kic base image for docker with docker
I0119 16:23:29.356256   65882 out.go:176] 🚜  Pulling base image ...
I0119 16:23:29.356283   65882 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0119 16:23:29.356374   65882 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I0119 16:23:29.404773   65882 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I0119 16:23:29.404799   65882 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
W0119 16:23:29.527923   65882 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-arm64.tar.lz4 status code: 404
I0119 16:23:29.528238   65882 cache.go:107] acquiring lock: {Name:mkf9be57d0f94ced3189a354229ece02bee8a3af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528283   65882 cache.go:107] acquiring lock: {Name:mke998425da5c06c771d9ddd4e8de47bc694e5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528315   65882 cache.go:107] acquiring lock: {Name:mk4da70a8949421bd1958af289a54b8926a139d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528435   65882 cache.go:107] acquiring lock: {Name:mk5393cf8db1ac3984049521bfe4372fbabd11ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528574   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 exists
I0119 16:23:29.528574   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 exists
I0119 16:23:29.528586   65882 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.3" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3" took 333.459µs
I0119 16:23:29.528586   65882 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.3" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3" took 333.375µs
I0119 16:23:29.528595   65882 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.22.3 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 succeeded
I0119 16:23:29.528596   65882 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.22.3 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 succeeded
I0119 16:23:29.528597   65882 cache.go:107] acquiring lock: {Name:mk0caeed5083b6da4fc27a42909df884e7531864 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528613   65882 cache.go:107] acquiring lock: {Name:mke777a258f5aed60d95ece2964f01caa2f07b40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528626   65882 cache.go:115] /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I0119 16:23:29.528642   65882 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/theuser/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 419.584µs
I0119 16:23:29.528641   65882 cache.go:107] acquiring lock: {Name:mkd30072f244d3eefa24821639ad0509602ad0c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528651   65882 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I0119 16:23:29.528670   65882 profile.go:147] Saving config to /Users/theuser/.minikube/profiles/minikube/config.json ...
I0119 16:23:29.528659   65882 cache.go:107] acquiring lock: {Name:mk0d89dce513c8920644772d2a79f0a0abc82f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528696   65882 cache.go:107] acquiring lock: {Name:mk9250e9618c48a001c3aba006b87ad40f581871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528728   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/config.json: {Name:mke94ccc6c0c672ca1e3b7b344f8e260b391e3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:23:29.528707   65882 cache.go:107] acquiring lock: {Name:mk3258cb1c6a7db006b2f9c6f188d3b7a036e99a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.528749   65882 cache.go:115] /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
I0119 16:23:29.528788   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 exists
I0119 16:23:29.528799   65882 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.3" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3" took 475.959µs
I0119 16:23:29.528813   65882 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.3 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 succeeded
I0119 16:23:29.528761   65882 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/theuser/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 152.041µs
I0119 16:23:29.528805   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/pause_3.5 exists
I0119 16:23:29.528828   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 exists
I0119 16:23:29.528831   65882 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
I0119 16:23:29.528838   65882 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/pause_3.5" took 239.667µs
I0119 16:23:29.528839   65882 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.3" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3" took 333.125µs
I0119 16:23:29.528844   65882 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/pause_3.5 succeeded
I0119 16:23:29.528847   65882 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.22.3 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 succeeded
I0119 16:23:29.528853   65882 cache.go:115] /Users/theuser/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0119 16:23:29.528854   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 exists
I0119 16:23:29.528866   65882 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/theuser/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 353.25µs
I0119 16:23:29.528865   65882 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0" took 218.75µs
I0119 16:23:29.528874   65882 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 succeeded
I0119 16:23:29.528880   65882 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/theuser/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0119 16:23:29.528883   65882 cache.go:115] /Users/theuser/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 exists
I0119 16:23:29.528891   65882 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "/Users/theuser/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4" took 402.084µs
I0119 16:23:29.528896   65882 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> /Users/theuser/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 succeeded
I0119 16:23:29.528905   65882 cache.go:87] Successfully saved all images to host disk.
I0119 16:23:29.529129   65882 cache.go:206] Successfully downloaded all kic artifacts
I0119 16:23:29.529155   65882 start.go:313] acquiring machines lock for minikube: {Name:mk5e7030783bfb5ef5500a0728dbbdcb24c4d479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 16:23:29.529228   65882 start.go:317] acquired machines lock for "minikube" in 59.541µs
I0119 16:23:29.529261   65882 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6144 CPUs:6 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/tmp/my-folder:/data] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/tmp/my-folder:/data} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I0119 16:23:29.529353   65882 start.go:126] createHost starting for "" (driver="docker")
I0119 16:23:29.567061   65882 out.go:203] 🔥  Creating docker container (CPUs=6, Memory=6144MB) ...
I0119 16:23:29.567668   65882 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0119 16:23:29.567692   65882 client.go:168] LocalClient.Create starting
I0119 16:23:29.567866   65882 main.go:130] libmachine: Reading certificate data from /Users/theuser/.minikube/certs/ca.pem
I0119 16:23:29.568247   65882 main.go:130] libmachine: Decoding PEM data...
I0119 16:23:29.568272   65882 main.go:130] libmachine: Parsing certificate...
I0119 16:23:29.568387   65882 main.go:130] libmachine: Reading certificate data from /Users/theuser/.minikube/certs/cert.pem
I0119 16:23:29.568639   65882 main.go:130] libmachine: Decoding PEM data...
I0119 16:23:29.568656   65882 main.go:130] libmachine: Parsing certificate...
I0119 16:23:29.569634   65882 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0119 16:23:29.621500   65882 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0119 16:23:29.621659   65882 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs...
I0119 16:23:29.621680   65882 cli_runner.go:115] Run: docker network inspect minikube
W0119 16:23:29.653240   65882 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0119 16:23:29.653274   65882 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0119 16:23:29.653290   65882 network_create.go:259] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0119 16:23:29.653437   65882 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0119 16:23:29.679602   65882 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x14000eea058] misses:0}
I0119 16:23:29.679631   65882 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0119 16:23:29.679641   65882 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0119 16:23:29.679736   65882 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0119 16:23:29.745913   65882 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0119 16:23:29.745979   65882 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0119 16:23:29.746137   65882 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0119 16:23:29.774571   65882 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0119 16:23:29.799505   65882 oci.go:102] Successfully created a docker volume minikube
I0119 16:23:29.799689   65882 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I0119 16:23:30.374840   65882 oci.go:106] Successfully prepared a docker volume minikube
I0119 16:23:30.375106   65882 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0119 16:23:30.375108   65882 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0119 16:23:30.469509   65882 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=6144mb --memory-swap=6144mb --cpus=6 -e container=docker --expose 8443 --volume=/tmp/my-folder:/data --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
I0119 16:23:30.869634   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0119 16:23:30.899096   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0119 16:23:30.926635   65882 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0119 16:23:31.021637   65882 oci.go:281] the created container "minikube" has a running status.
I0119 16:23:31.021672   65882 kic.go:210] Creating ssh key for kic: /Users/theuser/.minikube/machines/minikube/id_rsa...
I0119 16:23:31.049064   65882 kic_runner.go:187] docker (temp): /Users/theuser/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0119 16:23:31.129254   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0119 16:23:31.158241   65882 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0119 16:23:31.158253   65882 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0119 16:23:31.274895   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0119 16:23:31.302640   65882 machine.go:88] provisioning docker machine ...
I0119 16:23:31.302692   65882 ubuntu.go:169] provisioning hostname "minikube"
I0119 16:23:31.302862   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:31.329317   65882 main.go:130] libmachine: Using SSH client type: native
I0119 16:23:31.329537   65882 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d1b940] 0x102d1e760 <nil>  [] 0s} 127.0.0.1 59326 <nil> <nil>}
I0119 16:23:31.329558   65882 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0119 16:23:31.454990   65882 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube

I0119 16:23:31.455116   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:31.484500   65882 main.go:130] libmachine: Using SSH client type: native
I0119 16:23:31.484631   65882 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d1b940] 0x102d1e760 <nil>  [] 0s} 127.0.0.1 59326 <nil> <nil>}
I0119 16:23:31.484639   65882 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0119 16:23:31.594602   65882 main.go:130] libmachine: SSH cmd err, output: <nil>: 
I0119 16:23:31.594613   65882 ubuntu.go:175] set auth options {CertDir:/Users/theuser/.minikube CaCertPath:/Users/theuser/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/theuser/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/theuser/.minikube/machines/server.pem ServerKeyPath:/Users/theuser/.minikube/machines/server-key.pem ClientKeyPath:/Users/theuser/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/theuser/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/theuser/.minikube}
I0119 16:23:31.594637   65882 ubuntu.go:177] setting up certificates
I0119 16:23:31.594642   65882 provision.go:83] configureAuth start
I0119 16:23:31.594764   65882 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0119 16:23:31.621371   65882 provision.go:138] copyHostCerts
I0119 16:23:31.621522   65882 exec_runner.go:144] found /Users/theuser/.minikube/cert.pem, removing ...
I0119 16:23:31.621526   65882 exec_runner.go:207] rm: /Users/theuser/.minikube/cert.pem
I0119 16:23:31.638087   65882 exec_runner.go:151] cp: /Users/theuser/.minikube/certs/cert.pem --> /Users/theuser/.minikube/cert.pem (1139 bytes)
I0119 16:23:31.638322   65882 exec_runner.go:144] found /Users/theuser/.minikube/key.pem, removing ...
I0119 16:23:31.638324   65882 exec_runner.go:207] rm: /Users/theuser/.minikube/key.pem
I0119 16:23:31.638816   65882 exec_runner.go:151] cp: /Users/theuser/.minikube/certs/key.pem --> /Users/theuser/.minikube/key.pem (1679 bytes)
I0119 16:23:31.639143   65882 exec_runner.go:144] found /Users/theuser/.minikube/ca.pem, removing ...
I0119 16:23:31.639146   65882 exec_runner.go:207] rm: /Users/theuser/.minikube/ca.pem
I0119 16:23:31.639197   65882 exec_runner.go:151] cp: /Users/theuser/.minikube/certs/ca.pem --> /Users/theuser/.minikube/ca.pem (1099 bytes)
I0119 16:23:31.639699   65882 provision.go:112] generating server cert: /Users/theuser/.minikube/machines/server.pem ca-key=/Users/theuser/.minikube/certs/ca.pem private-key=/Users/theuser/.minikube/certs/ca-key.pem org=theuser.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0119 16:23:31.750799   65882 provision.go:172] copyRemoteCerts
I0119 16:23:31.751077   65882 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0119 16:23:31.751123   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:31.775114   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:23:31.858046   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1099 bytes)
I0119 16:23:31.872310   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0119 16:23:31.888230   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0119 16:23:31.903167   65882 provision.go:86] duration metric: configureAuth took 308.502417ms
I0119 16:23:31.903177   65882 ubuntu.go:193] setting minikube options for container-runtime
I0119 16:23:31.903335   65882 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I0119 16:23:31.903423   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:31.930751   65882 main.go:130] libmachine: Using SSH client type: native
I0119 16:23:31.930902   65882 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d1b940] 0x102d1e760 <nil>  [] 0s} 127.0.0.1 59326 <nil> <nil>}
I0119 16:23:31.930908   65882 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0119 16:23:32.042220   65882 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay

I0119 16:23:32.042228   65882 ubuntu.go:71] root file system type: overlay
I0119 16:23:32.042338   65882 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0119 16:23:32.042448   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:32.068599   65882 main.go:130] libmachine: Using SSH client type: native
I0119 16:23:32.068749   65882 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d1b940] 0x102d1e760 <nil>  [] 0s} 127.0.0.1 59326 <nil> <nil>}
I0119 16:23:32.068805   65882 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0119 16:23:32.192597   65882 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0119 16:23:32.192925   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:32.221830   65882 main.go:130] libmachine: Using SSH client type: native
I0119 16:23:32.221999   65882 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d1b940] 0x102d1e760 <nil>  [] 0s} 127.0.0.1 59326 <nil> <nil>}
I0119 16:23:32.222008   65882 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0119 16:23:32.743185   65882 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:53:13.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-01-19 15:23:32.189184005 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0119 16:23:32.743208   65882 machine.go:91] provisioned docker machine in 1.440544917s
I0119 16:23:32.743217   65882 client.go:171] LocalClient.Create took 3.175500625s
I0119 16:23:32.743236   65882 start.go:168] duration metric: libmachine.API.Create for "minikube" took 3.175548916s
I0119 16:23:32.743243   65882 start.go:267] post-start starting for "minikube" (driver="docker")
I0119 16:23:32.743248   65882 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0119 16:23:32.743505   65882 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0119 16:23:32.743626   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:32.770866   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:23:32.851521   65882 ssh_runner.go:152] Run: cat /etc/os-release
I0119 16:23:32.854993   65882 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0119 16:23:32.855042   65882 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0119 16:23:32.855070   65882 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0119 16:23:32.855089   65882 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0119 16:23:32.855102   65882 filesync.go:126] Scanning /Users/theuser/.minikube/addons for local assets ...
I0119 16:23:32.855432   65882 filesync.go:126] Scanning /Users/theuser/.minikube/files for local assets ...
I0119 16:23:32.855558   65882 start.go:270] post-start completed in 112.3055ms
I0119 16:23:32.857431   65882 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0119 16:23:32.884719   65882 profile.go:147] Saving config to /Users/theuser/.minikube/profiles/minikube/config.json ...
I0119 16:23:32.885156   65882 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0119 16:23:32.885205   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:32.910923   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:23:32.995217   65882 start.go:129] duration metric: createHost completed in 3.465833584s
I0119 16:23:32.995226   65882 start.go:80] releasing machines lock for "minikube", held for 3.465971916s
I0119 16:23:32.995351   65882 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0119 16:23:33.021064   65882 ssh_runner.go:152] Run: systemctl --version
I0119 16:23:33.021126   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:33.021943   65882 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0119 16:23:33.022073   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:23:33.048788   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:23:33.050851   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:23:33.268105   65882 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0119 16:23:33.277306   65882 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0119 16:23:33.288793   65882 cruntime.go:255] skipping containerd shutdown because we are bound to it
I0119 16:23:33.289011   65882 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0119 16:23:33.298437   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0119 16:23:33.309576   65882 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I0119 16:23:33.370225   65882 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I0119 16:23:33.426612   65882 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0119 16:23:33.436827   65882 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0119 16:23:33.491133   65882 ssh_runner.go:152] Run: sudo systemctl start docker
I0119 16:23:33.500338   65882 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0119 16:23:33.550381   65882 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0119 16:23:33.621998   65882 out.go:203] 🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
I0119 16:23:33.623394   65882 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0119 16:23:33.744838   65882 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0119 16:23:33.746518   65882 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
I0119 16:23:33.750706   65882 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0119 16:23:33.759153   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0119 16:23:33.785875   65882 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0119 16:23:33.785982   65882 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0119 16:23:33.812015   65882 docker.go:558] Got preloaded images: 
I0119 16:23:33.812042   65882 docker.go:564] k8s.gcr.io/kube-apiserver:v1.22.3 wasn't preloaded
I0119 16:23:33.812045   65882 cache_images.go:83] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3 k8s.gcr.io/pause:3.5 k8s.gcr.io/etcd:3.5.0-0 k8s.gcr.io/coredns/coredns:v1.8.4 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
I0119 16:23:33.823096   65882 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.4
I0119 16:23:33.824719   65882 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.3
I0119 16:23:33.825340   65882 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
I0119 16:23:33.826044   65882 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0119 16:23:33.827258   65882 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.3
I0119 16:23:33.828130   65882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0119 16:23:33.829428   65882 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.3
I0119 16:23:33.830738   65882 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.22.3
I0119 16:23:33.830967   65882 image.go:134] retrieving image: k8s.gcr.io/pause:3.5
I0119 16:23:33.832099   65882 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.0-0
I0119 16:23:33.843151   65882 image.go:180] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.4: Error response from daemon: reference does not exist
I0119 16:23:33.845843   65882 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
I0119 16:23:33.846037   65882 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.22.3: Error response from daemon: reference does not exist
I0119 16:23:33.846207   65882 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
I0119 16:23:33.846816   65882 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.22.3: Error response from daemon: reference does not exist
I0119 16:23:33.847702   65882 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
I0119 16:23:33.848350   65882 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.22.3: Error response from daemon: reference does not exist
I0119 16:23:33.849643   65882 image.go:180] daemon lookup for k8s.gcr.io/pause:3.5: Error response from daemon: reference does not exist
I0119 16:23:33.850546   65882 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.3: Error response from daemon: reference does not exist
I0119 16:23:33.851223   65882 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.5.0-0: Error response from daemon: reference does not exist
I0119 16:23:34.484671   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.22.3
I0119 16:23:34.549839   65882 cache_images.go:111] "k8s.gcr.io/kube-controller-manager:v1.22.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.22.3" does not exist at hash "42e51ba6db03efeaff32c77e1fc61a1e8a596f98343ca1882a8e1700dc263efc" in container runtime
I0119 16:23:34.549879   65882 docker.go:239] Removing image: k8s.gcr.io/kube-controller-manager:v1.22.3
I0119 16:23:34.550049   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.22.3
I0119 16:23:34.578579   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3
I0119 16:23:34.579777   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.3
I0119 16:23:34.584683   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-controller-manager_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.22.3': No such file or directory
I0119 16:23:34.584737   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 --> /var/lib/minikube/images/kube-controller-manager_v1.22.3 (27019776 bytes)
W0119 16:23:34.585728   65882 image.go:267] image k8s.gcr.io/coredns/coredns:v1.8.4 arch mismatch: want arm64 got amd64. fixing
I0119 16:23:34.585811   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.5
I0119 16:23:34.586003   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.4
I0119 16:23:34.589066   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.22.3
I0119 16:23:34.641205   65882 cache_images.go:111] "k8s.gcr.io/coredns/coredns:v1.8.4" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.4" does not exist at hash "008e44c427c6ff7a26f5a1a0ddebebfd3ea33231bd96f546e1381d1dc39d34a0" in container runtime
I0119 16:23:34.641263   65882 docker.go:239] Removing image: k8s.gcr.io/coredns/coredns:v1.8.4
I0119 16:23:34.641609   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/coredns/coredns:v1.8.4
I0119 16:23:34.641735   65882 cache_images.go:111] "k8s.gcr.io/pause:3.5" needs transfer: "k8s.gcr.io/pause:3.5" does not exist at hash "f7ff3c40426311c68450b0a2fce030935a625cef0e606ff2e6756870f552e760" in container runtime
I0119 16:23:34.641775   65882 docker.go:239] Removing image: k8s.gcr.io/pause:3.5
I0119 16:23:34.641951   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/pause:3.5
I0119 16:23:34.649122   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.22.3
I0119 16:23:34.696288   65882 cache_images.go:111] "k8s.gcr.io/kube-scheduler:v1.22.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.22.3" does not exist at hash "3893bb7d239347e1eec68a8f39501b676fc0a92b2c0101e415654bcd14a01eac" in container runtime
I0119 16:23:34.696326   65882 docker.go:239] Removing image: k8s.gcr.io/kube-scheduler:v1.22.3
I0119 16:23:34.696493   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.22.3
W0119 16:23:34.703550   65882 image.go:267] image k8s.gcr.io/etcd:3.5.0-0 arch mismatch: want arm64 got amd64. fixing
I0119 16:23:34.704093   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.5.0-0
I0119 16:23:34.704184   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.22.3
I0119 16:23:34.765170   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4
I0119 16:23:34.765513   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.4
I0119 16:23:34.790593   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/pause_3.5
I0119 16:23:34.790693   65882 cache_images.go:111] "k8s.gcr.io/kube-apiserver:v1.22.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.22.3" does not exist at hash "32513be2649f452b9ed3e4aeaf8b9968224077a5838bc4446afcd8ad74e51acf" in container runtime
I0119 16:23:34.790721   65882 docker.go:239] Removing image: k8s.gcr.io/kube-apiserver:v1.22.3
I0119 16:23:34.790863   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.22.3
I0119 16:23:34.790897   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.5
I0119 16:23:34.874812   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3
I0119 16:23:34.875140   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.3
I0119 16:23:35.000798   65882 cache_images.go:111] "k8s.gcr.io/etcd:3.5.0-0" needs transfer: "k8s.gcr.io/etcd:3.5.0-0" does not exist at hash "a2ee49d2d4320959e0894768b7ca97d69e03bc360d90b591538359abf2a91609" in container runtime
I0119 16:23:35.000833   65882 docker.go:239] Removing image: k8s.gcr.io/etcd:3.5.0-0
I0119 16:23:35.000949   65882 cache_images.go:111] "k8s.gcr.io/kube-proxy:v1.22.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.22.3" does not exist at hash "3a8d1d04758e2eada31b9acaeebe6e9a9dc60f5ac267183611639fc8e0e0e0aa" in container runtime
I0119 16:23:35.000969   65882 docker.go:239] Removing image: k8s.gcr.io/kube-proxy:v1.22.3
I0119 16:23:35.001006   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/etcd:3.5.0-0
I0119 16:23:35.001101   65882 ssh_runner.go:152] Run: docker rmi k8s.gcr.io/kube-proxy:v1.22.3
I0119 16:23:35.001147   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/coredns_v1.8.4: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.4': No such file or directory
I0119 16:23:35.001172   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 --> /var/lib/minikube/images/coredns_v1.8.4 (12264448 bytes)
I0119 16:23:35.001229   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/pause_3.5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.5': No such file or directory
I0119 16:23:35.001244   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/pause_3.5 --> /var/lib/minikube/images/pause_3.5 (252416 bytes)
I0119 16:23:35.001291   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3
I0119 16:23:35.001515   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.3
W0119 16:23:35.018968   65882 image.go:267] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I0119 16:23:35.019485   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0119 16:23:35.034531   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-scheduler_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.22.3': No such file or directory
I0119 16:23:35.034642   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 --> /var/lib/minikube/images/kube-scheduler_v1.22.3 (13499904 bytes)
I0119 16:23:35.161452   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-apiserver_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.22.3': No such file or directory
I0119 16:23:35.161520   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 --> /var/lib/minikube/images/kube-apiserver_v1.22.3 (28383744 bytes)
I0119 16:23:35.168170   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3
I0119 16:23:35.168200   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0
I0119 16:23:35.168775   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.3
I0119 16:23:35.168822   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.0-0
I0119 16:23:35.207247   65882 cache_images.go:111] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I0119 16:23:35.207283   65882 docker.go:239] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0119 16:23:35.207450   65882 ssh_runner.go:152] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0119 16:23:35.255403   65882 docker.go:206] Loading image: /var/lib/minikube/images/pause_3.5
I0119 16:23:35.255442   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.5 | docker load"
I0119 16:23:35.324923   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-proxy_v1.22.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.22.3: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.22.3': No such file or directory
I0119 16:23:35.325042   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 --> /var/lib/minikube/images/kube-proxy_v1.22.3 (34377728 bytes)
I0119 16:23:35.325118   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/etcd_3.5.0-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.0-0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.0-0': No such file or directory
I0119 16:23:35.325143   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 --> /var/lib/minikube/images/etcd_3.5.0-0 (157800448 bytes)
I0119 16:23:35.381440   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
I0119 16:23:35.381794   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
W0119 16:23:35.554754   65882 image.go:267] image docker.io/kubernetesui/dashboard:v2.3.1 arch mismatch: want arm64 got amd64. fixing
I0119 16:23:35.555109   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.3.1
W0119 16:23:35.557439   65882 image.go:267] image docker.io/kubernetesui/metrics-scraper:v1.0.7 arch mismatch: want arm64 got amd64. fixing
I0119 16:23:35.557638   65882 ssh_runner.go:152] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.7
I0119 16:23:35.603367   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0119 16:23:35.603435   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I0119 16:23:35.623469   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/pause_3.5 from cache
I0119 16:23:35.775068   65882 cache_images.go:111] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "ea493a196fbd2426a92d57ad4e606d1efc11049d7e7bedf90b160d74d75308c2" in container runtime
I0119 16:23:35.775117   65882 docker.go:239] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0119 16:23:35.775731   65882 ssh_runner.go:152] Run: docker rmi docker.io/kubernetesui/metrics-scraper:v1.0.7
I0119 16:23:35.788328   65882 cache_images.go:111] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: "docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "9fe3914f585c5ba68c0cbad7c16febea5a09caec8dbc1b0e22f2b17e613ed88a" in container runtime
I0119 16:23:35.788359   65882 docker.go:239] Removing image: docker.io/kubernetesui/dashboard:v2.3.1
I0119 16:23:35.788508   65882 ssh_runner.go:152] Run: docker rmi docker.io/kubernetesui/dashboard:v2.3.1
I0119 16:23:35.988546   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
I0119 16:23:35.988643   65882 cache_images.go:281] Loading image from: /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
I0119 16:23:35.988915   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1
I0119 16:23:35.988939   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7
I0119 16:23:36.135512   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory
I0119 16:23:36.135730   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (13969408 bytes)
I0119 16:23:36.135831   65882 ssh_runner.go:309] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory
I0119 16:23:36.135861   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (65396736 bytes)
I0119 16:23:37.452125   65882 docker.go:206] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.22.3
I0119 16:23:37.452154   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.22.3 | docker load"
I0119 16:23:40.408237   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.22.3 | docker load": (2.956038917s)
I0119 16:23:40.408262   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 from cache
I0119 16:23:40.408328   65882 docker.go:206] Loading image: /var/lib/minikube/images/coredns_v1.8.4
I0119 16:23:40.408343   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.4 | docker load"
I0119 16:23:41.748574   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.4 | docker load": (1.340201333s)
I0119 16:23:41.748591   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 from cache
I0119 16:23:41.748629   65882 docker.go:206] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0119 16:23:41.748637   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I0119 16:23:42.592484   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0119 16:23:42.592522   65882 docker.go:206] Loading image: /var/lib/minikube/images/kube-scheduler_v1.22.3
I0119 16:23:42.592536   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.22.3 | docker load"
I0119 16:23:43.829368   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.22.3 | docker load": (1.236803792s)
I0119 16:23:43.829385   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 from cache
I0119 16:23:43.829412   65882 docker.go:206] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7
I0119 16:23:43.829426   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.7 | docker load"
I0119 16:23:44.874060   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.7 | docker load": (1.044608792s)
I0119 16:23:44.874077   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache
I0119 16:23:44.874109   65882 docker.go:206] Loading image: /var/lib/minikube/images/kube-apiserver_v1.22.3
I0119 16:23:44.874117   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.22.3 | docker load"
I0119 16:23:47.340964   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.22.3 | docker load": (2.466813375s)
I0119 16:23:47.340981   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 from cache
I0119 16:23:47.341019   65882 docker.go:206] Loading image: /var/lib/minikube/images/kube-proxy_v1.22.3
I0119 16:23:47.341034   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.22.3 | docker load"
I0119 16:23:48.645993   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.22.3 | docker load": (1.304931958s)
I0119 16:23:48.646010   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 from cache
I0119 16:23:48.646044   65882 docker.go:206] Loading image: /var/lib/minikube/images/dashboard_v2.3.1
I0119 16:23:48.646058   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load"
I0119 16:23:50.704263   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load": (2.058173792s)
I0119 16:23:50.704279   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 from cache
I0119 16:23:50.704332   65882 docker.go:206] Loading image: /var/lib/minikube/images/etcd_3.5.0-0
I0119 16:23:50.704377   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.0-0 | docker load"
I0119 16:23:53.594466   65882 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.0-0 | docker load": (2.890049s)
I0119 16:23:53.602045   65882 cache_images.go:310] Transferred and loaded /Users/theuser/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 from cache
I0119 16:23:53.602133   65882 cache_images.go:118] Successfully loaded all cached images
I0119 16:23:53.602139   65882 cache_images.go:87] LoadImages completed in 19.789957458s
I0119 16:23:53.602359   65882 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I0119 16:23:53.685846   65882 cni.go:93] Creating CNI manager for ""
I0119 16:23:53.685858   65882 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0119 16:23:53.686120   65882 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0119 16:23:53.686143   65882 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0119 16:23:53.686422   65882 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0119 16:23:53.687812   65882 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0119 16:23:53.689502   65882 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
I0119 16:23:53.697088   65882 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.22.3: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.22.3': No such file or directory

Initiating transfer...
I0119 16:23:53.697443   65882 ssh_runner.go:152] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.22.3
I0119 16:23:53.705078   65882 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubectl.sha256
I0119 16:23:53.705121   65882 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubelet.sha256
I0119 16:23:53.705298   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubectl
I0119 16:23:53.705298   65882 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0119 16:23:53.705456   65882 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.3/bin/linux/arm64/kubeadm.sha256
I0119 16:23:53.705616   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubeadm
I0119 16:23:53.710293   65882 ssh_runner.go:309] existence check for /var/lib/minikube/binaries/v1.22.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.22.3/kubeadm': No such file or directory
I0119 16:23:53.710313   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/linux/v1.22.3/kubeadm --> /var/lib/minikube/binaries/v1.22.3/kubeadm (42467328 bytes)
I0119 16:23:53.710382   65882 ssh_runner.go:309] existence check for /var/lib/minikube/binaries/v1.22.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.22.3/kubectl': No such file or directory
I0119 16:23:53.710399   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/linux/v1.22.3/kubectl --> /var/lib/minikube/binaries/v1.22.3/kubectl (43450368 bytes)
I0119 16:23:53.726975   65882 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubelet
I0119 16:23:53.869333   65882 ssh_runner.go:309] existence check for /var/lib/minikube/binaries/v1.22.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.22.3/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.22.3/kubelet': No such file or directory
I0119 16:23:53.869608   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/cache/linux/v1.22.3/kubelet --> /var/lib/minikube/binaries/v1.22.3/kubelet (112474152 bytes)
I0119 16:23:59.924624   65882 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0119 16:23:59.932781   65882 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0119 16:23:59.945597   65882 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0119 16:23:59.956809   65882 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0119 16:23:59.968341   65882 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0119 16:23:59.972239   65882 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0119 16:23:59.980259   65882 certs.go:54] Setting up /Users/theuser/.minikube/profiles/minikube for IP: 192.168.49.2
I0119 16:23:59.981124   65882 certs.go:182] skipping minikubeCA CA generation: /Users/theuser/.minikube/ca.key
I0119 16:23:59.981633   65882 certs.go:182] skipping proxyClientCA CA generation: /Users/theuser/.minikube/proxy-client-ca.key
I0119 16:23:59.981765   65882 certs.go:302] generating minikube-user signed cert: /Users/theuser/.minikube/profiles/minikube/client.key
I0119 16:23:59.981832   65882 crypto.go:68] Generating cert /Users/theuser/.minikube/profiles/minikube/client.crt with IP's: []
I0119 16:24:00.152295   65882 crypto.go:156] Writing cert to /Users/theuser/.minikube/profiles/minikube/client.crt ...
I0119 16:24:00.152305   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/client.crt: {Name:mk354830dbff9559c66d17a548be633f132160e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:00.152555   65882 crypto.go:164] Writing key to /Users/theuser/.minikube/profiles/minikube/client.key ...
I0119 16:24:00.152557   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/client.key: {Name:mk7c6d195a380c750fbd8c49c6cf5e5ea5db209c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:00.152657   65882 certs.go:302] generating minikube signed cert: /Users/theuser/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0119 16:24:00.152666   65882 crypto.go:68] Generating cert /Users/theuser/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0119 16:24:00.234158   65882 crypto.go:156] Writing cert to /Users/theuser/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0119 16:24:00.234167   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk1ce74cc54a6fe74f7d9733062418ebeae4c7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:00.234397   65882 crypto.go:164] Writing key to /Users/theuser/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0119 16:24:00.234399   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mka45862c0597e12ed5ac1865dba9bbc53b2437e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:00.234493   65882 certs.go:320] copying /Users/theuser/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/theuser/.minikube/profiles/minikube/apiserver.crt
I0119 16:24:00.234780   65882 certs.go:324] copying /Users/theuser/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/theuser/.minikube/profiles/minikube/apiserver.key
I0119 16:24:00.234887   65882 certs.go:302] generating aggregator signed cert: /Users/theuser/.minikube/profiles/minikube/proxy-client.key
I0119 16:24:00.234896   65882 crypto.go:68] Generating cert /Users/theuser/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0119 16:24:00.329329   65882 crypto.go:156] Writing cert to /Users/theuser/.minikube/profiles/minikube/proxy-client.crt ...
I0119 16:24:00.329335   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/proxy-client.crt: {Name:mk06ecc397b63945acbb226a610b7f885dc3e402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:00.329498   65882 crypto.go:164] Writing key to /Users/theuser/.minikube/profiles/minikube/proxy-client.key ...
I0119 16:24:00.329500   65882 lock.go:35] WriteFile acquiring /Users/theuser/.minikube/profiles/minikube/proxy-client.key: {Name:mk6fcd9f18d3bf93b9601853427528248b53a920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:00.330841   65882 certs.go:388] found cert: /Users/theuser/.minikube/certs/Users/theuser/.minikube/certs/ca-key.pem (1679 bytes)
I0119 16:24:00.331027   65882 certs.go:388] found cert: /Users/theuser/.minikube/certs/Users/theuser/.minikube/certs/ca.pem (1099 bytes)
I0119 16:24:00.331168   65882 certs.go:388] found cert: /Users/theuser/.minikube/certs/Users/theuser/.minikube/certs/cert.pem (1139 bytes)
I0119 16:24:00.331258   65882 certs.go:388] found cert: /Users/theuser/.minikube/certs/Users/theuser/.minikube/certs/key.pem (1679 bytes)
I0119 16:24:00.331848   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0119 16:24:00.363373   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0119 16:24:00.378615   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0119 16:24:00.394553   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0119 16:24:00.408180   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0119 16:24:00.423509   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0119 16:24:00.438722   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0119 16:24:00.454029   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0119 16:24:00.468835   65882 ssh_runner.go:319] scp /Users/theuser/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0119 16:24:00.484057   65882 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0119 16:24:00.495043   65882 ssh_runner.go:152] Run: openssl version
I0119 16:24:00.499937   65882 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0119 16:24:00.507335   65882 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0119 16:24:00.510791   65882 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  3 08:53 /usr/share/ca-certificates/minikubeCA.pem
I0119 16:24:00.510849   65882 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0119 16:24:00.515448   65882 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0119 16:24:00.522544   65882 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6144 CPUs:6 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/tmp/my-folder:/data] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/tmp/my-folder:/data}
I0119 16:24:00.522640   65882 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0119 16:24:00.553066   65882 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0119 16:24:00.560538   65882 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0119 16:24:00.568829   65882 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0119 16:24:00.569136   65882 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0119 16:24:00.576497   65882 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0119 16:24:00.576523   65882 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0119 16:24:01.048025   65882 out.go:203]     ▪ Generating certificates and keys ...
I0119 16:24:02.452491   65882 out.go:203]     ▪ Booting up control plane ...
I0119 16:24:08.495995   65882 out.go:203]     ▪ Configuring RBAC rules ...
I0119 16:24:08.873837   65882 cni.go:93] Creating CNI manager for ""
I0119 16:24:08.873852   65882 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0119 16:24:08.873884   65882 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0119 16:24:08.874390   65882 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_01_19T16_24_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0119 16:24:08.874390   65882 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0119 16:24:08.946473   65882 ops.go:34] apiserver oom_adj: -16
I0119 16:24:09.070752   65882 kubeadm.go:985] duration metric: took 196.866541ms to wait for elevateKubeSystemPrivileges.
I0119 16:24:09.070786   65882 kubeadm.go:392] StartCluster complete in 8.548194208s
I0119 16:24:09.070804   65882 settings.go:142] acquiring lock: {Name:mk51e69a1b138e8a080b7de78d000e12295691ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:09.070989   65882 settings.go:150] Updating kubeconfig:  /Users/theuser/.kube/config
I0119 16:24:09.073079   65882 lock.go:35] WriteFile acquiring /Users/theuser/.kube/config: {Name:mk6cc3a313fe196726d565bf116e4d67b77ed4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 16:24:09.595242   65882 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0119 16:24:09.595287   65882 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I0119 16:24:09.595299   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0119 16:24:09.595488   65882 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0119 16:24:09.613033   65882 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0119 16:24:09.613056   65882 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0119 16:24:09.613067   65882 addons.go:165] addon storage-provisioner should already be in state true
I0119 16:24:09.596574   65882 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I0119 16:24:09.613112   65882 host.go:66] Checking if "minikube" exists ...
I0119 16:24:09.612931   65882 out.go:176] 🔎  Verifying Kubernetes components...
I0119 16:24:09.613449   65882 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0119 16:24:09.613481   65882 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0119 16:24:09.613914   65882 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0119 16:24:09.614712   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0119 16:24:09.618069   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0119 16:24:09.644346   65882 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0119 16:24:09.644386   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0119 16:24:09.796900   65882 start.go:739] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
I0119 16:24:09.812066   65882 api_server.go:51] waiting for apiserver process to appear ...
I0119 16:24:09.826682   65882 out.go:176]     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0119 16:24:09.826783   65882 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0119 16:24:09.826785   65882 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0119 16:24:09.826788   65882 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0119 16:24:09.826850   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:24:09.829865   65882 addons.go:153] Setting addon default-storageclass=true in "minikube"
W0119 16:24:09.829874   65882 addons.go:165] addon default-storageclass should already be in state true
I0119 16:24:09.829887   65882 host.go:66] Checking if "minikube" exists ...
I0119 16:24:09.830218   65882 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0119 16:24:09.841245   65882 api_server.go:71] duration metric: took 245.932541ms to wait for apiserver process to appear ...
I0119 16:24:09.841275   65882 api_server.go:87] waiting for apiserver healthz status ...
I0119 16:24:09.841280   65882 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59325/healthz ...
I0119 16:24:09.849866   65882 api_server.go:266] https://127.0.0.1:59325/healthz returned 200:
ok
I0119 16:24:09.852156   65882 api_server.go:140] control plane version: v1.22.3
I0119 16:24:09.852165   65882 api_server.go:130] duration metric: took 10.887292ms to wait for apiserver health ...
I0119 16:24:09.852169   65882 system_pods.go:43] waiting for kube-system pods to appear ...
I0119 16:24:09.858018   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:24:09.858060   65882 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0119 16:24:09.858066   65882 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0119 16:24:09.858160   65882 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 16:24:09.862511   65882 system_pods.go:59] 4 kube-system pods found
I0119 16:24:09.862531   65882 system_pods.go:61] "etcd-minikube" [023fbe42-33a8-43ad-b7a5-d4cfa29af2e9] Pending
I0119 16:24:09.862533   65882 system_pods.go:61] "kube-apiserver-minikube" [1cce40aa-726b-4df6-aa64-4469a6c474ba] Pending
I0119 16:24:09.862535   65882 system_pods.go:61] "kube-controller-manager-minikube" [6142f2d7-b2fc-4fcd-9e58-58e1d60c1a81] Pending
I0119 16:24:09.862537   65882 system_pods.go:61] "kube-scheduler-minikube" [3f6e0847-2448-45f1-84a5-8ba64033f053] Pending
I0119 16:24:09.862539   65882 system_pods.go:74] duration metric: took 10.368667ms to wait for pod list to return data ...
I0119 16:24:09.862546   65882 kubeadm.go:547] duration metric: took 267.237875ms to wait for : map[apiserver:true system_pods:true] ...
I0119 16:24:09.862553   65882 node_conditions.go:102] verifying NodePressure condition ...
I0119 16:24:09.866209   65882 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0119 16:24:09.866220   65882 node_conditions.go:123] node cpu capacity is 6
I0119 16:24:09.866226   65882 node_conditions.go:105] duration metric: took 3.671459ms to run NodePressure ...
I0119 16:24:09.866231   65882 start.go:234] waiting for startup goroutines ...
I0119 16:24:09.886655   65882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59326 SSHKeyPath:/Users/theuser/.minikube/machines/minikube/id_rsa Username:docker}
I0119 16:24:09.950113   65882 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0119 16:24:09.973648   65882 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0119 16:24:10.168285   65882 out.go:176] 🌟  Enabled addons: storage-provisioner, default-storageclass
I0119 16:24:10.168389   65882 addons.go:417] enableAddons completed in 572.89675ms
I0119 16:24:10.306036   65882 start.go:473] kubectl: 1.23.1, cluster: 1.22.3 (minor skew: 1)
I0119 16:24:10.325217   65882 out.go:176] 🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Operating System: macOS (Default)
Driver: Docker

@spowelljr (Member) commented Jan 19, 2022

Hi @metaswirl, we've had a few mount improvements since our last release. We have a new release coming out later today that will include the changes, but in the meantime feel free to try out the latest binary and let me know if that resolves your issue, thanks!

https://storage.googleapis.com/minikube-builds/master/minikube-darwin-arm64
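
For example, one way to grab and try it (a quick sketch, assuming curl is available; `<host-dir>:<vm-dir>` is a placeholder, so substitute your own mount string):

# download the latest arm64 build and make it executable
curl -LO https://storage.googleapis.com/minikube-builds/master/minikube-darwin-arm64
chmod +x minikube-darwin-arm64

# retry the start-time mount with the new binary
./minikube-darwin-arm64 start --mount --mount-string="<host-dir>:<vm-dir>"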

spowelljr added the area/mount, kind/support, and triage/needs-information labels on Jan 19, 2022
@niklassemmler (Author) commented Jan 20, 2022

Hi @spowelljr,

Thank you for your quick response. Unfortunately, the new binary doesn't solve the problem on my system. I've deleted my existing minikube setup, killed all minikube processes, and created a different folder to avoid collisions. The output is as follows:

❯ ./minikube-darwin-arm64 start --mount-string="/tmp/my-2nd-folder:/data2" --mount
😄  minikube v1.25.0 on Darwin 12.1 (arm64)
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=6, Memory=6144MB) ...
🤦  StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=6144mb --memory-swap=6144mb --cpus=6 -e container=docker --expose 8443 --volume=/tmp/my-2nd-folder:/data2 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: exit status 125
stdout:
c71405b93baee7ea70e6c50df8ff2104d836434a177d836ef9340ba36fa2102a

stderr:
docker: Error response from daemon: error while creating mount source path '/host_mnt/private/tmp/my-2nd-folder': mkdir /host_mnt: file exists.

📌  Noticed you have an activated docker-env on docker driver in this terminal:
❗  Please re-eval your docker-env, To ensure your environment variables have updated ports:

	'minikube -p minikube docker-env'


🤷  docker "minikube" container is missing, will recreate.
  Creating docker container (CPUs=6, Memory=6144MB) ...
😿  Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=6144mb --memory-swap=6144mb --cpus=6 -e container=docker --expose 8443 --volume=/tmp/my-2nd-folder:/data2 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: exit status 125
stdout:
9d036c23c133cb829c18fb9859f7b221003bc837d44d02e4143e0857b5b68d8f

stderr:
docker: Error response from daemon: error while creating mount source path '/host_mnt/private/tmp/my-2nd-folder': mkdir /host_mnt: file exists.


❌  Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=6144mb --memory-swap=6144mb --cpus=6 -e container=docker --expose 8443 --volume=/tmp/my-2nd-folder:/data2 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: exit status 125
stdout:
9d036c23c133cb829c18fb9859f7b221003bc837d44d02e4143e0857b5b68d8f

stderr:
docker: Error response from daemon: error while creating mount source path '/host_mnt/private/tmp/my-2nd-folder': mkdir /host_mnt: file exists.


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Creating a minikube setup without a mount point works.

./minikube-darwin-arm64 start
😄  minikube v1.25.0 on Darwin 12.1 (arm64)
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=6, Memory=6144MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

I've pasted the logs here: https://pastebin.com/60gpiki6

@niklassemmler (Author)

Just saw that the new release v1.25 is already out. I have installed it via brew, but encountered the same problem as described in my previous comment.
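
(For reference, a minimal sketch of that upgrade path, assuming the original install was done through Homebrew's standard formula:)

brew update
brew upgrade minikube    # or `brew install minikube` for a fresh install
minikube version         # should now report v1.25.0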

Mounting with minikube mount still works.

@spowelljr (Member)

Ah, sorry, I should have looked harder at your initial logs. The difference between minikube start --mount and minikube mount comes down to the Docker driver: when you run minikube start with the Docker driver and pass --mount, the mounting is handled by Docker itself, whereas minikube mount uses 9P, which is a completely different mounting process.

That explains why it works one way but not the other: the problem lies somewhere in the Docker mount process. I've done a lot of work on the 9P mounting process but none on the Docker method, so I'd have to dig into how that process works before I could suggest any recommendations.
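
Concretely, the two paths look like this (a sketch with placeholder paths; the flags are the ones used earlier in this issue):

# Docker-driver path: the directory is passed straight through to
# `docker run --volume=<host-dir>:<vm-dir>` when the "minikube" container is
# created, so Docker Desktop performs the bind mount.
minikube start --mount --mount-string="<host-dir>:<vm-dir>"

# 9P path: a 9P file server runs on the host and the directory is mounted
# into the already-running node, independent of Docker's --volume mounts.
minikube mount "<host-dir>:<vm-dir>"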

@spowelljr (Member)

I looked at the log file you put on Pastebin. I'm not sure whether that was a --mount run or not, but you seem to have gotten an error just trying to untar the preload, which is odd.

W0120 15:24:23.809023   96328 cli_runner.go:180] docker run --rm --entrypoint /usr/bin/tar -v /Users/theuser/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I0120 15:24:23.809071   96328 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /Users/theuser/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:
 
stderr:
docker: Error response from daemon: error while creating mount source path '/host_mnt/Users/theuser/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4': mkdir /host_mnt: file exists.

@niklassemmler (Author)

Thanks @spowelljr, it is good to know that these are different. I will use the 9P mount for now, as it works for me.

@afbjorklund (Collaborator)

Hiding the Docker volumes under "mount" was probably a bad idea; it is very confusing to use the same word for both.

klaases closed this as completed on Feb 9, 2022