
Add ability to create extra disks on qemu2 vms #15887

Merged

Conversation

@BlaineEXE (Contributor)

Add the ability to create and attach extra disks to qemu2 vms.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 18, 2023
@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 18, 2023
@k8s-ci-robot (Contributor)

Hi @BlaineEXE. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Feb 18, 2023
@minikube-bot (Collaborator)

Can one of the admins verify this patch?

@medyagh (Member) left a comment

Thank you for this contribution @BlaineEXE. Please check the lint, and please also put the output of minikube before/after this PR.

@BlaineEXE BlaineEXE force-pushed the qemu2-add-extra-disk-capability branch from e10357e to 54201c1 Compare February 18, 2023 02:53
@BlaineEXE (Contributor, Author)

Someone else may need to take up this PR. I have realized that my corporate Cisco VPN is preventing minikube with QEMU from pulling images, even with socket_vmnet.

@BlaineEXE (Contributor, Author)

I was able to get it working using --network user.

Here is the output from minikube v1.29.0:

minikube start --driver qemu --network user --container-runtime containerd  --embed-certs --cpus 6 --memory "10gb" --extra-disks 3 --disk-size 20gb --insecure-registry "localhost:5000"
😄  minikube v1.29.0 on Darwin 13.2.1 (arm64)
    ▪ KUBECONFIG=/Users/blaine/.kube/config:/Users/blaine/development/openshift-aws/dev-cluster/auth/kubeconfig
✨  Using the qemu2 driver based on user configuration
❗  Specifying extra disks is currently only supported for the following drivers: [hyperkit kvm2]. If you can contribute to add this feature, please create a PR.
❗  You are using the QEMU driver without a dedicated network, which doesn't support `minikube service` & `minikube tunnel` commands.
To try the experimental dedicated network see: https://minikube.sigs.k8s.io/docs/drivers/qemu/#networking
💿  Downloading VM boot image ...
    > minikube-v1.29.0-arm64.iso....:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.29.0-arm64.iso:  323.04 MiB / 323.04 MiB  100.00% 40.45 MiB p
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.26.1 preload ...
    > preloaded-images-k8s-v18-v1...:  358.48 MiB / 358.48 MiB  100.00% 52.31 M
🔥  Creating qemu2 VM (CPUs=6, Memory=10240MB, Disk=20480MB) ...
📦  Preparing Kubernetes v1.26.1 on containerd 1.6.15 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@BlaineEXE (Contributor, Author) commented Feb 20, 2023

And here is the output from my locally built version, including the changes in this PR:

out/minikube-darwin-arm64 start --driver qemu --network user --container-runtime containerd  --embed-certs --cpus 6 --memory "10gb" --extra-disks 3 --disk-size 20gb --insecure-registry "localhost:5000" 
😄  minikube v1.29.0 on Darwin 13.2.1 (arm64)
    ▪ KUBECONFIG=/Users/blaine/.kube/config:/Users/blaine/development/openshift-aws/dev-cluster/auth/kubeconfig
✨  Using the qemu2 driver based on user configuration
❗  You are using the QEMU driver without a dedicated network, which doesn't support `minikube service` & `minikube tunnel` commands.
To try the dedicated network see: https://minikube.sigs.k8s.io/docs/drivers/qemu/#networking
💿  Downloading VM boot image ...
    > minikube-v1.29.0-1676568791...:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.29.0-1676568791...:  328.99 MiB / 328.99 MiB  100.00% 47.03 M
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=6, Memory=10240MB, Disk=20480MB) ...
📦  Preparing Kubernetes v1.26.1 on containerd 1.6.15 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

And the 3 disks I added are vd[bcd]:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda    254:0    0  329M  1 disk 
vdb    254:16   0   20G  0 disk 
vdc    254:32   0   20G  0 disk 
vdd    254:48   0   20G  0 disk 
vde    254:64   0   20G  0 disk 
`-vde1 254:65   0   20G  0 part /var/lib/minishift
                                /var/lib/toolbox
                                /var/lib/minikube
                                /tmp/hostpath-provisioner
                                /tmp/hostpath_pv
                                /data
                                /var/lib/cni
                                /var/lib/kubelet
                                /var/tmp
                                /var/log
                                /var/lib/containers
                                /var/lib/buildkit
                                /var/lib/containerd
                                /var/lib/docker
                                /var/lib/boot2docker
                                /mnt/vde1

@BlaineEXE (Contributor, Author)

@medyagh care to take another look?

I also tested this with Rook successfully. It has been the primary impetus for us to contribute the extra disks feature for hyperkit, kvm, and now qemu.

}
return nil

}
Contributor

This looks like a modified copy of pkg/drivers/kvm/disks.go:createExtraDisk.

Why not refactor the code to a generic helper?
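
A minimal sketch of what such a shared helper could look like (the name createRawDisk, its package, and the error wrapping are illustrative, not the refactor actually made here):

package drivers

import (
	"fmt"
	"os"
)

// createRawDisk creates a sparse raw disk image of sizeMB mebibytes.
// Truncate only sets the logical size; blocks are allocated lazily as
// the guest writes to the disk.
func createRawDisk(path string, sizeMB int) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
	if err != nil {
		return fmt.Errorf("creating raw disk %s: %w", path, err)
	}
	defer f.Close()
	if err := f.Truncate(int64(sizeMB) * 1024 * 1024); err != nil {
		return fmt.Errorf("truncating raw disk %s: %w", path, err)
	}
	return nil
}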

machineDir := filepath.Join(d.StorePath, "machines", d.GetMachineName())
diskFile := fmt.Sprintf("extra-disk-%d.raw", i)
return filepath.Join(machineDir, diskFile)
}
Contributor

There is a helper for this, pkg/drivers/common.go:ExtraDiskPath,
using a different name format ("%s-%d.rawdisk", d.GetMachineName(), diskID).
The helper is used by both the kvm2 and hyperkit drivers.

Is there a reason to use a different name for the extra disk when using qemu2?
This can lead to confusing behavior - starting with kvm2 and then qemu2 will create 2 extra disks with different names, instead of reusing the existing disks.
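
Reconstructed from the snippet above and the name format quoted here, that shared helper is presumably along these lines (a sketch, not the verbatim source; the exact signature may differ):

// ExtraDiskPath returns the path of the extra disk with the given ID for a
// machine, using the shared "<machine-name>-<id>.rawdisk" naming scheme.
func ExtraDiskPath(d *drivers.BaseDriver, diskID int) string {
	machineDir := filepath.Join(d.StorePath, "machines", d.GetMachineName())
	return filepath.Join(machineDir, fmt.Sprintf("%s-%d.rawdisk", d.GetMachineName(), diskID))
}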


if err := file.Truncate(util.ConvertMBToBytes(d.DiskSize)); err != nil {
return errors.Wrap(err, "truncate")
}
Contributor

This works, but running "qemu-img create -f raw name size" is simpler and
does the right thing. For example, it always allocates the first 4k bytes
to allow detection of the logical block size.
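
Shelling out to qemu-img from the driver would look roughly like this (a sketch; assumes qemu-img is on PATH, uses os/exec, and diskPath/sizeMB come from the surrounding driver code):

// qemu-img accepts size suffixes such as M and G.
cmd := exec.Command("qemu-img", "create", "-f", "raw", diskPath, fmt.Sprintf("%dM", sizeMB))
if out, err := cmd.CombinedOutput(); err != nil {
	return fmt.Errorf("qemu-img create %s: %v: %s", diskPath, err, out)
}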

@BlaineEXE (Contributor, Author)

It seems to me that this comment and the one about making a generic helper for creating a raw disk (here) are mutually exclusive. I'd prefer to create a helper that doesn't rely on qemu tools. Ideally, generating all raw disks the same way will mean there is no behavior discrepancy between drivers.

@BlaineEXE (Contributor, Author)

Nir raised good points. I'll get back to this in a handful of days.

@medyagh (Member) commented Feb 22, 2023

@medyagh care to take another look?

I also tested this with Rook successfully. It has been the primary impetus for us to contribute the extra disks feature for hyperkit, kvm, and now qemu.

@BlaineEXE I like the comments @nirs made and I agree with them :)

@BlaineEXE BlaineEXE force-pushed the qemu2-add-extra-disk-capability branch from 54201c1 to c56678b Compare February 24, 2023 16:26
@BlaineEXE BlaineEXE requested review from medyagh and nirs and removed request for prezha and medyagh February 24, 2023 16:42
@BlaineEXE BlaineEXE force-pushed the qemu2-add-extra-disk-capability branch 2 times, most recently from 632ce9e to d7ec25d Compare February 25, 2023 01:31
@BlaineEXE BlaineEXE requested review from medyagh and nirs and removed request for nirs and medyagh February 25, 2023 01:32
@BlaineEXE (Contributor, Author)

@medyagh, I finally got all the kinks worked out, and this is ready for what I think should be the final review. Thanks for taking a look :)

@nirs (Contributor) left a comment

Looks good to me; see comments for possible improvements later.

for i := 0; i < d.ExtraDisks; i++ {
// use a higher index for extra disks to reduce ID collision with current or future
// low-indexed devices (e.g., firmware, ISO CDROM, cloud config, and network device)
index := i + 10
Contributor

"reduce ID collision" does not sound very promising. Can we eliminate collisions or
avoid specifying the index, letting qemu handle this?

I don't remember having to specify an index for drives when using multiple disks,
but I usually use libvirt, so maybe libvirt handles this for me.

If the intent is to be able to locate the drive later inside the guest,
it is better to specify the drive serial, which will be available in the
guest via the udev links (e.g. /dev/disk/by-id/virtio-{serial}).

It would also be better to use -device and -blockdev instead of -drive,
which I think is also required to set the serial (the serial is set on
the device, not on the drive). I could not find any docs about converting
old-style -drive options to -device and -blockdev. Probably the best
way to do this right is to check how libvirt does it.

Anyway, I think this can be improved later.
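
For reference, the -blockdev/-device form described above would look roughly like this on the qemu command line (illustrative values; the node-name and serial are made up, and the exact properties should be checked against the QEMU documentation):

-blockdev driver=raw,node-name=extra0,file.driver=file,file.filename=/path/to/minikube-0.rawdisk
-device virtio-blk-pci,drive=extra0,serial=extra0

With a serial set, the disk should appear in the guest as /dev/disk/by-id/virtio-extra0 regardless of enumeration order.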

@BlaineEXE (Contributor, Author)

I tried to use bus=2,unit=<id> parameters to use a different bus entirely, but those also collided with other devices like the CDROM drive in my local testing. This seemed like a simple (if fairly blunt) way of preventing that collision for other users and avoiding corner cases as best as possible if the cloud-init drive or other options change in the future.

@medyagh (Member) May 16, 2023

I also think this is not the most robust way of solving this issue. My main concern is: if a minikube cluster is created and the user deletes the minikube config folder without properly deleting minikube... then would this collide again in the next minikube run?

Could we ensure that minikube delete --all deletes the abandoned disks? Similar to the orphaned disks in the docker driver, we have a cleanup mechanism for them.

Member

Could you try creating two clusters with extra disks and one without, and see if there is a collision with the extra disks? And after deleting, ensure that there are no disks left over.

@BlaineEXE (Contributor, Author)

I'm still confused about how the config folder being deleted could result in problems. I'll go through the behavior I am seeing from minikube, and you can let me know if I'm missing what "config" folder you are talking about.

I don't have anything in my config folder other than an empty config.json:

❯ cat ~/.minikube/config/config.json
{}

I create minikube clusters from CLI only; example:

out/minikube-darwin-arm64 -p minikube2 start --driver qemu --extra-disks 3

I have 3 minikube environments using -p. The first 2 have 3 extra disks each, and the last has no extra disks.

❯ minikube -p minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

❯ minikube -p minikube2 status
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

❯ minikube -p minikube3 status
minikube3
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

The ~/.minikube/machines dir has separate disks for each machine profile.

 ❯ tree -h ~/.minikube/machines/
[ 224]  /Users/blaine/.minikube/machines/
├── [ 416]  minikube
│   ├── [328M]  boot2docker.iso
│   ├── [3.2K]  config.json
│   ├── [827M]  disk.qcow2
│   ├── [4.5K]  disk.qcow2.raw
│   ├── [1.6K]  id_rsa
│   ├── [ 381]  id_rsa.pub
│   ├── [ 20G]  minikube-0.rawdisk
│   ├── [ 20G]  minikube-1.rawdisk
│   ├── [ 20G]  minikube-2.rawdisk
│   ├── [   0]  monitor
│   └── [   6]  qemu.pid
├── [ 416]  minikube2
│   ├── [328M]  boot2docker.iso
│   ├── [3.2K]  config.json
│   ├── [804M]  disk.qcow2
│   ├── [4.5K]  disk.qcow2.raw
│   ├── [1.6K]  id_rsa
│   ├── [ 381]  id_rsa.pub
│   ├── [ 20G]  minikube2-0.rawdisk
│   ├── [ 20G]  minikube2-1.rawdisk
│   ├── [ 20G]  minikube2-2.rawdisk
│   ├── [   0]  monitor
│   └── [   6]  qemu.pid
├── [ 320]  minikube3
│   ├── [328M]  boot2docker.iso
│   ├── [3.2K]  config.json
│   ├── [ 11M]  disk.qcow2
│   ├── [4.5K]  disk.qcow2.raw
│   ├── [1.6K]  id_rsa
│   ├── [ 381]  id_rsa.pub
│   ├── [   0]  monitor
│   └── [   6]  qemu.pid
├── [1.6K]  server-key.pem
└── [1.2K]  server.pem

4 directories, 32 files

As an example, the machine config for profile minikube2, located in the minikube2 subdir, looks like this:

❯ cat ~/.minikube/machines/minikube2/config.json
{
    "ConfigVersion": 3,
    "Driver": {
        "IPAddress": "192.168.105.13",
        "MachineName": "minikube2",
        "SSHUser": "docker",
        "SSHPort": 22,
        "SSHKeyPath": "",
        "StorePath": "/Users/blaine/.minikube",
        "SwarmMaster": false,
        "SwarmHost": "",
        "SwarmDiscovery": "",
        "EnginePort": 2376,
        "FirstQuery": true,
        "Memory": 6000,
        "DiskSize": 20000,
        "CPU": 2,
        "Program": "qemu-system-aarch64",
        "BIOS": false,
        "CPUType": "host",
        "MachineType": "virt",
        "Firmware": "/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd",
        "Display": false,
        "DisplayType": "",
        "Nographic": false,
        "VirtioDrives": false,
        "Network": "socket_vmnet",
        "PrivateNetwork": "",
        "Boot2DockerURL": "file:///Users/blaine/.minikube/cache/iso/arm64/minikube-v1.30.1-1685960108-16634-arm64.iso",
        "CaCertPath": "",
        "PrivateKeyPath": "",
        "DiskPath": "/Users/blaine/.minikube/machines/minikube2/minikube2.img",
        "CacheMode": "default",
        "IOMode": "threads",
        "UserDataFile": "",
        "CloudConfigRoot": "",
        "LocalPorts": "",
        "MACAddress": "4a:7c:ba:dc:1a:ea",
        "SocketVMNetPath": "/opt/homebrew/var/run/socket_vmnet",
        "SocketVMNetClientPath": "/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client",
        "ExtraDisks": 3      #### <--- extra disks 
    },
    "DriverName": "qemu2",
    "HostOptions": {
        "Driver": "",
        "Memory": 0,
        "Disk": 0,
        "EngineOptions": {
            "ArbitraryFlags": null,
            "Dns": null,
            "GraphDir": "",
            "Env": null,
            "Ipv6": false,
            "InsecureRegistry": [
                "10.96.0.0/12"
            ],
            "Labels": null,
            "LogLevel": "",
            "StorageDriver": "",
            "SelinuxEnabled": false,
            "TlsVerify": false,
            "RegistryMirror": [],
            "InstallURL": "https://get.docker.com"
        },
        "SwarmOptions": {
            "IsSwarm": false,
            "Address": "",
            "Discovery": "",
            "Agent": false,
            "Master": false,
            "Host": "",
            "Image": "",
            "Strategy": "",
            "Heartbeat": 0,
            "Overcommit": 0,
            "ArbitraryFlags": null,
            "ArbitraryJoinFlags": null,
            "Env": null,
            "IsExperimental": false
        },
        "AuthOptions": {
            "CertDir": "/Users/blaine/.minikube",
            "CaCertPath": "/Users/blaine/.minikube/certs/ca.pem",
            "CaPrivateKeyPath": "/Users/blaine/.minikube/certs/ca-key.pem",
            "CaCertRemotePath": "",
            "ServerCertPath": "/Users/blaine/.minikube/machines/server.pem",
            "ServerKeyPath": "/Users/blaine/.minikube/machines/server-key.pem",
            "ClientKeyPath": "/Users/blaine/.minikube/certs/key.pem",
            "ServerCertRemotePath": "",
            "ServerKeyRemotePath": "",
            "ClientCertPath": "/Users/blaine/.minikube/certs/cert.pem",
            "ServerCertSANs": null,
            "StorePath": "/Users/blaine/.minikube"
        }
    },
    "Name": "minikube2"
}

And the running qemu processes are using the correct disks for all 3 VMs:

❯ ps aux | grep qemu
blaine           82748  36.0  2.7 415963040 913104   ??  R     4:35PM   5:21.61 qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/blaine/.minikube/machines/minikube/boot2docker.iso -qmp unix:/Users/blaine/.minikube/machines/minikube/monitor,server,nowait -pidfile /Users/blaine/.minikube/machines/minikube/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a2:0b:5f:76:3c -netdev socket,id=net0,fd=3 -daemonize -drive file=/Users/blaine/.minikube/machines/minikube/minikube-0.rawdisk,index=10,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube/minikube-1.rawdisk,index=11,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube/minikube-2.rawdisk,index=12,media=disk,format=raw,if=virtio /Users/blaine/.minikube/machines/minikube/disk.qcow2

blaine           84109  43.3  4.8 415686416 1620032   ??  R     4:44PM   0:57.26 qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/blaine/.minikube/machines/minikube2/boot2docker.iso -qmp unix:/Users/blaine/.minikube/machines/minikube2/monitor,server,nowait -pidfile /Users/blaine/.minikube/machines/minikube2/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:7c:ba:dc:1a:ea -netdev socket,id=net0,fd=3 -daemonize -drive file=/Users/blaine/.minikube/machines/minikube2/minikube2-0.rawdisk,index=10,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube2/minikube2-1.rawdisk,index=11,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube2/minikube2-2.rawdisk,index=12,media=disk,format=raw,if=virtio /Users/blaine/.minikube/machines/minikube2/disk.qcow2

blaine           84626   2.0  5.4 415555568 1803312   ??  S     4:48PM   0:12.60 qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/blaine/.minikube/machines/minikube3/boot2docker.iso -qmp unix:/Users/blaine/.minikube/machines/minikube3/monitor,server,nowait -pidfile /Users/blaine/.minikube/machines/minikube3/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:db:51:c6:b9:1d -netdev socket,id=net0,fd=3 -daemonize /Users/blaine/.minikube/machines/minikube3/disk.qcow2

If I delete the first minikube cluster, all of its disks are removed:

❯ out/minikube-darwin-arm64 -p minikube delete                 
🔥  Deleting "minikube" in qemu2 ...
💀  Removed all traces of the "minikube" cluster.

❯ tree -h ~/.minikube/machines                
[ 192]  /Users/blaine/.minikube/machines
├── [ 416]  minikube2
│   ├── [328M]  boot2docker.iso
│   ├── [3.2K]  config.json
│   ├── [810M]  disk.qcow2
│   ├── [4.5K]  disk.qcow2.raw
│   ├── [1.6K]  id_rsa
│   ├── [ 381]  id_rsa.pub
│   ├── [ 20G]  minikube2-0.rawdisk
│   ├── [ 20G]  minikube2-1.rawdisk
│   ├── [ 20G]  minikube2-2.rawdisk
│   ├── [   0]  monitor
│   └── [   6]  qemu.pid
├── [ 320]  minikube3
│   ├── [328M]  boot2docker.iso
│   ├── [3.2K]  config.json
│   ├── [813M]  disk.qcow2
│   ├── [4.5K]  disk.qcow2.raw
│   ├── [1.6K]  id_rsa
│   ├── [ 381]  id_rsa.pub
│   ├── [   0]  monitor
│   └── [   6]  qemu.pid
├── [1.6K]  server-key.pem
└── [1.2K]  server.pem

3 directories, 21 files

I can still ssh to minikube2, lsblk shows vd[b-d] are the extra disks, and partprobe reads the disk successfully.

❯ minikube -p minikube2 ssh                                                 ✘ INT
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ hostname
minikube2
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
vda    254:0    0 327.8M  1 disk
vdb    254:16   0  19.5G  0 disk
vdc    254:32   0  19.5G  0 disk
vdd    254:48   0  19.5G  0 disk
vde    254:64   0  19.5G  0 disk
`-vde1 254:65   0  19.5G  0 part /var/lib/minishift
                                 /var/lib/toolbox
                                 /var/lib/minikube
                                 /tmp/hostpath-provisioner
                                 /tmp/hostpath_pv
                                 /data
                                 /var/lib/cni
                                 /var/lib/kubelet
                                 /var/tmp
                                 /var/log
                                 /var/lib/containers
                                 /var/lib/buildkit
                                 /var/lib/containerd
                                 /var/lib/docker
                                 /var/lib/boot2docker
                                 /mnt/vde1
$ sudo partprobe /dev/vdb

minikube delete --all deletes the remaining VMs:

❯ out/minikube-darwin-arm64 delete --all      
🔥  Deleting "minikube2" in qemu2 ...
💀  Removed all traces of the "minikube2" cluster.
🔥  Deleting "minikube3" in qemu2 ...
💀  Removed all traces of the "minikube3" cluster.
🔥  Successfully deleted all profiles

❯ tree -h ~/.minikube/machines          
[ 128]  /Users/blaine/.minikube/machines
├── [1.6K]  server-key.pem
└── [1.2K]  server.pem

1 directory, 2 files

minikube delete --all --purge deletes the whole ~/.minikube dir.

Does this sufficiently show that disks are handled correctly in the case of multiple differently-configured clusters?

@BlaineEXE (Contributor, Author)

Hi @medyagh. Do you have a few minutes to give what I hope will be a final once-over on this PR?

@BlaineEXE (Contributor, Author)

Hi @medyagh. I got busy with other things for a while and thought I'd check back in on this.

@BlaineEXE (Contributor, Author)

/cc @tstromberg @spowelljr

Is there anyone who can take an updated look at this PR? It's going on 2 months waiting on final approval.

@medyagh (Member) commented May 11, 2023

@BlaineEXE thank you for your patience with the waiting. Do you mind trying it with the socket_vmnet network as well, and putting the before/after output in the PR description (for both --network user and --network socket_vmnet)?

And if it does not work for one of them, we should make sure the user is warned that it only works with one network driver.

@BlaineEXE (Contributor, Author) commented May 11, 2023

I tested it. Extra disks are added when the network is socket_vmnet, but socket_vmnet still doesn't give me a working Minikube on my system due to DNS issues. I have a corporate VPN I can't disable.

@BlaineEXE (Contributor, Author)

Actually, it looks like I was finally able to get socket_vmnet to work, at least this once -- and with a 2-node cluster! I'm not sure what magic made it work. lsblk in the nodes shows all 3 extra disks on both nodes for me.

😄  minikube v1.30.1 on Darwin 13.3.1 (arm64)
    ▪ MINIKUBE_EXTRA_DISKS=3
    ▪ MINIKUBE_CPUS=6
    ▪ MINIKUBE_NODES=2
    ▪ MINIKUBE_MEMORY=10gb
    ▪ KUBECONFIG=/Users/blaine/.kube/config:/Users/blaine/development/openshift-aws/kubeconfig
✨  Using the qemu2 driver based on user configuration
🌐  Automatically selected the socket_vmnet network
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.27.1 preload ...
    > preloaded-images-k8s-v18-v1...:  357.57 MiB / 357.57 MiB  100.00% 40.58 M
🔥  Creating qemu2 VM (CPUs=6, Memory=10240MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://registry.k8s.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦  Preparing Kubernetes v1.27.1 on containerd 1.7.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🔎  Verifying Kubernetes components...

👍  Starting worker node minikube-m02 in cluster minikube
🔥  Creating qemu2 VM (CPUs=6, Memory=10240MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.105.12
❗  This VM is having trouble accessing https://registry.k8s.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦  Preparing Kubernetes v1.27.1 on containerd 1.7.0 ...
    ▪ env NO_PROXY=192.168.105.12
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


@BlaineEXE (Contributor, Author) commented May 17, 2023

@medyagh that seems like a good concern. I'm not sure exactly what config folder you are asking about. The one I'm most familiar with is the default $HOME/.minikube one. I checked, and the additional disks (there are 3 on this node I have running now) are created in the $HOME/.minikube/machines/<mach.name> directory. If the concern is what happens if the user purges the ~/.minikube dir, the extra disks will be removed as well. Obviously let me know if there are complexities I'm missing.

❯ ls ~/.minikube/machines/minikube 
boot2docker.iso    disk.qcow2         id_rsa             minikube-0.rawdisk minikube-2.rawdisk qemu.pid
config.json        disk.qcow2.raw     id_rsa.pub         minikube-1.rawdisk monitor

@medyagh (Member) commented Jun 1, 2023

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 1, 2023

Add the ability to create and attach extra disks to qemu2 vms.

Signed-off-by: Blaine Gardner <[email protected]>
@BlaineEXE BlaineEXE force-pushed the qemu2-add-extra-disk-capability branch from f2e4be4 to 12c4bf5 Compare June 6, 2023 22:28
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jun 6, 2023
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15887) |
+----------------+----------+---------------------+
| minikube start | 51.2s    | 51.4s               |
| enable ingress | 26.7s    | 27.0s               |
+----------------+----------+---------------------+

Times for minikube start: 52.2s 50.3s 52.4s 49.4s 51.7s
Times for minikube (PR 15887) start: 51.4s 53.4s 48.7s 52.5s 51.2s

Times for minikube ingress: 27.1s 27.2s 27.6s 27.2s 24.7s
Times for minikube (PR 15887) ingress: 27.1s 24.7s 27.1s 28.1s 28.2s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15887) |
+----------------+----------+---------------------+
| minikube start | 25.2s    | 23.8s               |
| enable ingress | 20.9s    | 21.2s               |
+----------------+----------+---------------------+

Times for minikube start: 25.0s 25.4s 25.2s 25.3s 25.2s
Times for minikube (PR 15887) start: 21.9s 22.5s 24.4s 24.4s 25.9s

Times for minikube ingress: 20.9s 20.3s 21.9s 20.4s 20.9s
Times for minikube (PR 15887) ingress: 20.9s 19.9s 20.9s 21.9s 22.4s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15887) |
+----------------+----------+---------------------+
| minikube start | 22.8s    | 23.1s               |
| enable ingress | 34.4s    | 30.9s               |
+----------------+----------+---------------------+

Times for minikube start: 22.7s 20.4s 23.8s 23.9s 23.0s
Times for minikube (PR 15887) start: 23.4s 23.7s 23.8s 23.5s 21.2s

Times for minikube ingress: 31.3s 47.3s 31.4s 30.4s 31.3s
Times for minikube (PR 15887) ingress: 30.4s 30.3s 31.4s 31.3s 31.3s

@minikube-pr-bot

These are the flake rates of all failed tests.

+-------------+------------------------------------------------------+----------------+
| ENVIRONMENT | FAILED TESTS                                         | FLAKE RATE (%) |
+-------------+------------------------------------------------------+----------------+
| QEMU_macOS  | TestMountStart/serial/VerifyMountPostDelete (gopogh) | n/a            |
| KVM_Linux   | TestFunctional/parallel/DashboardCmd (gopogh)        | 1.36 (chart)   |
| QEMU_macOS  | TestMinikubeProfile (gopogh)                         | 2.99 (chart)   |
+-------------+------------------------------------------------------+----------------+

To see the flake rates of all tests by environment, click here.

@BlaineEXE (Contributor, Author)

From an offline discussion:

For the extra disks PR, my only concern is the duplicate indexes

file=/Users/blaine/.minikube/machines/minikube/minikube-1.rawdisk,index=11
file=/Users/blaine/.minikube/machines/minikube2/minikube2-1.rawdisk,index=11

That doesn't cause any issues? (edited)

No. The indexes are separate per qemu ... process (VM instance) and not global.

I also just verified there is no disk overlap. I created 2 clusters with extra disks and wrote random data to the first 2 sectors of /dev/vdb on minikube. Then I verified that the extra disk on minikube2 didn't have any data present -- it was still zeroed out.
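
For anyone repeating that overlap check, it can be done with something like the following (illustrative commands; the hexdump output of an untouched disk is abbreviated with *):

$ minikube -p minikube ssh -- sudo dd if=/dev/urandom of=/dev/vdb bs=512 count=2
$ minikube -p minikube2 ssh -- sudo hexdump -C -n 1024 /dev/vdb
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000400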

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BlaineEXE, spowelljr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 9, 2023
@spowelljr spowelljr merged commit 2b31e76 into kubernetes:master Jun 9, 2023
@BlaineEXE BlaineEXE deleted the qemu2-add-extra-disk-capability branch June 9, 2023 23:20