Add ability to create extra disks on qemu2 vms #15887
Conversation
Hi @BlaineEXE. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can one of the admins verify this patch?
Thank you for this contribution @BlaineEXE. Please check the lint, and please also put the output of minikube before/after this PR.
force-pushed from e10357e to 54201c1
Someone else may need to take up this PR. I have realized that my corporate Cisco VPN is preventing minikube with QEMU from pulling images, even with socket_vmnet.
I was able to get it working. Here is output from minikube v1.29.0:
And here is the output from my locally built version, including changes in this PR.
And the 3 disks I added are vd[bcd].
pkg/drivers/qemu/qemu.go
Outdated
	}
	return nil
}
This looks like a modified copy of pkg/drivers/kvm/disks.go:createExtraDisk.
Why not refactor the code to a generic helper?
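For reference, a minimal sketch of what such a shared helper in pkg/drivers/common.go could look like, assuming it keeps the create/truncate approach used in this PR; the name createRawDisk and the exact error handling are illustrative, not code from the PR:

package drivers

import (
	"os"

	"github.com/pkg/errors"
)

// createRawDisk creates a sparse raw disk image of sizeBytes at path.
// If the file already exists it is left alone, so restarting a machine
// reuses its existing extra disks.
func createRawDisk(path string, sizeBytes int64) error {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0644)
	if err != nil {
		if os.IsExist(err) {
			return nil // disk was already created on a previous start
		}
		return errors.Wrap(err, "creating raw disk")
	}
	defer f.Close()

	// Sparse-allocate the requested size instead of writing zeros.
	if err := f.Truncate(sizeBytes); err != nil {
		return errors.Wrap(err, "truncating raw disk")
	}
	return nil
}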
pkg/drivers/qemu/qemu.go
Outdated
	machineDir := filepath.Join(d.StorePath, "machines", d.GetMachineName())
	diskFile := fmt.Sprintf("extra-disk-%d.raw", i)
	return filepath.Join(machineDir, diskFile)
}
There is a helper for this, pkg/drivers/common.go:ExtraDiskPath,
using a different name format ("%s-%d.rawdisk", d.GetMachineName(), diskID).
The helper is used by both the kvm2 and hyperkit drivers.
Is there a reason to use a different name for the extra disk when using qemu2?
This can lead to confusing behavior - starting with kvm2 and then qemu2 will create 2 extra disks with different names, instead of reusing the existing disks.
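For context, that helper produces paths of the form <store>/machines/<machine>/<machine>-<n>.rawdisk. A paraphrased sketch is below; the real signature in pkg/drivers/common.go may differ (this version takes raw strings to keep the sketch self-contained):

package drivers

import (
	"fmt"
	"path/filepath"
)

// extraDiskPath mirrors the naming used by the kvm2 and hyperkit drivers,
// e.g. ~/.minikube/machines/minikube2/minikube2-1.rawdisk.
func extraDiskPath(storePath, machineName string, diskID int) string {
	machineDir := filepath.Join(storePath, "machines", machineName)
	return filepath.Join(machineDir, fmt.Sprintf("%s-%d.rawdisk", machineName, diskID))
}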
pkg/drivers/qemu/qemu.go
Outdated
	if err := file.Truncate(util.ConvertMBToBytes(d.DiskSize)); err != nil {
		return errors.Wrap(err, "truncate")
	}
This works, but running "qemu-img create -f raw name size" is simpler and
does the right thing. For example, it always allocates the first 4k bytes
to allow detection of the logical block size.
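As a rough illustration, shelling out to qemu-img from the driver might look like the sketch below. It assumes qemu-img is available on the host's PATH, which the approach in this PR deliberately avoids requiring; the function name is hypothetical:

package qemu

import (
	"fmt"
	"os/exec"

	"github.com/pkg/errors"
)

// createRawDiskWithQemuImg creates a raw disk image with `qemu-img create -f raw`,
// which, as noted above, also allocates the first 4k so the logical block size
// can be detected.
func createRawDiskWithQemuImg(path string, sizeMB int) error {
	cmd := exec.Command("qemu-img", "create", "-f", "raw", path, fmt.Sprintf("%dM", sizeMB))
	if out, err := cmd.CombinedOutput(); err != nil {
		return errors.Wrapf(err, "qemu-img create: %s", string(out))
	}
	return nil
}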
It seems to me that this comment and the one about making a generic helper for creating a raw disk (here) are mutually exclusive. I'd prefer to create a helper that doesn't rely on qemu tools. Ideally, generating all raw disks the same way will mean there is no behavior discrepancy between drivers.
Nir raised good points. I'll get back to this in a handful of days.
@BlaineEXE I like the comments @nirs made and I agree with them :)
force-pushed from 54201c1 to c56678b
force-pushed from 632ce9e to d7ec25d
@medyagh, I finally got all the kinks worked out and am ready for what I think should be the final review. Thanks for taking a look :)
Looks good to me, see comments for possible improvements later.
for i := 0; i < d.ExtraDisks; i++ {
	// use a higher index for extra disks to reduce ID collision with current or future
	// low-indexed devices (e.g., firmware, ISO CDROM, cloud config, and network device)
	index := i + 10
"reduce ID collision" does not sound very promising. Can we eliminate collisions or
avoid specifying the index, letting qemu handle this?
I don't remember having to specify an index for drives when using multiple disks,
but I usually use libvirt, so maybe libvirt handles this for me.
If the intent is to be able to locate the drive later inside the guest,
it is better to specify the drive serial, which will be available in the
guest via the udev links (e.g. /dev/disk/by-id/virtio-{serial}).
It will also be better to use -device and -blockdev instead of -drive,
which I think is also required to set the serial (the serial is set on
the device, not on the drive). I could not find any docs about converting
old-style -drive options to -device and -blockdev. Probably the best
way to do this right is to check how libvirt does this.
Anyway, I think this can be improved later.
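To make the -blockdev/-device suggestion a bit more concrete, the extra-disk arguments might be assembled roughly as in the sketch below. The option spellings are a best guess at modern QEMU syntax and should be verified against the QEMU documentation or a libvirt-generated command line; the function name and its inputs are purely illustrative:

package qemu

import "fmt"

// extraDiskArgs returns QEMU arguments that attach raw extra disks using
// -blockdev/-device with a serial, instead of -drive with a fixed index.
// Each disk should then appear in the guest as /dev/disk/by-id/virtio-<serial>.
func extraDiskArgs(diskPaths []string) []string {
	var args []string
	for i, path := range diskPaths {
		args = append(args,
			// protocol layer: the raw file on the host
			"-blockdev", fmt.Sprintf("driver=file,node-name=extra-file-%d,filename=%s", i, path),
			// format layer: interpret the file as a raw image
			"-blockdev", fmt.Sprintf("driver=raw,node-name=extra-fmt-%d,file=extra-file-%d", i, i),
			// guest device, with a serial so udev creates a stable by-id link
			"-device", fmt.Sprintf("virtio-blk-pci,drive=extra-fmt-%d,serial=extra-disk-%d", i, i),
		)
	}
	return args
}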
I tried to use bus=2,unit=<id> parameters to use a different bus entirely, but those also collided with other devices like the CDROM drive in my local testing. This seemed like a simple (if fairly blunt) way of preventing that collision for other users and avoiding corner cases as best as possible if the cloud init drive or other options change in the future.
I also think this is not the most robust way of solving this issue. My main concern is: if a minikube cluster is created and the user deletes the minikube config folder without properly deleting minikube... then in the next minikube run, would this collide again?
Could we ensure that minikube delete --all deletes the abandoned disks? Similar to the orphaned disks in the docker driver, we have a cleanup mechanism for them.
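As one possible shape for that cleanup, a delete path could glob for leftover rawdisk files in the machine directory, along the lines of the sketch below. This is purely illustrative, not code from the PR or from the existing docker-driver cleanup:

package qemu

import (
	"fmt"
	"os"
	"path/filepath"
)

// removeOrphanedExtraDisks deletes any leftover <machine>-<n>.rawdisk files in the
// machine directory, so abandoned extra disks do not survive a delete.
func removeOrphanedExtraDisks(storePath, machineName string) error {
	pattern := filepath.Join(storePath, "machines", machineName, fmt.Sprintf("%s-*.rawdisk", machineName))
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return err
	}
	for _, m := range matches {
		if err := os.Remove(m); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}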
Could you try creating two clusters with extra disks and one without extra disks, and see if there is a collision with the extra disks? And after deleting, ensure that there are no disks left over.
I'm still confused about how the config folder being deleted could result in problems. I'll go through the behavior I am seeing from minikube, and you can let me know if I'm missing what "config" folder you are talking about.
I don't have anything in my config folder other than an empty config.json:
❯ cat ~/.minikube/config/config.json
{}
I create minikube clusters from CLI only; example:
out/minikube-darwin-arm64 -p minikube2 start --driver qemu --extra-disks 3
I have 3 minikube environments using -p. The first 2 have 3 extra disks each, and the last has no extra disks.
❯ minikube -p minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
❯ minikube -p minikube2 status
minikube2
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
❯ minikube -p minikube3 status
minikube3
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
The ~/.minikube/machines dir has separate disks for each machine profile.
❯ tree -h ~/.minikube/machines/
[ 224] /Users/blaine/.minikube/machines/
├── [ 416] minikube
│ ├── [328M] boot2docker.iso
│ ├── [3.2K] config.json
│ ├── [827M] disk.qcow2
│ ├── [4.5K] disk.qcow2.raw
│ ├── [1.6K] id_rsa
│ ├── [ 381] id_rsa.pub
│ ├── [ 20G] minikube-0.rawdisk
│ ├── [ 20G] minikube-1.rawdisk
│ ├── [ 20G] minikube-2.rawdisk
│ ├── [ 0] monitor
│ └── [ 6] qemu.pid
├── [ 416] minikube2
│ ├── [328M] boot2docker.iso
│ ├── [3.2K] config.json
│ ├── [804M] disk.qcow2
│ ├── [4.5K] disk.qcow2.raw
│ ├── [1.6K] id_rsa
│ ├── [ 381] id_rsa.pub
│ ├── [ 20G] minikube2-0.rawdisk
│ ├── [ 20G] minikube2-1.rawdisk
│ ├── [ 20G] minikube2-2.rawdisk
│ ├── [ 0] monitor
│ └── [ 6] qemu.pid
├── [ 320] minikube3
│ ├── [328M] boot2docker.iso
│ ├── [3.2K] config.json
│ ├── [ 11M] disk.qcow2
│ ├── [4.5K] disk.qcow2.raw
│ ├── [1.6K] id_rsa
│ ├── [ 381] id_rsa.pub
│ ├── [ 0] monitor
│ └── [ 6] qemu.pid
├── [1.6K] server-key.pem
└── [1.2K] server.pem
4 directories, 32 files
As an example, the machine config for profile minikube2, located in the minikube2 subdir, looks like this:
❯ cat ~/.minikube/machines/minikube2/config.json
{
"ConfigVersion": 3,
"Driver": {
"IPAddress": "192.168.105.13",
"MachineName": "minikube2",
"SSHUser": "docker",
"SSHPort": 22,
"SSHKeyPath": "",
"StorePath": "/Users/blaine/.minikube",
"SwarmMaster": false,
"SwarmHost": "",
"SwarmDiscovery": "",
"EnginePort": 2376,
"FirstQuery": true,
"Memory": 6000,
"DiskSize": 20000,
"CPU": 2,
"Program": "qemu-system-aarch64",
"BIOS": false,
"CPUType": "host",
"MachineType": "virt",
"Firmware": "/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd",
"Display": false,
"DisplayType": "",
"Nographic": false,
"VirtioDrives": false,
"Network": "socket_vmnet",
"PrivateNetwork": "",
"Boot2DockerURL": "file:///Users/blaine/.minikube/cache/iso/arm64/minikube-v1.30.1-1685960108-16634-arm64.iso",
"CaCertPath": "",
"PrivateKeyPath": "",
"DiskPath": "/Users/blaine/.minikube/machines/minikube2/minikube2.img",
"CacheMode": "default",
"IOMode": "threads",
"UserDataFile": "",
"CloudConfigRoot": "",
"LocalPorts": "",
"MACAddress": "4a:7c:ba:dc:1a:ea",
"SocketVMNetPath": "/opt/homebrew/var/run/socket_vmnet",
"SocketVMNetClientPath": "/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client",
"ExtraDisks": 3 #### <--- extra disks
},
"DriverName": "qemu2",
"HostOptions": {
"Driver": "",
"Memory": 0,
"Disk": 0,
"EngineOptions": {
"ArbitraryFlags": null,
"Dns": null,
"GraphDir": "",
"Env": null,
"Ipv6": false,
"InsecureRegistry": [
"10.96.0.0/12"
],
"Labels": null,
"LogLevel": "",
"StorageDriver": "",
"SelinuxEnabled": false,
"TlsVerify": false,
"RegistryMirror": [],
"InstallURL": "https://get.docker.com"
},
"SwarmOptions": {
"IsSwarm": false,
"Address": "",
"Discovery": "",
"Agent": false,
"Master": false,
"Host": "",
"Image": "",
"Strategy": "",
"Heartbeat": 0,
"Overcommit": 0,
"ArbitraryFlags": null,
"ArbitraryJoinFlags": null,
"Env": null,
"IsExperimental": false
},
"AuthOptions": {
"CertDir": "/Users/blaine/.minikube",
"CaCertPath": "/Users/blaine/.minikube/certs/ca.pem",
"CaPrivateKeyPath": "/Users/blaine/.minikube/certs/ca-key.pem",
"CaCertRemotePath": "",
"ServerCertPath": "/Users/blaine/.minikube/machines/server.pem",
"ServerKeyPath": "/Users/blaine/.minikube/machines/server-key.pem",
"ClientKeyPath": "/Users/blaine/.minikube/certs/key.pem",
"ServerCertRemotePath": "",
"ServerKeyRemotePath": "",
"ClientCertPath": "/Users/blaine/.minikube/certs/cert.pem",
"ServerCertSANs": null,
"StorePath": "/Users/blaine/.minikube"
}
},
"Name": "minikube2"
}
And the qemu processes that are running are using the correct disks for all 3 VMs:
❯ ps aux | grep qemu
blaine 82748 36.0 2.7 415963040 913104 ?? R 4:35PM 5:21.61 qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/blaine/.minikube/machines/minikube/boot2docker.iso -qmp unix:/Users/blaine/.minikube/machines/minikube/monitor,server,nowait -pidfile /Users/blaine/.minikube/machines/minikube/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a2:0b:5f:76:3c -netdev socket,id=net0,fd=3 -daemonize -drive file=/Users/blaine/.minikube/machines/minikube/minikube-0.rawdisk,index=10,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube/minikube-1.rawdisk,index=11,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube/minikube-2.rawdisk,index=12,media=disk,format=raw,if=virtio /Users/blaine/.minikube/machines/minikube/disk.qcow2
blaine 84109 43.3 4.8 415686416 1620032 ?? R 4:44PM 0:57.26 qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/blaine/.minikube/machines/minikube2/boot2docker.iso -qmp unix:/Users/blaine/.minikube/machines/minikube2/monitor,server,nowait -pidfile /Users/blaine/.minikube/machines/minikube2/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:7c:ba:dc:1a:ea -netdev socket,id=net0,fd=3 -daemonize -drive file=/Users/blaine/.minikube/machines/minikube2/minikube2-0.rawdisk,index=10,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube2/minikube2-1.rawdisk,index=11,media=disk,format=raw,if=virtio -drive file=/Users/blaine/.minikube/machines/minikube2/minikube2-2.rawdisk,index=12,media=disk,format=raw,if=virtio /Users/blaine/.minikube/machines/minikube2/disk.qcow2
blaine 84626 2.0 5.4 415555568 1803312 ?? S 4:48PM 0:12.60 qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/blaine/.minikube/machines/minikube3/boot2docker.iso -qmp unix:/Users/blaine/.minikube/machines/minikube3/monitor,server,nowait -pidfile /Users/blaine/.minikube/machines/minikube3/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:db:51:c6:b9:1d -netdev socket,id=net0,fd=3 -daemonize /Users/blaine/.minikube/machines/minikube3/disk.qcow2
If I delete the first minikube cluster, all disks are removed:
❯ out/minikube-darwin-arm64 -p minikube delete
🔥 Deleting "minikube" in qemu2 ...
💀 Removed all traces of the "minikube" cluster.
❯ tree -h ~/.minikube/machines
[ 192] /Users/blaine/.minikube/machines
├── [ 416] minikube2
│ ├── [328M] boot2docker.iso
│ ├── [3.2K] config.json
│ ├── [810M] disk.qcow2
│ ├── [4.5K] disk.qcow2.raw
│ ├── [1.6K] id_rsa
│ ├── [ 381] id_rsa.pub
│ ├── [ 20G] minikube2-0.rawdisk
│ ├── [ 20G] minikube2-1.rawdisk
│ ├── [ 20G] minikube2-2.rawdisk
│ ├── [ 0] monitor
│ └── [ 6] qemu.pid
├── [ 320] minikube3
│ ├── [328M] boot2docker.iso
│ ├── [3.2K] config.json
│ ├── [813M] disk.qcow2
│ ├── [4.5K] disk.qcow2.raw
│ ├── [1.6K] id_rsa
│ ├── [ 381] id_rsa.pub
│ ├── [ 0] monitor
│ └── [ 6] qemu.pid
├── [1.6K] server-key.pem
└── [1.2K] server.pem
3 directories, 21 files
I can still ssh to minikube2, lsblk shows vd[b-d] are the extra disks, and partprobe reads the disk successfully.
❯ minikube -p minikube2 ssh ✘ INT
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ hostname
minikube2
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
vda 254:0 0 327.8M 1 disk
vdb 254:16 0 19.5G 0 disk
vdc 254:32 0 19.5G 0 disk
vdd 254:48 0 19.5G 0 disk
vde 254:64 0 19.5G 0 disk
`-vde1 254:65 0 19.5G 0 part /var/lib/minishift
/var/lib/toolbox
/var/lib/minikube
/tmp/hostpath-provisioner
/tmp/hostpath_pv
/data
/var/lib/cni
/var/lib/kubelet
/var/tmp
/var/log
/var/lib/containers
/var/lib/buildkit
/var/lib/containerd
/var/lib/docker
/var/lib/boot2docker
/mnt/vde1
$ sudo partprobe /dev/vdb
minikube delete --all deletes the remaining VMs:
❯ out/minikube-darwin-arm64 delete --all
🔥 Deleting "minikube2" in qemu2 ...
💀 Removed all traces of the "minikube2" cluster.
🔥 Deleting "minikube3" in qemu2 ...
💀 Removed all traces of the "minikube3" cluster.
🔥 Successfully deleted all profiles
❯ tree -h ~/.minikube/machines
[ 128] /Users/blaine/.minikube/machines
├── [1.6K] server-key.pem
└── [1.2K] server.pem
1 directory, 2 files
minikube delete --all --purge deletes the whole ~/.minikube dir.
Does this sufficiently show that disks are handled correctly in the case of multiple differently-configured clusters?
Hi @medyagh. Do you have a few minutes to give what I hope will be a final once-over on this PR?
Hi @medyagh. I got busy with other things for a while and thought I'd check back in on this.
Is there anyone who can take an updated look at this PR? It's going on 2 months waiting on final approval.
@BlaineEXE thank you for your patience with the waiting. Do you mind trying it with the socket_vmnet network as well? And if it does not work for one of them, we should make sure the user is warned that it only works with one network driver.
I tested it. Extra disks are added when network is
Actually, it looks like I was finally able to get socket_vmnet to work, at least this once -- and with a 2-node cluster! I'm not sure what magic made it work.
I also think this is not the most robust way of solving this issue. My main concern is: if a minikube cluster is created and the user deletes the minikube config folder without properly deleting minikube... then in the next minikube run, would this collide again?
Could we ensure that minikube delete --all deletes the abandoned disks? Similar to the orphaned disks in the docker driver, we have a cleanup mechanism for them.
@medyagh that seems like a good concern. I'm not sure exactly what config folder you are asking about. The one I'm most familiar with is the default
/ok-to-test
Add the ability to create and attach extra disks to qemu2 vms. Signed-off-by: Blaine Gardner <[email protected]>
force-pushed from f2e4be4 to 12c4bf5
kvm2 driver with docker runtime
Times for minikube start: 52.2s 50.3s 52.4s 49.4s 51.7s
Times for minikube ingress: 27.1s 27.2s 27.6s 27.2s 24.7s
docker driver with docker runtime
Times for minikube start: 25.0s 25.4s 25.2s 25.3s 25.2s
Times for minikube ingress: 20.9s 20.3s 21.9s 20.4s 20.9s
docker driver with containerd runtime
Times for minikube start: 22.7s 20.4s 23.8s 23.9s 23.0s
Times for minikube ingress: 31.3s 47.3s 31.4s 30.4s 31.3s
These are the flake rates of all failed tests.
To see the flake rates of all tests by environment, click here.
From an offline discussion:
No. The indexes are separate per I also just verified there is no disk overlap. I created 2 clusters with extra disks and wrote random data to the first 2 sectors of |
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: BlaineEXE, spowelljr
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Add the ability to create and attach extra disks to qemu2 vms.