add a migration capability to bypass shared memory
When the migration capability 'bypass-shared-memory' is set, the shared memory is not transferred during migration. This is the key building block for several features on top of qemu, such as qemu-local-migration, qemu-live-update, extremely-fast-save-restore, vm-template, vm-fast-live-clone and yet-another-post-copy-migration.

The philosophy behind this capability and the features built on it is that part of the memory management is separated out of qemu, so that other toolkits such as libvirt, runv (https://github.com/hyperhq/runv/) or a future qemu-cmd can directly access the memory, manage it and provide features on top of it.

hyperhq (http://hyper.sh http://hypercontainer.io/) introduced the feature vm-template (vm-fast-live-clone) to the hyper container several months ago, and it works well (see hyperhq/runv#297). With vm-template, a container (VM) can be started in 130 ms and saves 80 MB of memory per container (VM), so hyper containers are as fast and as dense as ordinary containers.

On the current qemu command line, shared memory has to be configured via a memory-backend object. A -mem-path-share option could be added to combine with -mem-path for this feature; that change is not included in this patch.

Advanced features:

1) qemu-local-migration, qemu-live-update

Put the mem-path on tmpfs and set share=on when starting the VM, for example:

  -object \
  memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=on \
  -numa node,nodeid=0,cpus=0-7,memdev=mem

When you want to migrate the VM locally (for example after fixing a security bug in the qemu binary), start a new qemu with the same command line plus -incoming, then migrate the VM from the old qemu to the new one with the migration capability 'bypass-shared-memory' set. The migration transfers the device state *only*; the memory stays the original memory backed by the tmpfs file.
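The local-migration flow in 1) can be sketched as a shell session. This is an illustrative sketch, not part of the patch: the socket paths and the use of nc as a minimal QMP client are assumptions; migrate-set-capabilities and the unix: migration URI are standard QMP/qemu interfaces, and 'bypass-shared-memory' is the capability this patch adds.

```shell
# Start the source VM with tmpfs-backed shared memory (illustrative paths).
qemu-system-x86_64 -m 128M \
  -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=on \
  -numa node,nodeid=0,cpus=0-7,memdev=mem \
  -qmp unix:/tmp/qmp-src.sock,server,nowait &

# Start the destination VM with the SAME command line plus -incoming.
qemu-system-x86_64 -m 128M \
  -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=on \
  -numa node,nodeid=0,cpus=0-7,memdev=mem \
  -qmp unix:/tmp/qmp-dst.sock,server,nowait \
  -incoming unix:/tmp/migrate.sock &

# On the source, enable the capability and migrate; only the device state
# is transferred, the guest memory stays in the shared tmpfs file.
echo '{"execute":"qmp_capabilities"}
{"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"bypass-shared-memory","state":true}]}}
{"execute":"migrate","arguments":{"uri":"unix:/tmp/migrate.sock"}}' | nc -U /tmp/qmp-src.sock
```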
2) extremely-fast-save-restore

The same as 1), except that the mem-path is on a persistent file system.

3) vm-template, vm-fast-live-clone

The template VM is started as in 1) and paused when the guest reaches the template point (for example, when the guest application is ready); the template VM is then saved. (The qemu process of the template can be killed at this point, because only the memory file and the device-state file, both in tmpfs, are needed.) One or more VMs can then be launched from the template state. The new VMs are started without "share=on": they all share the initial memory from the memory file, which saves a lot of memory, and they all start from the template point, so the guest application can get to work quickly.

A VM booted from a template cannot become a template itself; if that special feature is needed, a cloneable-tmpfs kernel module could be written for it.

The libvirt toolkit cannot manage vm-template at the moment; in hyperhq/runv we use a qemu wrapper script for it. I hope someone adds a "libvirt-managed template" feature to libvirt.

4) yet-another-post-copy-migration

This is a possible feature, but no toolkit does it well yet. Using an nbd server/client on the memory file works, but only barely, and is inconvenient. A dedicated tmpfs feature might be needed to fully complete it. Nobody needs yet another post-copy migration method today, but it becomes possible should someone want it.

Signed-off-by: Lai Jiangshan <[email protected]>
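As a footnote, the template/clone flow described in 3) above can be sketched as follows. This is an illustrative sketch under assumptions: the file names and socket paths are invented, nc stands in for a real QMP client, and the exec: migration URI (a standard qemu mechanism) is used to save and restore the device state to and from a file.

```shell
# 1. Start the template VM, memory backed by a shared tmpfs file.
qemu-system-x86_64 -m 128M \
  -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/template-mem,share=on \
  -numa node,nodeid=0,cpus=0-7,memdev=mem \
  -qmp unix:/tmp/qmp-tpl.sock,server,nowait &

# 2. When the guest reaches the template point, pause it and save the
#    device state *only* (bypass-shared-memory set) to a state file.
#    The template qemu process can be killed afterwards.
echo '{"execute":"qmp_capabilities"}
{"execute":"stop"}
{"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"bypass-shared-memory","state":true}]}}
{"execute":"migrate","arguments":{"uri":"exec:cat > /dev/shm/template-state"}}' | nc -U /tmp/qmp-tpl.sock

# 3. Launch a clone: same memory file but WITHOUT share=on (private,
#    copy-on-write mapping), restoring device state via -incoming.
qemu-system-x86_64 -m 128M \
  -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/template-mem,share=off \
  -numa node,nodeid=0,cpus=0-7,memdev=mem \
  -incoming 'exec:cat /dev/shm/template-state' &
```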