This guide provides step-by-step instructions to install OpenNebula using LXC to create virtual machines (VMs).
After following this guide, you will have a working OpenNebula installation with its graphical interface (Sunstone), at least one host and a running VM.
Throughout the installation there are two separate roles: frontend and nodes. The frontend server will execute the OpenNebula services, and the nodes will be used to execute virtual machines.
We built and tested these drivers with the frontend and the nodes installed on Ubuntu 14.04 (Trusty Tahr) and Debian 8 (Jessie); all four combinations work correctly.
LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.
OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service. OpenNebula is free and open-source software, subject to the requirements of the Apache License version 2.
Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as oneadmin.
## For Ubuntu
#### Add key for repository:
# wget -q -O- http://downloads.opennebula.org/repo/ubuntu/repo.key | apt-key add -
#### Add this line at the end of /etc/apt/sources.list:
deb http://downloads.opennebula.org/repo/4.14/ubuntu/14.04/ stable opennebula
## For Debian
#### Add key for repository:
# wget -q -O- http://downloads.opennebula.org/repo/debian/repo.key | apt-key add -
#### Add this line at the end of /etc/apt/sources.list:
deb http://downloads.opennebula.org/repo/4.14/debian/8/ stable opennebula
#### Issue an update:
# apt-get update
# apt-get install opennebula opennebula-sunstone nfs-kernel-server lxc debootstrap
There are two main processes that must be started: the main OpenNebula daemon, opennebula, and the graphical user interface, opennebula-sunstone.
For security reasons, Sunstone listens only on the loopback interface by default. In case you want to change this behavior, edit /etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
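A minimal way to apply this change from the command line, assuming the :host: line appears exactly as shown in the default /etc/one/sunstone-server.conf:
# sed -i 's/:host: 127.0.0.1/:host: 0.0.0.0/' /etc/one/sunstone-server.conf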
Now restart Sunstone:
# service opennebula-sunstone restart
Skip this section if you are using a single server for both the frontend and worker node roles.
Export /var/lib/one/ and /var/log/one from the frontend to the worker nodes. To do so, add the following at the end of /etc/exports in the frontend:
/var/lib/one/ *(rw,sync,no_subtree_check,no_root_squash,crossmnt,nohide)
/var/log/one/ *(rw,sync,no_subtree_check,no_root_squash,crossmnt,nohide)
#### Refresh NFS exports:
# service nfs-kernel-server restart
OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node. Set oneadmin's public key as an authorized key:
# su - oneadmin
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Add the following snippet to ~/.ssh/config so it doesn’t prompt to add the keys to the known_hosts file:
$ cat << EOT > ~/.ssh/config
Host *
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOT
$ chmod 600 ~/.ssh/config
Copy the lxc folder under vmm to the frontend at /var/lib/one/remotes/vmm. The path to the scripts inside lxc, such as deploy, should end up being /var/lib/one/remotes/vmm/lxc/deploy.
Copy the lxc.d and lxc-probes.d folders under im to the frontend at /var/lib/one/remotes/im. The path to the scripts inside, such as mon_lxc.sh in the lxc.d folder, should be /var/lib/one/remotes/im/lxc.d/mon_lxc.sh.
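A minimal sketch of the copy, assuming the add-on was downloaded to a hypothetical path such as ~/addon-lxcone-master:
# cp -r ~/addon-lxcone-master/vmm/lxc /var/lib/one/remotes/vmm/
# cp -r ~/addon-lxcone-master/im/lxc.d ~/addon-lxcone-master/im/lxc-probes.d /var/lib/one/remotes/im/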
Change user, group and permissions:
# chown -R oneadmin:oneadmin /var/lib/one/remotes/vmm/lxc /var/lib/one/remotes/im/lxc.d /var/lib/one/remotes/im/lxc-probes.d
# chmod -R 755 /var/lib/one/remotes/vmm/lxc /var/lib/one/remotes/im/lxc.d /var/lib/one/remotes/im/lxc-probes.d
Under the Information Driver Configuration section of /etc/one/oned.conf, add this:
#-------------------------------------------------------------------------------
# LXC Information Driver Manager Configuration
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of hosts monitored at the same time
#-------------------------------------------------------------------------------
IM_MAD = [
name = "lxc",
executable = "one_im_ssh",
arguments = "-r 3 -t 15 lxc" ]
#-------------------------------------------------------------------------------
Under the Virtualization Driver Configuration section of the same file, add this:
#-------------------------------------------------------------------------------
# LXC Virtualization Driver Manager Configuration
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of actions performed at the same time
#-------------------------------------------------------------------------------
VM_MAD = [ name = "lxc",
executable = "one_vmm_exec",
arguments = "-t 15 -r 0 lxc",
type = "xml" ]
#-------------------------------------------------------------------------------
An example configuration file is provided with these drivers; you can check it for reference.
Restart the OpenNebula service:
# service opennebula restart
By default, OpenNebula Sunstone doesn't automatically start with the system. To change this, add service opennebula-sunstone start to /etc/rc.local.
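For example, assuming the default rc.local that ends with exit 0, the last lines of the file would be:
service opennebula-sunstone start
exit 0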
## For Ubuntu
#### Add key for OpenNebula repository:
# wget -q -O- http://downloads.opennebula.org/repo/ubuntu/repo.key | apt-key add -
#### Add this line at the end of /etc/apt/sources.list:
deb http://downloads.opennebula.org/repo/4.14/ubuntu/14.04/ stable opennebula
## For Debian
#### Add key for repository:
# wget -q -O- http://downloads.opennebula.org/repo/debian/repo.key | apt-key add -
#### Add this line at the end of /etc/apt/sources.list:
deb http://downloads.opennebula.org/repo/4.14/debian/8/ stable opennebula
#### Issue an update:
# apt-get update
# apt-get install opennebula-node nfs-common bridge-utils xmlstarlet libpam-runtime bc at libvncserver0 libjpeg62 lxc
You can get SVNCterm from the svncterm repository on GitHub and compile it following its straightforward instructions. Alternatively, on Debian-based distributions, you can download the SVNCterm binary package from the GitHub repository and install it using dpkg or another package manager.
# dpkg -i foo/addon-lxcone-master/svncterm_1.2-1_amd64.deb
### 2.3. Network configuration
Bring down your network interface:
# ifdown eth0
Configure the new bridge in /etc/network/interfaces. This is our configuration:
# This file describes the network interfaces available on your system and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp
auto br0
iface br0 inet static
address 10.8.91.88
netmask 255.255.255.0
gateway 10.8.91.1
bridge_ports eth0
bridge_fd 0
bridge_maxwait 0
Bring up the new bridge:
# ifup br0
eth0 was our primary network adapter; if the name is different in your case, remember to change it in the bridge_ports option. On Ubuntu, installing lxc creates a new bridge named lxcbr0; we don't use it, but you can, although it is not configured in /etc/network/interfaces. Replace 10.8.91.88 with your node's IP address.
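To check that the bridge is up and that eth0 is attached to it (bridge-utils was installed earlier), you can run:
# brctl show br0
# ip addr show br0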
# mkdir -p /var/log/one
Add these lines to /etc/fstab:
192.168.1.1:/var/lib/one/ /var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,auto
192.168.1.1:/var/log/one/ /var/log/one/ nfs soft,intr,rsize=8192,wsize=8192,auto
Replace 192.168.1.1 with the frontend's IP address.
Mount the directories:
# mount /var/lib/one
# mount /var/log/one
Now the frontend should be able to SSH into the node without a password using the oneadmin user.
The node will automatically try to mount /var/lib/one every time it starts. This is recommended, especially if you are using shared storage, but an error will occur if the frontend is down when the node boots. If this happens, manually mount /var/lib/one and everything should be fine, or write noauto instead of auto at the end of the /etc/fstab lines added previously.
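For example, the noauto variant of the lines added above would be:
192.168.1.1:/var/lib/one/ /var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto
192.168.1.1:/var/log/one/ /var/log/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto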
Add the following line to /etc/sudoers:
oneadmin ALL= NOPASSWD: ALL
Type this command to allow oneadmin to execute scripts in the container directory:
# chmod +rx /var/lib/lxc
Check if cgroup memory capability is available:
# cat /proc/cgroups | grep memory | awk '{ print $4 }'
A 0 indicates that the capability is not enabled (1 indicates the opposite).
To manage memory on containers, add kernel parameters to GRUB to activate this functionality. In the GRUB_CMDLINE_LINUX entry of /etc/default/grub, add the cgroup_enable=memory and swapaccount=1 parameters:
[...]
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
[...]
Regenerate the GRUB configuration:
# update-grub
Every file system image used by LXC through this driver will require one loop device. Because the default limit for loop devices is 8, this needs to be increased.
Write options loop max_loop=64 to /etc/modprobe.d/local-loop.conf.
Activate the loop module automatically by writing loop at the end of /etc/modules.
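A minimal sketch of both changes from the command line:
# echo "options loop max_loop=64" > /etc/modprobe.d/local-loop.conf
# echo "loop" >> /etc/modules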
Reboot the host to enable the changes from the previous steps.
If you want to create an image datastore as LVM, these steps will be needed.
To use LVM drivers, the system datastore must be shared. This system datastore will hold only the symbolic links to the block devices, so it will not take much space. See the System Datastore Guide for more details.
It is worth noting that running virtual disk images will be created in Volume Groups that are hardcoded to be vg-one-<system_ds_id>. Therefore the nodes must have those Volume Groups pre-created and available for all possible system datastores.
# apt-get install lvm2 clvm
# pvcreate /dev/sdxx
sdxx is the disk or partition that will be used by LVM.
# vgcreate vg-one-SYSTEM_DATASTORE_ID /dev/sdxx
SYSTEM_DATASTORE_ID will be 0 if using the default system datastore.
# apt-get install ceph
For this, just copy ceph.conf and the keyring for the user to /etc/ceph so the node can reach the Ceph cluster.
Add the following line to /etc/environment (the user is one by default):
CEPH_ARGS="--keyring /path/to/keyring/file --id user"
Activate the rbd module automatically by writing rbd at the end of /etc/modules.
Reboot, or load the rbd module with modprobe rbd.
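For example:
# echo "rbd" >> /etc/modules
# modprobe rbd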
# lxc-create -t debian -B loop --fssize=3G -n name
We just created a 3 GB raw image with a Linux container inside. The raw image file will be located at /var/lib/lxc/name/rootdev, where name is the name of the container.
First, be sure to note the root password printed at the end of lxc-create, and write lxc.autodev = 1 at the end of /var/lib/lxc/name/config.
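For example (with name replaced by your container's name):
# echo "lxc.autodev = 1" >> /var/lib/lxc/name/config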
# lxc-start -n name -d
# lxc-console -n name
When logging into an Ubuntu container from a Debian host, it's possible to get stuck for a while at the tty prompt screen; it will work in no more than 2 minutes and you'll be able to log in normally. Logging in the other way around isn't supported, because Trusty doesn't use systemd while Jessie does.
Inside the container, type:
# passwd
Install any software you need inside the container; openssh-server, for example.
Images created by LXC won't mount well, either in a loop device or in an LV if LVM is being used. This happens because the size of images created with LXC is not a multiple of 512 bytes. Right now we are working on this, but a simple workaround is the following:
First, create a new raw image of an appropriate size. This can be done in different ways; one of them is the following command:
# qemu-img create -f raw image_name <SIZE_IN_MB>M
You need to have "qemu" installed for this to work. This package should be already installed because it's a dependency of opennebula-node. You should specify size a little bigger than fssize parameter when creating the image with lxc-create, just in case.
# dd if=/var/lib/lxc/**name**/rootdev of=image_name bs=512 conv=notrunc
Log in at the http://192.168.1.1:9869/ address. Replace 192.168.1.1 with the frontend's IP address.
The credentials are located in the frontend inside /var/lib/one/.one/one_auth. You'll need to be the oneadmin user to be able to read this file.
You can add one using Sunstone under Virtual Resources --> Images --> ADD.
- Name.
- Type. Select OS.
- Image Location. In case the image is on a web server, Provide a Path can be used; just copy the URL.
Upload the image created by LXC, located in /var/lib/lxc/name/rootdev, to OpenNebula.
![Adding an image file](picts/Images.png)
The image file will be located by default in /var/lib/lxc/$name/rootdev. In case LVM is going to be used, remember to upload the image created in step 3.2.
Until now, we have only been using the default datastore created by OpenNebula. Please use this one.
You can add one using Sunstone under Infrastructure --> Hosts --> ADD.
- Type. Select Custom.
- Write the IP address of the host where LXC is installed and configured. You can also write a hostname or DNS name if previously configured.
- Under Drivers
- Virtualization. Select Custom.
- Information. Select Custom.
- Custom VMM_MAD. Write lxc.
- Custom IM_MAD. Write lxc.
![Host configuration example](picts/Host.png)
You can add one using Sunstone under Infrastructure --> Virtual Networks --> ADD.
- Under General:
- Name
- Under Conf:
- Bridge. Write the name of the bridge previously created. br0 in this case.
- Under Addresses:
- IP Start. This will be the first address in the pool.
- Size. Number of IP addresses OpenNebula can assign after IP Start.
- Under Context:
- Add gateway
After this, just click on the Create button.
You can add one using Sunstone under Virtual Resources --> Templates --> ADD.
- Under General:
- Name
- Memory
- CPU
- Under Storage
- Select on Disk 0 the previously loaded OS image from LXC.
- It is possible to add several more LVM and file system disks. These disks will be mounted inside the container under /media/DISK_ID.
- Under Network
- Select none, one or many network interfaces. They will appear configured inside the container.
- Under Input/Output (In case VNC is required)
- Select VNC under graphics.
After this, just click on the Create button.
You can add one using Sunstone under Virtual Resources --> Virtual Machines --> ADD. Select a template and click Create. Then select the virtual machine and click Deploy from the menu.
Every image attached to a container will be automatically mounted inside the container in /media/$DISK_ID. It doesn't matter whether it was hot-attached or attached in the template and then deployed; it will always be mounted in that folder.
To detach an image from a running container, the image needs to be mounted in the same place it was originally mounted (/media/$DISK_ID, as specified before). Also, it can't be in use by any application in the container, or the operation will deliberately fail; you can check for this using lsof.
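For example, inside the container, to check whether anything is still using a hypothetical disk with ID 1 (assuming lsof is installed in the container):
# lsof /media/1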
A NIC's ID will match the eth number inside the container. For example, if OpenNebula shows that the NIC you attached has ID=3, this NIC will be eth3 inside the container. NICs will appear already configured if you add them in the template and then deploy that template. If you hot-attach one, it won't be configured, but it will appear, so you can configure it manually. This happens because OpenNebula doesn't pass that information when hot-attaching a NIC. However, if you hot-attach a NIC and then shut down and start the container again, it will appear configured, with the configuration specified by OpenNebula.
If I configure NICs inside the container using /etc/network/interfaces, will the container use this configuration or the one provided by OpenNebula?
It will use both. In case the configuration from OpenNebula matches the one inside the interfaces file, this will obviously be the configuration that the NIC will get. If not, the NIC will have two different configurations associated with it.
Yes, you can. There are some other actions that need to be done, but they will be executed regardless of whether you issued the shutdown from OpenNebula or from LXC. It might take up to 30 seconds for OpenNebula to change the container's status if you power off the container from LXC.
Sometimes the noVNC client used by OpenNebula somehow breaks the VNC server created on the node. This condition is checked periodically, and when it happens the server gets started again. Refresh your browser tab after a few seconds.
Sergio Vega Gutiérrez ([email protected]) José Manuel de la Fé Herrero ([email protected])