
RYU OpenStack Nova environment VM image file HOWTO

## 1. Introduction

It's not easy to set up an OpenStack Nova environment. This VM image file enables you to easily set up a Nova environment with Ryu on your desktop machine. Several KVM instances run on your machine and work like a multi-node OpenStack environment. We tested this VM image file on Ubuntu 12.04 (you need to install the virtinst package), but it might work fine on other distributions too.

A multi-node environment consists of a single all-in-one node and several compute nodes. The all-in-one node provides all the Nova services (and also works as a compute node). You can add as many compute nodes as you like. This page gives instructions for setting up a multi-node environment consisting of two nodes: one all-in-one node and one compute node.

Compute nodes run Qemu VMs, which are the compute resources that the OpenStack Nova environment provides. In other words, you manage Qemu VMs with ec2-* commands.

## 2. Getting the VM image file

Open the following URL with a browser:

http://sourceforge.net/projects/ryu/files/vmimages/ryuvm.qcow2/download

The download starts automatically (the size is 217.9 MB).

Then copy it to the appropriate places:

$ sudo cp ryuvm.qcow2 /var/lib/libvirt/images/ryu1.qcow2
$ sudo cp ryuvm.qcow2 /var/lib/libvirt/images/ryu2.qcow2
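
Optionally, you can sanity-check the copied images with qemu-img (part of the qemu-utils package; this check is just a convenience, not a required step):

$ sudo qemu-img info /var/lib/libvirt/images/ryu1.qcow2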

## 3. Setting up an all-in-one node VM

Use the 'virt-install' tool:

$ sudo virt-install --name ryu1 --ram 1024 --vcpus 1 --import --os-type linux \
  --os-variant ubuntuprecise --disk path=/var/lib/libvirt/images/ryu1.qcow2 \
  --network network=default --hvm --virt-type kvm --arch x86_64 --nographics

A new VM is created and starts up automatically. You should see a login prompt. When the "Escape character is ^]" message shows up, press the Enter key:

Starting install...
Creating domain...
Connected to domain ryu1
Escape character is ^]    <------------- press [Enter]
Ubuntu 12.04 LTS ryu ttyS0
ryu login:

Log in with the username 'ubuntu' and the password 'ubuntu'.
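
If you detach from the serial console (with ^]) and want to reconnect later, you can use virsh on the host (a standard libvirt command, shown here as a convenience rather than a required step):

$ sudo virsh console ryu1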

## 4. Configuring the all-in-one node VM

First, you need to configure the IP address; change the /etc/network/interfaces file to use a static IP instead of DHCP.

$ sudo vi /etc/network/interfaces

This instruction uses the 192.168.122.0/24 subnet and assigns 192.168.122.10 to the all-in-one node. Modify the /etc/network/interfaces file in the following way:

iface eth0 inet static
     address 192.168.122.10
     netmask 255.255.255.0
     gateway 192.168.122.1
     dns-nameservers 192.168.122.1
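
For reference, a complete /etc/network/interfaces might look like the following. This is only a sketch: the auto and loopback stanzas are assumed to match the Ubuntu defaults already present in the image.

# /etc/network/interfaces (sketch)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
     address 192.168.122.10
     netmask 255.255.255.0
     gateway 192.168.122.1
     dns-nameservers 192.168.122.1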

Second, change the hostname ('ryu' by default). This step is optional.

$ sudo sh -c 'echo ryu1 > /etc/hostname'
$ sudo sed -i 's/ryu/ryu1/g' /etc/hosts

Then reboot the VM.

$ sudo reboot
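
After the reboot, you can confirm that the static address took effect (a quick check, assuming the interface is still named eth0):

$ ip addr show eth0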

## 5. Starting devstack

We use the devstack script (http://devstack.org), a shell script that builds a complete OpenStack development environment.

After the VM starts up, log in again with the username "ubuntu" and the password "ubuntu". You need to modify the ~/devstack/localrc file to set the "SERVICE_HOST" variable:

$ cd devstack
$ vi localrc

You need to set SERVICE_HOST to the IP address of the all-in-one node VM, which is 192.168.122.10 in this instruction. Make sure the localrc file includes the following line:

SERVICE_HOST=192.168.122.10
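
For reference, a minimal localrc for the all-in-one node might look like the following. This is only an illustrative sketch: the localrc bundled in the VM image may already contain other settings, and the password/token values here are placeholders, not values taken from the image.

# ~/devstack/localrc (illustrative sketch)
SERVICE_HOST=192.168.122.10
ADMIN_PASSWORD=password        # placeholder
MYSQL_PASSWORD=password        # placeholder
RABBIT_PASSWORD=password       # placeholder
SERVICE_TOKEN=tokentoken       # placeholder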

Now you are ready to start devstack:

$ ./stack.sh

## 6. Setting up compute node(s)

You can add a compute node in the same way as you did for the all-in-one node, but you need to use a different IP address in the same subnet. For example, you can use the following /etc/network/interfaces file for the second node:

iface eth0 inet static
     address 192.168.122.11
     netmask 255.255.255.0
     gateway 192.168.122.1
     dns-nameservers 192.168.122.1

You also need to modify the ~/devstack/localrc file to set the "SERVICE_HOST" variable. Note that SERVICE_HOST must be set to the IP address of the all-in-one node VM (192.168.122.10 here), not of the compute node. Then start devstack with ./stack.sh as in step 5.
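
To recap, creating and renaming the compute node VM simply mirrors steps 3 and 4, using the second image copy (ryu2.qcow2 from step 2) and the name ryu2:

$ sudo virt-install --name ryu2 --ram 1024 --vcpus 1 --import --os-type linux \
  --os-variant ubuntuprecise --disk path=/var/lib/libvirt/images/ryu2.qcow2 \
  --network network=default --hvm --virt-type kvm --arch x86_64 --nographics

Then, inside the new VM:

$ sudo sh -c 'echo ryu2 > /etc/hostname'
$ sudo sed -i 's/ryu/ryu2/g' /etc/hosts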

## 7. Enjoying the OpenStack Nova environment

After a successful run of stack.sh on the all-in-one node (i.e. step 5), you'll have eucarc and openrc in the devstack directory.

### GUI example

- Run your web browser and connect to http://192.168.122.10 (use the IP address of the all-in-one node here). You'll see the dashboard login screen.
- Log in as the admin user (user: admin, password: admin). Now you can operate the dashboard.
- Press the project tab and select the project name demo or mode.
- Press the "Instances & Volumes" button.
- Press "Launch Instance"; a dialog appears.
- Input a server name and press the "Launch Instance" button (customize the other parameters as you like). A guest VM instance is created in the demo/mode project.
- Wait until the instance boots up.
- Press the instance name to switch to the instance detail screen.
- Press the "VNC" tab to get the login console of the guest VM instance.
- Operate in the guest VM instance; do whatever you want.

### Command line example

Run the following on the all-in-one node. Note that euca2ools is used in this example; of course, nova-client can also be used if you like. Please refer to the devstack and nova documentation for details.

$ cd devstack
$ ls eucarc openrc
eucarc  openrc
$ . ./eucarc admin admin
$ euca-describe-availability-zones
AVAILABILITYZONE        nova    available
$ euca-describe-availability-zones verbose
AVAILABILITYZONE        nova    available
AVAILABILITYZONE        |- ryu-all
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2012-05-16 17:19:18
AVAILABILITYZONE        | |- nova-cert  enabled :-) 2012-05-16 17:19:18
AVAILABILITYZONE        | |- nova-volume        enabled :-) 2012-05-16 17:19:19
AVAILABILITYZONE        | |- nova-scheduler     enabled :-) 2012-05-16 17:19:23
AVAILABILITYZONE        | |- nova-network       enabled :-) 2012-05-16 17:19:23
AVAILABILITYZONE        | |- nova-consoleauth   enabled :-) 2012-05-16 17:19:17
AVAILABILITYZONE        |- ryu-comp-1
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2012-05-16 17:19:21
AVAILABILITYZONE        |- ryu-comp-2
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2012-05-16 17:19:17

NOTE: the results depend on your setup.

$ euca-describe-images
IMAGE   aki-00000001    None (cirros-0.3.0-x86_64-uec-kernel)         available                           public                  kernel                  instance-store
IMAGE   ari-00000002    None (cirros-0.3.0-x86_64-uec-ramdisk)        available       public                  ramdisk                 instance-store
IMAGE   ami-00000003    None (cirros-0.3.0-x86_64-uec)          available      public                   machine aki-00000001    ari-00000002    instance-store

Run a guest VM instance as the "demo" user:

$ . ./eucarc demo demo
$ euca-run-instances -t m1.tiny ami-00000003

Here ami-00000003 is the image ID obtained from euca-describe-images above. Repeat this as you like.

If you'd like to run an instance on a specific compute node, you can use the -z option:

$ euca-run-instances -t m1.tiny -z nova:$NODE ami-00000003
# -z : the zone is obtained by euca-describe-availability-zones
#      the compute node name is obtained by euca-describe-availability-zones verbose
#      (in the above example: ryu-all, ryu-comp-1, ryu-comp-2)

Run an instance as the "mode" user in the same way:

$ . ./eucarc mode mode
$ euca-run-instances -t m1.tiny ami-00000003

Now you have instances in the demo and mode tenants. Operate on them as usual.
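
When you're done, you can list and clean up the instances with euca2ools as well (the instance ID i-00000001 below is just an illustrative placeholder; use the IDs reported by euca-describe-instances):

$ euca-describe-instances
$ euca-terminate-instances i-00000001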