Tutorial 6: Multi-tenant DCN Demonstrator (SDN 2016)


Introduction

The goal of this tutorial is to set up a small RINA-based data centre network capable of supporting isolated computing/storage "slices" for different tenants. The design exploits RINA's recursion capabilities to build a scalable DC architecture using the same building block (the DIF) in two very different scopes: the DC fabric and the tenant layers. This scenario, demonstrated at the SDN World Congress 2016, has been set up with the demonstrator, using an IRATI image with a pristine-1.5 branch snapshot.

[Figure: Tutorial 6 scenario, top view]

The image above shows different details of the DC configuration. The topmost part of the image shows the physical systems: the DC has 4 racks of 8 servers each, with the servers in each rack interconnected via a Top of Rack (ToR) router. Each ToR router is connected to two spine routers, forming the DC fabric. The design can scale up by simply adding more servers per rack, more racks and more spines (organised in hierarchies of spines if the DC grows very large). The middle part of the figure shows the protocol layers in the design:

  • The DC Fabric DIF provides connectivity over the leaf-spine fabric, which can be treated as a single distributed resource allocation domain. It runs multi-path forwarding policies to exploit the link diversity.
  • Multiple VPN DIFs (4 in this tutorial) float on top of the DC fabric DIF, providing isolated computing/storage domains dedicated to customers. Each tenant DIF can be tailored to the customer requirements by plugging in different policies.

The lower part of the image shows the connectivity graph of the IPC Processes in each DIF. The DC Fabric DIF is depicted on the left, while the VPN DIFs are on the right (in this simple tutorial all VPN DIFs have the same connectivity graph).

1. Getting the demonstrator

The demonstrator is a command-line tool (gen.py) which allows the user to easily try and test the IRATI stack in a multi-node scenario. Each node is implemented as a lightweight Virtual Machine (VM), run under the control of the QEMU hypervisor. All the VMs run locally without any risk to your PC, so you don't need a dedicated machine or multiple physical machines.

To run the demonstrator you need a physical Linux machine with support for QEMU and KVM. To obtain it, just clone the repository:

git clone https://github.com/IRATI/demonstrator.git

After that, cd into the demonstrator directory.
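Before generating any scenario it is worth checking that QEMU and KVM are actually usable on your machine. The following commands are a generic Linux sanity check, not part of the demonstrator itself (the loaded module will be kvm_intel or kvm_amd depending on your CPU):

lsmod | grep kvm                 # the KVM kernel modules should be loaded
qemu-system-x86_64 --version     # QEMU should be installed and on the PATH
ls -l /dev/kvm                   # your user needs access to the KVM device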

2. Creating the DC scenario demonstrator configuration file

Enter the examples directory and create a file called dcvpns.conf, with the following contents:

eth 110 100Mbps tor1 spine1
eth 120 100Mbps tor1 spine2
eth 11 25Mbps s11 tor1
eth 12 25Mbps s12 tor1
eth 13 25Mbps s13 tor1
eth 14 25Mbps s14 tor1
eth 15 25Mbps s15 tor1
eth 16 25Mbps s16 tor1
eth 17 25Mbps s17 tor1
eth 18 25Mbps s18 tor1
eth 210 100Mbps tor2 spine1
eth 220 100Mbps tor2 spine2
eth 21 25Mbps s21 tor2
eth 22 25Mbps s22 tor2
eth 23 25Mbps s23 tor2
eth 24 25Mbps s24 tor2
eth 25 25Mbps s25 tor2
eth 26 25Mbps s26 tor2
eth 27 25Mbps s27 tor2
eth 28 25Mbps s28 tor2
eth 310 100Mbps tor3 spine1
eth 320 100Mbps tor3 spine2
eth 31 25Mbps s31 tor3
eth 32 25Mbps s32 tor3
eth 33 25Mbps s33 tor3
eth 34 25Mbps s34 tor3
eth 35 25Mbps s35 tor3
eth 36 25Mbps s36 tor3
eth 37 25Mbps s37 tor3
eth 38 25Mbps s38 tor3
eth 410 100Mbps tor4 spine1
eth 420 100Mbps tor4 spine2
eth 41 25Mbps s41 tor4
eth 42 25Mbps s42 tor4
eth 43 25Mbps s43 tor4
eth 44 25Mbps s44 tor4
eth 45 25Mbps s45 tor4
eth 46 25Mbps s46 tor4
eth 47 25Mbps s47 tor4
eth 48 25Mbps s48 tor4

# DIF dcfabric  
dif dcfabric tor1 110 120
dif dcfabric tor2 210 220
dif dcfabric tor3 310 320
dif dcfabric tor4 410 420
dif dcfabric spine1 110 210 310 410
dif dcfabric spine2 120 220 320 420

# DIF VPN1
dif vpn1 s11 11
dif vpn1 s12 12
dif vpn1 s13 13
dif vpn1 s14 14
dif vpn1 tor1 11 12 13 14 dcfabric
dif vpn1 s21 21
dif vpn1 s22 22
dif vpn1 s23 23
dif vpn1 s24 24
dif vpn1 tor2 21 22 23 24 dcfabric

# DIF VPN2
dif vpn2 s31 31
dif vpn2 s32 32
dif vpn2 s33 33
dif vpn2 s34 34
dif vpn2 tor3 31 32 33 34 dcfabric
dif vpn2 s41 41
dif vpn2 s42 42
dif vpn2 s43 43
dif vpn2 s44 44
dif vpn2 tor4 41 42 43 44 dcfabric

# DIF VPN3
dif vpn3 s15 15
dif vpn3 s16 16
dif vpn3 s17 17
dif vpn3 s18 18
dif vpn3 tor1 15 16 17 18 dcfabric
dif vpn3 s25 25
dif vpn3 s26 26
dif vpn3 s27 27
dif vpn3 s28 28
dif vpn3 tor2 25 26 27 28 dcfabric

# DIF VPN4
dif vpn4 s35 35
dif vpn4 s36 36
dif vpn4 s37 37
dif vpn4 s38 38
dif vpn4 tor3 35 36 37 38 dcfabric
dif vpn4 s45 45
dif vpn4 s46 46
dif vpn4 s47 47
dif vpn4 s48 48
dif vpn4 tor4 45 46 47 48 dcfabric

#Policies
#Multipath FABRIC
policy dcfabric spine1,spine2 rmt.pff multipath
policy dcfabric spine1,spine2 routing link-state routingAlgorithm=ECMPDijkstra
policy dcfabric * rmt cas-ps q_max=1000
policy dcfabric * efcp.*.dtcp cas-ps

#Application to DIF mappings
appmap vpn1 traffic.generator.server 1
appmap vpn1 rina.apps.echotime.server 1
appmap vpn2 traffic.generator.server 1
appmap vpn2 rina.apps.echotime.server 1
appmap vpn3 traffic.generator.server 1
appmap vpn3 rina.apps.echotime.server 1
appmap vpn4 traffic.generator.server 1
appmap vpn4 rina.apps.echotime.server 1

The first part of the file specifies the Ethernet links between the different VMs in the scenario, along with their VLAN IDs and bandwidth restrictions, if any (in this setup we rate-limit links between servers and ToRs to 25 Mbps, and links between ToRs and spines to 100 Mbps). The second part specifies how DIFs are stacked in each machine. The third part specifies non-default policies for each DIF (otherwise the demonstrator uses a default DIF configuration).
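To make the directive syntax concrete, below is a hypothetical minimal fragment in the same format; the node names (hostA, hostB, tor0), VLAN IDs (98, 99) and DIF name (tinyvpn) are illustrative only, inferred from the scenario file above:

# eth <VLAN> <rate> <node1> <node2>: a rate-limited Ethernet link
eth 98 25Mbps hostA tor0
eth 99 25Mbps hostB tor0
# dif <name> <node> <lower-DIF...>: stack DIF <name> on <node> over the lower DIFs
dif tinyvpn hostA 98
dif tinyvpn hostB 99
dif tinyvpn tor0 98 99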

3. Generating the configuration files for each machine and demonstrator scripts

Go back to the main demonstrator folder, and type the following command:

./gen.py -m 1024 -e full-mesh --vhost -f virtio-net-pci -c examples/dcvpns.conf

This will generate a number of configuration files and two scripts: up.sh and down.sh. In this case we are using VMs with 1024 MB of RAM; smaller machines (512 MB, or even 256 MB if you don't plan to run rina-tgen) should also be fine. We are also telling the script to enroll all IPCPs to each other (the full-mesh option) as long as they have connectivity via an N-1 DIF.
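For instance, the 512 MB variant is the same invocation with only the -m value changed:

./gen.py -m 512 -e full-mesh --vhost -f virtio-net-pci -c examples/dcvpns.conf

The text output after running this script should be similar to the following: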

You want to run a lot of nodes, so it's better if I give each node some time to boot (since the boot is CPU-intensive)
I am going to enroll spine1 to DIF dcfabric against neighbor tor2, through lower DIF 210
I am going to enroll spine1 to DIF dcfabric against neighbor tor3, through lower DIF 310
I am going to enroll spine1 to DIF dcfabric against neighbor tor4, through lower DIF 410
I am going to enroll spine1 to DIF dcfabric against neighbor tor1, through lower DIF 110
I am going to enroll spine2 to DIF dcfabric against neighbor tor1, through lower DIF 120
I am going to enroll spine2 to DIF dcfabric against neighbor tor3, through lower DIF 320
I am going to enroll spine2 to DIF dcfabric against neighbor tor4, through lower DIF 420
I am going to enroll spine2 to DIF dcfabric against neighbor tor2, through lower DIF 220
I am going to enroll s38 to DIF vpn4 against neighbor tor3, through lower DIF 38
I am going to enroll s45 to DIF vpn4 against neighbor tor4, through lower DIF 45
I am going to enroll s35 to DIF vpn4 against neighbor tor3, through lower DIF 35
I am going to enroll s37 to DIF vpn4 against neighbor tor3, through lower DIF 37
I am going to enroll s36 to DIF vpn4 against neighbor tor3, through lower DIF 36
I am going to enroll s46 to DIF vpn4 against neighbor tor4, through lower DIF 46
I am going to enroll s47 to DIF vpn4 against neighbor tor4, through lower DIF 47
I am going to enroll tor3 to DIF vpn4 against neighbor tor4, through lower DIF dcfabric
I am going to enroll s48 to DIF vpn4 against neighbor tor4, through lower DIF 48
I am going to enroll s13 to DIF vpn1 against neighbor tor1, through lower DIF 13
I am going to enroll s12 to DIF vpn1 against neighbor tor1, through lower DIF 12
I am going to enroll s11 to DIF vpn1 against neighbor tor1, through lower DIF 11
I am going to enroll s14 to DIF vpn1 against neighbor tor1, through lower DIF 14
I am going to enroll s22 to DIF vpn1 against neighbor tor2, through lower DIF 22
I am going to enroll s23 to DIF vpn1 against neighbor tor2, through lower DIF 23
I am going to enroll s21 to DIF vpn1 against neighbor tor2, through lower DIF 21
I am going to enroll tor1 to DIF vpn1 against neighbor tor2, through lower DIF dcfabric
I am going to enroll s24 to DIF vpn1 against neighbor tor2, through lower DIF 24
I am going to enroll tor3 to DIF vpn2 against neighbor tor4, through lower DIF dcfabric
I am going to enroll s34 to DIF vpn2 against neighbor tor3, through lower DIF 34
I am going to enroll s31 to DIF vpn2 against neighbor tor3, through lower DIF 31
I am going to enroll s33 to DIF vpn2 against neighbor tor3, through lower DIF 33
I am going to enroll s32 to DIF vpn2 against neighbor tor3, through lower DIF 32
I am going to enroll s44 to DIF vpn2 against neighbor tor4, through lower DIF 44
I am going to enroll s41 to DIF vpn2 against neighbor tor4, through lower DIF 41
I am going to enroll s42 to DIF vpn2 against neighbor tor4, through lower DIF 42
I am going to enroll s43 to DIF vpn2 against neighbor tor4, through lower DIF 43
I am going to enroll s18 to DIF vpn3 against neighbor tor1, through lower DIF 18
I am going to enroll s25 to DIF vpn3 against neighbor tor2, through lower DIF 25
I am going to enroll s17 to DIF vpn3 against neighbor tor1, through lower DIF 17
I am going to enroll s16 to DIF vpn3 against neighbor tor1, through lower DIF 16
I am going to enroll s15 to DIF vpn3 against neighbor tor1, through lower DIF 15
I am going to enroll tor1 to DIF vpn3 against neighbor tor2, through lower DIF dcfabric
I am going to enroll s27 to DIF vpn3 against neighbor tor2, through lower DIF 27
I am going to enroll s26 to DIF vpn3 against neighbor tor2, through lower DIF 26
I am going to enroll s28 to DIF vpn3 against neighbor tor2, through lower DIF 28 

If you type ls to list the contents of the demonstrator folder, you should see a significant number of configuration files:

README.md	     normal.s24.vpn1.dif	 normal.tor2.vpn1.dif	   s32.ipcm.conf       shimeth.s24.24.dif         shimeth.spine2.320.dif  shimeth.tor3.34.dif
TODO		     normal.s25.vpn3.dif	 normal.tor2.vpn3.dif	   s33.ipcm.conf       shimeth.s25.25.dif      shimeth.spine2.420.dif  shimeth.tor3.35.dif
access.sh	     normal.s26.vpn3.dif	 normal.tor3.dcfabric.dif  s34.ipcm.conf       shimeth.s26.26.dif      shimeth.tor1.11.dif     shimeth.tor3.36.dif
buildroot	     normal.s27.vpn3.dif	 normal.tor3.vpn2.dif	   s35.ipcm.conf       shimeth.s27.27.dif      shimeth.tor1.110.dif    shimeth.tor3.37.dif
clean.sh	     normal.s28.vpn3.dif	 normal.tor3.vpn4.dif	   s36.ipcm.conf       shimeth.s28.28.dif      shimeth.tor1.12.dif     shimeth.tor3.38.dif
da.map		     normal.s31.vpn2.dif	 normal.tor4.dcfabric.dif  s37.ipcm.conf       shimeth.s31.31.dif      shimeth.tor1.120.dif    shimeth.tor4.41.dif
down.sh		     normal.s32.vpn2.dif	 normal.tor4.vpn2.dif	   s38.ipcm.conf       shimeth.s32.32.dif      shimeth.tor1.13.dif     shimeth.tor4.410.dif
enroll.py	     normal.s33.vpn2.dif	 normal.tor4.vpn4.dif	   s41.ipcm.conf       shimeth.s33.33.dif      shimeth.tor1.14.dif     shimeth.tor4.42.dif
examples	     normal.s34.vpn2.dif	 overlay		   s42.ipcm.conf       shimeth.s34.34.dif      shimeth.tor1.15.dif     shimeth.tor4.420.dif
gen.conf	     normal.s35.vpn4.dif	 overlays		   s43.ipcm.conf       shimeth.s35.35.dif      shimeth.tor1.16.dif     shimeth.tor4.43.dif
gen.env		     normal.s36.vpn4.dif	 s11.ipcm.conf		   s44.ipcm.conf       shimeth.s36.36.dif      shimeth.tor1.17.dif     shimeth.tor4.44.dif
gen.map		     normal.s37.vpn4.dif	 s12.ipcm.conf		   s45.ipcm.conf       shimeth.s37.37.dif      shimeth.tor1.18.dif     shimeth.tor4.45.dif
gen.py		     normal.s38.vpn4.dif	 s13.ipcm.conf		   s46.ipcm.conf       shimeth.s38.38.dif      shimeth.tor2.21.dif     shimeth.tor4.46.dif
gen_templates.py     normal.s41.vpn2.dif	 s14.ipcm.conf		   s47.ipcm.conf       shimeth.s41.41.dif      shimeth.tor2.210.dif    shimeth.tor4.47.dif
gen_templates.pyc    normal.s42.vpn2.dif	 s15.ipcm.conf		   s48.ipcm.conf       shimeth.s42.42.dif      shimeth.tor2.22.dif     shimeth.tor4.48.dif
mac2ifname.c	     normal.s43.vpn2.dif	 s16.ipcm.conf		   scripts	       shimeth.s43.43.dif      shimeth.tor2.220.dif    spine1.ipcm.conf
normal.s11.vpn1.dif  normal.s44.vpn2.dif	 s17.ipcm.conf		   shimeth.s11.11.dif  shimeth.s44.44.dif      shimeth.tor2.23.dif     spine2.ipcm.conf
normal.s12.vpn1.dif  normal.s45.vpn4.dif	 s18.ipcm.conf		   shimeth.s12.12.dif  shimeth.s45.45.dif      shimeth.tor2.24.dif     test.data
normal.s13.vpn1.dif  normal.s46.vpn4.dif	 s21.ipcm.conf		   shimeth.s13.13.dif  shimeth.s46.46.dif      shimeth.tor2.25.dif     tor1.ipcm.conf
normal.s14.vpn1.dif  normal.s47.vpn4.dif	 s22.ipcm.conf		   shimeth.s14.14.dif  shimeth.s47.47.dif      shimeth.tor2.26.dif     tor2.ipcm.conf
normal.s15.vpn3.dif  normal.s48.vpn4.dif	 s23.ipcm.conf		   shimeth.s15.15.dif  shimeth.s48.48.dif      shimeth.tor2.27.dif     tor3.ipcm.conf
normal.s16.vpn3.dif  normal.spine1.dcfabric.dif  s24.ipcm.conf		   shimeth.s16.16.dif  shimeth.spine1.110.dif  shimeth.tor2.28.dif     tor4.ipcm.conf
normal.s17.vpn3.dif  normal.spine2.dcfabric.dif  s25.ipcm.conf		   shimeth.s17.17.dif  shimeth.spine1.210.dif  shimeth.tor3.31.dif     up.sh
normal.s18.vpn3.dif  normal.tor1.dcfabric.dif	 s26.ipcm.conf		   shimeth.s18.18.dif  shimeth.spine1.310.dif  shimeth.tor3.310.dif    update_vm.sh
normal.s21.vpn1.dif  normal.tor1.vpn1.dif	 s27.ipcm.conf		   shimeth.s21.21.dif  shimeth.spine1.410.dif  shimeth.tor3.32.dif
normal.s22.vpn1.dif  normal.tor1.vpn3.dif	 s28.ipcm.conf		   shimeth.s22.22.dif  shimeth.spine2.120.dif  shimeth.tor3.320.dif
normal.s23.vpn1.dif  normal.tor2.dcfabric.dif	 s31.ipcm.conf		   shimeth.s23.23.dif  shimeth.spine2.220.dif  shimeth.tor3.33.dif

4. Running the scenario

Just execute the up.sh script. This will create all the required software bridges and virtual Ethernet interfaces on the host, create the VMs, copy the configuration files to each machine, run RINA and trigger enrollments. Since this scenario is quite large it may take around 5-10 minutes for the up.sh script to finish.
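Since up.sh takes a while and produces a lot of output, it can be convenient to keep a copy of that output for later inspection; tee is an ordinary shell utility, not part of the demonstrator:

./up.sh 2>&1 | tee up.log

Some excerpts of the feedback provided by this script while it is executing: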

Creation of software bridges and virtual Ethernet interfaces ...

...
+ sudo brctl addbr rbr46
+ sudo ip link set rbr46 up
+ sudo brctl addbr rbr47
+ sudo ip link set rbr47 up
+ sudo brctl addbr rbr48
+ sudo ip link set rbr48 up
+ sudo ip tuntap add mode tap name s11.01
+ sudo ip link set s11.01 up
+ sudo brctl addif rbr11 s11.01
+ sudo tc qdisc add dev s11.01 root netem rate 25mbit
+ sudo ip tuntap add mode tap name tor1.01
+ sudo ip link set tor1.01 up
+ sudo brctl addif rbr11 tor1.01
+ sudo tc qdisc add dev tor1.01 root netem rate 25mbit
+ sudo ip tuntap add mode tap name spine1.01
+ sudo ip link set spine1.01 up
+ sudo brctl addif rbr110 spine1.01
+ sudo tc qdisc add dev spine1.01 root netem rate 100mbit
...

Creation of VMs ...

...
+ qemu-system-x86_64 -kernel ../buildroot/output/images/bzImage -append console=ttyS0 -initrd ../buildroot/output/images/rootfs.cpio -nographic -display none --enable-kvm -smp 2 -m 1024M -device virtio-net-pci,mac=00:0a:0a:0a:01:63,netdev=mgmt -netdev user,id=mgmt,hostfwd=tcp::2223-:22 -vga std -pidfile rina-1.pid -device virtio-net-pci,mac=00:0a:0a:0a:01:01,netdev=data1 -netdev tap,ifname=s11.01,id=data1,script=no,downscript=no,vhost=on
+ sleep 12
+ qemu-system-x86_64 -kernel ../buildroot/output/images/bzImage -append console=ttyS0 -initrd ../buildroot/output/images/rootfs.cpio -nographic -display none --enable-kvm -smp 2 -m 1024M -device virtio-net-pci,mac=00:0a:0a:0a:02:63,netdev=mgmt -netdev user,id=mgmt,hostfwd=tcp::2224-:22 -vga std -pidfile rina-2.pid -device virtio-net-pci,mac=00:0a:0a:0a:02:01,netdev=data1 -netdev tap,ifname=s12.01,id=data1,script=no,downscript=no,vhost=on
+ qemu-system-x86_64 -kernel ../buildroot/output/images/bzImage -append console=ttyS0 -initrd ../buildroot/output/images/rootfs.cpio -nographic -display none --enable-kvm -smp 2 -m 1024M -device virtio-net-pci,mac=00:0a:0a:0a:03:63,netdev=mgmt -netdev user,id=mgmt,hostfwd=tcp::2225-:22 -vga std -pidfile rina-3.pid -device virtio-net-pci,mac=00:0a:0a:0a:03:01,netdev=data1 -netdev tap,ifname=s13.01,id=data1,script=no,downscript=no,vhost=on
+ sleep 12
...

Copying files to VMs ...

...
+ scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentityFile=buildroot/irati_rsa -r -P 2223  normal.s11.vpn1.dif shimeth.s11.11.dif da.map s11.ipcm.conf enroll.py root@localhost:
Warning: Permanently added '[localhost]:2223' (ECDSA) to the list of known hosts.
normal.s11.vpn1.dif       100% 6917     6.8KB/s   00:00    
shimeth.s11.11.dif        100%  101     0.1KB/s   00:00    
da.map                    100%  514     0.5KB/s   00:00    
s11.ipcm.conf             100%  935     0.9KB/s   00:00    
enroll.py                 100% 3149     3.1KB/s   00:00    
+ DONE=0
+ '[' 0 '!=' 0 ']'
+ '[' 0 '!=' 0 ']'
+ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentityFile=buildroot/irati_rsa -p 2223 root@localhost
...

Triggering enrollments ...

...
+ SUDO=
+ enroll.py --lower-dif 210 --dif dcfabric.DIF --ipcm-conf /etc/spine1.ipcm.conf --enrollee-name dcfabric.33.IPCP --enroller-name dcfabric.36.IPCP
Looking up identifier for IPCP dcfabric.33.IPCP
["b'Management Agent not started", '', 'Current IPC processes (id | name | type | state | Registered applications |  Port-ids of flows provided)', '    1 | eth.1.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 110 | dcfabric.33.IPCP-1-- | -', '     2 | eth.2.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 210 | dcfabric.33.IPCP-1-- | -', '    3 | eth.3.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 310 | dcfabric.33.IPCP-1-- | -', '    4 | eth.4.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 410 | dcfabric.33.IPCP-1-- | -', '    5 | dcfabric.33.IPCP:1:: | normal-ipc | ASSIGNED TO DIF dcfabric.DIF | - | -', '']
enroll-to-dif 5 dcfabric.DIF 210 dcfabric.36.IPCP 1

["b'DIF enrollment succesfully completed in 5 ms", '']
...

5. Accessing the machines

The demonstrator has a convenient script for ssh'ing into each of the machines, called access.sh and located in the main demonstrator folder. To access any given machine, just call the script passing the machine's name (as specified in the demonstrator configuration file) as the argument. For example, let's access tor1 and query the status of the IPC Processes via the IPC Manager console:

root(0)espriu3[/home/i2cat/edu/demonstrator-dc] ./access.sh tor1
./access.sh: 3: ./access.sh: source: not found
./access.sh: 6: [: tor1: unexpected operator
./access.sh: 12: [: 2257: unexpected operator
Accessing buildroot VM tor1
Warning: Permanently added '[localhost]:2257' (ECDSA) to the list of known hosts.
# socat - UNIX:/var/run/ipcm-console.sock
IPCM >>> list-ipcps
Management Agent not started

Current IPC processes (id | name | type | state | Registered applications | Port-ids of flows provided)
    1 | eth.1.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 11 | vpn1.35.IPCP-1-- | 5
    2 | eth.2.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 110 | dcfabric.35.IPCP-1-- | 1
    3 | eth.3.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 12 | vpn1.35.IPCP-1-- | 4
    4 | eth.4.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 120 | dcfabric.35.IPCP-1-- | 2
    5 | eth.5.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 13 | vpn1.35.IPCP-1-- | 3
    6 | eth.6.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 14 | vpn1.35.IPCP-1-- | 6
    7 | eth.7.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 15 | vpn3.35.IPCP-1-- | 11
    8 | eth.8.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 16 | vpn3.35.IPCP-1-- | 10
    9 | eth.9.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 17 | vpn3.35.IPCP-1-- | 9
    10 | eth.10.IPCP:1:: | shim-eth-vlan | ASSIGNED TO DIF 18 | vpn3.35.IPCP-1-- | 8
    11 | dcfabric.35.IPCP:1:: | normal-ipc | ASSIGNED TO DIF dcfabric.DIF | vpn1.35.IPCP-1--, vpn3.35.IPCP-1-- | 7, 12
    12 | vpn1.35.IPCP:1:: | normal-ipc | ASSIGNED TO DIF vpn1.DIF | - | -
    13 | vpn3.35.IPCP:1:: | normal-ipc | ASSIGNED TO DIF vpn3.DIF | - | -

IPCM >>> exit
# exit
Connection to localhost closed.
root(0)espriu3[/home/i2cat/edu/demonstrator-dc] 

To leave the machine we just need to type the exit command at the machine's command prompt.
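If you prefer not to use access.sh, the ssh invocation used internally by the scripts also works directly. The sketch below assumes the target VM is reachable on forwarded port 2223, as in the "Copying files to VMs" excerpt above; each VM gets its own forwarded port, visible in the up.sh output:

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentityFile=buildroot/irati_rsa -p 2223 root@localhost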

6. Running rina-echo-time and rina-tgen

The image run by the demonstrator VMs ships with the rina-echo-time and rina-tgen test applications. We will start by running rina-echo-time on a couple of servers belonging to VPN1. First we start a rina-echo-time server on the server named s24.

root(0)espriu3[/home/i2cat/edu/demonstrator-dc] ./access.sh s24
./access.sh: 3: ./access.sh: source: not found
./access.sh: 6: [: s24: unexpected operator
./access.sh: 12: [: 2234: unexpected operator
Accessing buildroot VM s24
Warning: Permanently added '[localhost]:2234' (ECDSA) to the list of known hosts.
# rina-echo-time -l
7557(1478869401)#librina.logs (DBG): New log level: INFO
7557(1478869401)#librina.nl-manager (INFO): Netlink socket connected to local port 7557 
7557(1478869401)#rina-echo-time (INFO): Application registered in DIF 

Now we access the server named s13 and run the rina-echo-time application in client mode.

root(0)espriu3[/home/i2cat/edu/demonstrator-dc] ./access.sh s13
./access.sh: 3: ./access.sh: source: not found
./access.sh: 6: [: s13: unexpected operator
./access.sh: 12: [: 2225: unexpected operator
Accessing buildroot VM s13
Warning: Permanently added '[localhost]:2225' (ECDSA) to the list of known hosts.
# rina-echo-time -w 0 -s 1200 -c 10000
8529(1478869584)#librina.logs (DBG): New log level: INFO
8529(1478869584)#librina.nl-manager (INFO): Netlink socket connected to local port 8529 
Flow allocation time = 19.564 ms
SDU size = 1200, seq = 0, RTT = 3.0179 ms
SDU size = 1200, seq = 1, RTT = 2.7679 ms
SDU size = 1200, seq = 2, RTT = 2.6824 ms
...
SDU size = 1200, seq = 9998, RTT = 2.3279 ms
SDU size = 1200, seq = 9999, RTT = 2.326 ms
SDUs sent: 10000; SDUs received: 10000; 0% SDU loss
Minimum RTT: 2.2743 ms; Maximum RTT: 40.977 ms; Average RTT:2.3671 ms; Standard deviation: 0.55132 ms
# 

Now we will run rina-tgen on two servers belonging to VPN4. We run the rina-tgen server on server s47.

root(130)espriu3[/home/i2cat/edu/demonstrator-dc] ./access.sh s47
./access.sh: 3: ./access.sh: source: not found
./access.sh: 6: [: s47: unexpected operator
./access.sh: 12: [: 2253: unexpected operator
Accessing buildroot VM s47
Warning: Permanently added '[localhost]:2253' (ECDSA) to the list of known hosts.
# rina-tgen -l
9465(1478869804)#librina.logs (DBG): New log level: INFO
9465(1478869804)#librina.nl-manager (INFO): Netlink socket connected to local port 9465 

And now we run the rina-tgen client on server s36 (remember that server links are rate-limited to 25 Mbps, which is roughly the goodput reported by rina-tgen).

root(0)espriu3[/home/i2cat/edu/demonstrator-dc] ./access.sh s36
./access.sh: 3: ./access.sh: source: not found
./access.sh: 6: [: s36: unexpected operator
./access.sh: 12: [: 2244: unexpected operator
Accessing buildroot VM s36
Warning: Permanently added '[localhost]:2244' (ECDSA) to the list of known hosts.
# rina-tgen --duration 30 -s 1450
9878(1478869876)#librina.logs (DBG): New log level: INFO
9878(1478869876)#librina.nl-manager (INFO): Netlink socket connected to local port 9878 
9878(1478869876)#traffic-generator (INFO): starting test
9878(1478869906)#traffic-generator (INFO): sent statistics:     63254 SDUs,     91718300 bytes in  29999465 us, 24.4586 Mb/s
# 
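As a quick sanity check, the rate reported by rina-tgen matches the byte count and duration in the log line above: bytes × 8 / microseconds yields megabits per second directly, which we can verify with bc:

echo 'scale=4; 91718300 * 8 / 29999465' | bc    # prints 24.4586, matching the reported Mb/s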

7. Tearing down the scenario

To stop the VMs and destroy any virtual interfaces and software bridges created by up.sh, we just need to execute the down.sh script from the main demonstrator folder. If we also want to clean up all the config files generated for the scenario (to be able to generate files for other scenarios, for example), we just need to call the clean.sh script.
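In short, the teardown sequence is:

./down.sh     # stop the VMs, remove the virtual interfaces and software bridges
./clean.sh    # optional: remove the configuration files generated for the scenario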
