-
oh, did I miss that? It seems the issue has been addressed already...
-
OK, the issue described below is now fixed:
I found out why the line card was failing to boot: it is a multicore (SMP) line card and requires at least 2 vCPUs to boot.
I edited the vrnetlab.py file and added the following flags: `-cpu host` and, the important one, `-smp 2`.
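In case it helps anyone else hitting the same boot failure, here is a minimal sketch of what the change amounts to. vrnetlab builds the qemu command line as a list of arguments, so the fix is essentially appending two extra flags; the function name and surrounding code here are illustrative, not vrnetlab's actual source:

```python
def build_qemu_cmd(disk_image: str) -> list[str]:
    """Illustrative sketch of assembling a qemu command line (not vrnetlab's real code)."""
    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-drive", f"if=ide,file={disk_image}",
        # The fix: pass the host CPU model through, and give the
        # multicore (SMP) line card the 2 vCPUs it needs to boot.
        "-cpu", "host",
        "-smp", "2",
    ]
    return cmd

print(" ".join(build_qemu_cmd("sros-vm.qcow2")))
```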
Hi Roman,
Nice work on getting support for the recent version of the Nokia VSIMs within vrnetlab.
I have taken the latest copy (your fork) of vrnetlab and, as shown below, have tried to add the Nokia 7250 IXR-e variant.
While the 7250 IXR-e is a fixed form-factor node, the VSIM requires two VMs, i.e. distributed mode of operation.
I have this working on eve-ng using CPM- and IMM-based VMs.
With the changes below the VMs boot under docker, but I get the following message in the CPM VM:
As you can see the card has failed.
I am not sure why this happens, since what I have configured is the same as what I have done in eve-ng.
I can see that launch.py provides the option to set TiMOS-specific options, but where do I pass additional flags to qemu? The reason I ask is that, when I look at my eve-ng setup, I can see the following qemu flag:
-cpu host
as per the following in eve-ng, which works fine.
IXR-e CPM in eve-ng:
-machine type=pc,accel=kvm -enable-kvm -uuid <uuid value> -serial mon:stdio -nographic -nodefaults -rtc base=utc -cpu host -smbios type=1,product="TIMOS:slot=A chassis=IXR-e card=cpm-ixr-e"
IXR-e IOM in eve-ng:
-machine type=pc,accel=kvm -enable-kvm -uuid <uuid value> -serial mon:stdio -nographic -nodefaults -rtc base=utc -cpu host -smbios type=1,product="TIMOS:slot=1 chassis=IXR-e card=imm24-sfp++8-sfp28+2-qsfp28 mda/1=m24-sfp++8-sfp28+2-qsfp28"
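As an aside, the `-smbios type=1,product=...` strings above follow a simple pattern that differs only in slot, card, and the optional MDA. A small hedged sketch of how such strings could be composed; the helper name is hypothetical, not vrnetlab's actual code:

```python
from typing import Optional

def timos_smbios(slot: str, chassis: str, card: str,
                 mda: Optional[str] = None) -> str:
    """Compose a TIMOS smbios 'product' string from slot/chassis/card (+ optional MDA)."""
    s = f"TIMOS:slot={slot} chassis={chassis} card={card}"
    if mda:
        s += f" mda/1={mda}"
    return s

# The CPM and IOM strings from the eve-ng command lines above:
print(timos_smbios("A", "IXR-e", "cpm-ixr-e"))
print(timos_smbios("1", "IXR-e", "imm24-sfp++8-sfp28+2-qsfp28",
                   mda="m24-sfp++8-sfp28+2-qsfp28"))
```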
Can you clarify one thing for me, please: do the VMs run inside the docker container? On my host I see four qemu-system processes after starting the 7250 IXR-e containers with docker, so I am wondering where the VMs are actually running, on the host or within the docker container?
The card fails when booting the IXR-e under vrnetlab.
Here are the changes I made to the launch.py file before recreating the docker image:
choices=["sr-1", "sr-1e", "ixr-e"],