QEMU and the virtual Nexus 9k
Live-ramble write-up on setting up a vNexus on QEMU rather than ESXi. Thought it would be handy to have one locally if I wanted to quickly check something on the CLI or an NX-API interaction, without either firing up a real switch or going onto the corp net and into the lab.
First small hurdle - To UEFI or not to UEFI?
The docs still tell you to specify a separate BIOS file, but from what I'm reading now, the UEFI firmware (OVMF) is pulled in as a dependency when installing QEMU, so it should be there anyway.
Checking my packages on this 20.04 build, it's indeed installed already:
matt@dev-lnx-ldn3:~$ sudo apt list --installed | grep ovm
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
ovmf/focal-updates,focal-security,now 0~20191122.bd85bf54-2ubuntu3.3 all [installed,automatic]
The BIOS file is /usr/share/qemu/OVMF.fd.
So I could pass -bios /usr/share/qemu/OVMF.fd on the command line if I wanted to switch it in as I run the machine.
Could file a doc bug if I have some time next week to get the docs updated. Anyway…
Second small hurdle I found was setting up the bridges. The docs aren't so much outdated as incomplete: depending on the OS and versions you're using, there are newer tools out there which aren't reflected in the docs.
Ubuntu 20.04 doesn't have ifconfig installed by default, which is what the docs suggest. We could just install it (it lives in the net-tools package: apt install net-tools), or we could use the 'ip' command, which is preferred now that ifconfig is deprecated, so let's see if we can do that instead.
The equivalent syntax is:
ip link set <dev name> up
So it's:
ip link set mgmt_bridge up
matt@dev-lnx-ldn3:~$
matt@dev-lnx-ldn3:~$ sudo brctl addbr mgmt_bridge
matt@dev-lnx-ldn3:~$ sudo brctl addbr interconnect_br
matt@dev-lnx-ldn3:~$ sudo ifconfig mgmt_bridge up
sudo: ifconfig: command not found
matt@dev-lnx-ldn3:~$ sudo ip link set mgmt_bridge up
matt@dev-lnx-ldn3:~$ sudo ip link set interconnect_br up
matt@dev-lnx-ldn3:~$
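Side note: brctl itself comes from the older bridge-utils package, so for completeness the same two bridges can be created with iproute2 alone. A sketch, using the same bridge names as above (this is host network config, so it needs root):

```shell
# iproute2-only equivalent of the brctl addbr + ifconfig/ip up steps above
sudo ip link add name mgmt_bridge type bridge
sudo ip link add name interconnect_br type bridge
sudo ip link set mgmt_bridge up
sudo ip link set interconnect_br up
```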
The OpenVPN command in the docs is missing a dash on the dev option (it should be --dev); watch out for that. The following commands set up a tunnel device for each interface you want to represent on the switch.
sudo openvpn --mktun --dev tap_sw1_mgmt
matt@dev-lnx-ldn3:~$ sudo openvpn --mktun
Options error: You must define TUN/TAP device (--dev)
Use --help for more information.
matt@dev-lnx-ldn3:~$ sudo openvpn --mktun --dev tap_sw1_mgmt
Sat Dec 10 12:39:00 2022 TUN/TAP device tap_sw1_mgmt opened
Sat Dec 10 12:39:00 2022 Persist state set to: ON
matt@dev-lnx-ldn3:~$ sudo openvpn --mktun --dev tap_sw2_mgmt
Sat Dec 10 12:39:27 2022 TUN/TAP device tap_sw2_mgmt opened
Sat Dec 10 12:39:27 2022 Persist state set to: ON
matt@dev-lnx-ldn3:~$ sudo openvpn --mktun --dev tap_sw1_eth1_1
Sat Dec 10 12:39:39 2022 TUN/TAP device tap_sw1_eth1_1 opened
Sat Dec 10 12:39:39 2022 Persist state set to: ON
matt@dev-lnx-ldn3:~$ sudo openvpn --mktun --dev tap_sw2_eth1_1
Sat Dec 10 12:39:42 2022 TUN/TAP device tap_sw2_eth1_1 opened
Sat Dec 10 12:39:42 2022 Persist state set to: ON
matt@dev-lnx-ldn3:~$
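Another side note: iproute2 can create the tap devices directly too, if you'd rather not pull in OpenVPN just for --mktun. A sketch with the same four tap names:

```shell
# `ip tuntap` is the iproute2 equivalent of `openvpn --mktun --dev <name>`
for t in tap_sw1_mgmt tap_sw2_mgmt tap_sw1_eth1_1 tap_sw2_eth1_1; do
    sudo ip tuntap add dev "$t" mode tap
done
```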
This then attaches the tap interfaces to their bridges:
matt@dev-lnx-ldn3:~$ sudo brctl addif mgmt_bridge tap_sw1_mgmt
matt@dev-lnx-ldn3:~$
matt@dev-lnx-ldn3:~$ sudo brctl addif mgmt_bridge tap_sw2_mgmt
matt@dev-lnx-ldn3:~$
matt@dev-lnx-ldn3:~$ sudo brctl addif interconnect_br tap_sw1_eth1_1
matt@dev-lnx-ldn3:~$
matt@dev-lnx-ldn3:~$ sudo brctl addif interconnect_br tap_sw2_eth1_1
matt@dev-lnx-ldn3:~$
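The brctl addif step also has an iproute2 equivalent, if you want to avoid bridge-utils entirely; setting the bridge as the tap's master does the same thing:

```shell
# Equivalent to `brctl addif <bridge> <tap>`
sudo ip link set tap_sw1_mgmt master mgmt_bridge
sudo ip link set tap_sw2_mgmt master mgmt_bridge
sudo ip link set tap_sw1_eth1_1 master interconnect_br
sudo ip link set tap_sw2_eth1_1 master interconnect_br
```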
Again substituting ip for ifconfig, this time to set the virtual interfaces 'up':
matt@dev-lnx-ldn3:~$ sudo ip link set tap_sw1_mgmt up
matt@dev-lnx-ldn3:~$
matt@dev-lnx-ldn3:~$ sudo ip link set tap_sw2_mgmt up
matt@dev-lnx-ldn3:~$ sudo ip link set tap_sw1_eth1_1 up
matt@dev-lnx-ldn3:~$ sudo ip link set tap_sw2_eth1_1 up
Verify that the interfaces indeed exist and are up:
matt@dev-lnx-ldn3:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 60:45:bd:d2:62:c2 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.7/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::6245:bdff:fed2:62c2/64 scope link
valid_lft forever preferred_lft forever
3: enP20509s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP group default qlen 1000
link/ether 60:45:bd:d2:62:c2 brd ff:ff:ff:ff:ff:ff
altname enP20509p0s2
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:76:c7:d1 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:76:c7:d1 brd ff:ff:ff:ff:ff:ff
6: mgmt_bridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 2e:16:ff:5e:ba:73 brd ff:ff:ff:ff:ff:ff
inet6 fe80::8062:fcff:fe0f:a99f/64 scope link
valid_lft forever preferred_lft forever
7: interconnect_br: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 2a:af:9a:c0:47:4f brd ff:ff:ff:ff:ff:ff
inet6 fe80::5c42:46ff:fed2:c28d/64 scope link
valid_lft forever preferred_lft forever
8: tap_sw1_mgmt: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master mgmt_bridge state DOWN group default qlen 100
link/ether c6:5c:8a:b0:b9:ed brd ff:ff:ff:ff:ff:ff
9: tap_sw2_mgmt: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master mgmt_bridge state DOWN group default qlen 100
link/ether 2e:16:ff:5e:ba:73 brd ff:ff:ff:ff:ff:ff
10: tap_sw1_eth1_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master interconnect_br state DOWN group default qlen 100
link/ether 2a:af:9a:c0:47:4f brd ff:ff:ff:ff:ff:ff
11: tap_sw2_eth1_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master interconnect_br state DOWN group default qlen 100
link/ether 3e:85:1d:33:3a:09 brd ff:ff:ff:ff:ff:ff
And we can verify the bridges exist with the right member ports:
matt@dev-lnx-ldn3:~$ sudo brctl show
bridge name bridge id STP enabled interfaces
interconnect_br 8000.2aaf9ac0474f no tap_sw1_eth1_1
tap_sw2_eth1_1
mgmt_bridge 8000.2e16ff5eba73 no tap_sw1_mgmt
tap_sw2_mgmt
virbr0 8000.52540076c7d1 yes virbr0-nic
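(If brctl isn't installed, the `bridge` tool from iproute2 shows the same port-to-bridge mapping:)

```shell
# Lists each enslaved port along with its `master` bridge
bridge link show
```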
Now, once networking on the box is set up, we can run a QEMU VM. Configure a QEMU Nexus image with 8GB RAM and 4 CPUs, the UEFI BIOS, interfaces on a PCI bridge (one on the management bridge, one on the eth1 interconnect bridge), and a telnet server on localhost:9000 for the virtual serial port. This config got me to a booting state at least:
sudo qemu-system-x86_64 \
-nographic \
-smp 4 \
-m 8192 \
-enable-kvm \
-bios /usr/share/qemu/OVMF.fd \
-device i82801b11-bridge,id=dmi-pci-bridge \
-device pci-bridge,id=bridge-1,chassis_nr=1,bus=dmi-pci-bridge \
-device pci-bridge,id=bridge-2,chassis_nr=2,bus=dmi-pci-bridge \
-device pci-bridge,id=bridge-3,chassis_nr=3,bus=dmi-pci-bridge \
-device pci-bridge,id=bridge-4,chassis_nr=4,bus=dmi-pci-bridge \
-device pci-bridge,id=bridge-5,chassis_nr=5,bus=dmi-pci-bridge \
-device pci-bridge,id=bridge-6,chassis_nr=6,bus=dmi-pci-bridge \
-device pci-bridge,id=bridge-7,chassis_nr=7,bus=dmi-pci-bridge \
-netdev tap,ifname=tap_sw1_mgmt,script=no,downscript=no,id=eth1_1_0 \
-device e1000,bus=bridge-1,addr=1.0,netdev=eth1_1_0,mac=00:b0:b0:01:aa:bb,multifunction=on,romfile= \
-netdev tap,ifname=tap_sw1_eth1_1,script=no,downscript=no,id=eth1_1_1 \
-device e1000,bus=bridge-1,addr=1.1,netdev=eth1_1_1,mac=00:b0:b0:01:01:01,multifunction=on,romfile= \
-device ahci,id=ahci0 -drive file=nexus9300v64.10.2.4.M.qcow2,if=none,id=drive-sata-disk0,format=qcow2 \
-device ide-drive,bus=ahci0.0,drive=drive-sata-disk0,id=drive-sata-disk0 \
-serial telnet:localhost:9000,server,nowait -M q35
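With -serial telnet:localhost:9000,server,nowait in place, the serial console is reached from another terminal on the same host:

```shell
# Attach to the NX-OS virtual serial console exposed by QEMU
telnet localhost 9000
```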
It'll boot, and complain quite a bit:
[ 2.967246] check if this has MMC
[ 2.968702] Register as MTD
mount: overlay mounted on /newroot/usr.
Installing klm_card_index
done
Linking n9k flash devices
creating flash devices BOOT_DEV= sda
Replacing inittab for N9Kv pEoR node - peorhost
INIT: version 2.88 booting
rm: cannot remove '/etc/rc.d/rcS.d/S08prefetch-image': No such file or directory
Setting up sysV scripts for host node
Installing ata_piix module ... done.
Installing kheaders module ... done.
Unsquashing rootfs ...
is_lxc: is_titan_eor:y is_stby: suffix: klm_ftrace: /isanboot/lib/modules/klm_ftrace.o
Installing ftrace in lxc 0 mode done
Not Micron SSD...
Checking all filesystems.cgroups initialized
Sat Dec 10 12:55:38 UTC 2022:: Bad Partition bootflash size 4G too small < min 8G
Trying to mount cdrom...
Checking rpmstore swap in host context.
tune2fs 1.45.6 (20-Mar-2020)
Setting reserved blocks percentage to 0% (0 blocks)
Not Micron SSD...
no existing SSD stats file found!!!
INIT: Entering runlevel: 3
Starting crond: OK
/etc/init.d/rc: line 294: [: ==: unary operator expected
cp: cannot stat '/nxos/tmp/vdc_1_sup1': No such file or directory
Stopping crond: OK
Starting crond: OK
2435491 blocks
99314 blocks
105890 blocks
/isan/vdc_1/virtual-instance/sup1/rootfs
rm: cannot remove '/isan/bin/bios_imgs/*': No such file or directory
rm: cannot remove '/isan/lib/libtahusdcli.so': No such file or directory
mv: cannot stat '/opt': No such file or directory
ENSURE THERE IS ATLEAST 1400000K FREE SPACE IN THE LOGFLASH DIR. PLEASE
REBOOT WITH AN OLDER IMAGE AND USE 'delete logflash:...' TO FREE UP SPACE.
mount: overlay mounted on /usr.
Linking n9k flash devices
creating flash devices BOOT_DEV= sda
mknod: /dev/sda7: File exists
Replacing inittab for N9Kv pEoR node - vsup
INIT: version 2.88 booting
Unsquashing rootfs ...
Total size needed in bootflash is 144060
check bootflash : OK
Total size needed in bootflash is 47548
check bootflash : OK
No TOR virtual chassis
Enabling 8250 serial driver spurious INTs workaround
Installing isan procfs isan_proc_name=isan26... done.
Directory walk started
Directory walk done - 0 packages
Temporary output repo path: /bootflash/.rpmstore/patching/patchrepo/.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished
Installing SSE module with card index 21099 and vslot ID 26 ... done.
Creating SSE device node 243 ... done.
cg controllers already mounted at /var/cgroup
Inserting kernel_services module ... done.
Making kernel_services character devices
is_lxc: is_titan_eor:y is_stby:0 suffix: klm_ftrace: /isanboot/lib/modules/klm_ftrace.o
klm_ftrace already loaded. skip.
Loading I2C driver ... done.
Installing CCTRL driver for card_type 29 in vslot ID 26 on a VM without NEED_GEM ... done.
35.49: Interrupt throttling disabled. No cctrl irq detected.
Loading IGB driver ... done.
Checking all filesystems.Warning: switch is starting up with default configuration
Installing SPROM driver ... 21099 IS_N9K done.
Resetting Backplane ACT2/SPROM...Extracting rpms from image...
SUP IOFPGA Base addr 0xSUP IOFPGA Reset control reg. addr 0x8000084old data: 0 new data: 8
old data: 8 new data: 0
CORTINA-SUPInstalling pfmsvcs module ...done.
Installing nvram module ... done.
Installing if_index module with port mode 6 ... done.
Installing fcfwd
Installing RNI lcnd ... done
Installing LC netdev container_id=0 is_standby=0 ... done
Installing psdev module ... done.
Installing veobc module ... done.
Clean up previous pcap files present in tmp directory
Checking SR card
Card Index is 21099
Inserting eMMC module ...
Inserting mtdphysmap module...[ 39.127932] Error: Could not find mtd devices
Inserting OBFL module ... done.
Making OBFL character devices
mounting plog for N9k!
Sat Dec 10 12:56:02 UTC 2022::Starting Nexus 9000v initial VNIC deposit into vnicBank
Sat Dec 10 12:56:02 UTC 2022::Done with Nexus 9000v initial VNIC deposit into vnicBank
hypervisor is Standard PC (Q35 + ICH9, 2009)
Sat Dec 10 12:56:02 UTC 2022:: Bad Partition bootflash size 4G too small < min 8G
Trying to mount cdrom...
/
Apparently the bootflash disk needs to be bigger than 4GB, but it continues anyway. At this point you need to switch over to the serial console rather than the VGA console. I haven't tried the vEthernets yet, so no idea if they even work with the e1000 driver presented by QEMU. But hey, you should get a CLI at least.
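One thing that might quiet the "bootflash size 4G too small" warning is growing the qcow2 before the first boot. I haven't tested this; resizing the virtual disk doesn't repartition anything by itself, but NX-OS sets up its bootflash at boot, so a bigger disk should give it more to work with:

```shell
# Grow the virtual disk (qemu-img ships with QEMU); 16G is an arbitrary choice
qemu-img resize nexus9300v64.10.2.4.M.qcow2 16G
```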
ALSO, if you set the RAM to exactly 4GB, you get constant complaints that your RAM is under 4GB (presumably some of it is reserved), so maybe set it a touch higher.