Overview
The Data Plane Development Kit (DPDK) provides high-performance packet processing libraries and user space drivers. Starting with Open vSwitch (OVS) version 2.4 (http://openvswitch.org/releases/NEWS-2.4.0), we have an option to use a DPDK-optimized vHost path in OVS. DPDK support has been available in OVS since version 2.2.
Using DPDK with OVS gives us tremendous performance benefits. Similar to other DPDK-based applications, we see a huge increase in network packet throughput and much lower latencies.
Several performance hot-spot areas inside OVS were also optimized using the DPDK packet processing libraries. For example, the forwarding plane has been optimized to run in the user space as separate threads of the vswitch daemon (vswitchd). Implementation of DPDK-optimized vHost guest interface(s) allows for high-performance VM-to-VM or PHY-VM-PHY type use cases.
In this document, we will show step-by-step how to configure OVS with DPDK for inter-VM application use cases. Specifically, we will create an OVS vSwitch bridge with two DPDK vhost-user ports. Each port will be hooked up to a separate VM. We will then run a simple iperf3 throughput test to determine the performance. We will compare the performance with that of a non-DPDK OVS configuration, so we can see how much improvement OVS with DPDK gives us.
Open vSwitch can be installed via the standard package installers on common Linux* distributions. But because DPDK support is not enabled by default, we need to build Open vSwitch with DPDK before we can proceed.
The detailed steps for installing and using OVS with DPDK can be found at https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md. In this document we will cover the basic steps and specifically the DPDK vhost-user use case.
Requirements for OVS and DPDK
Before compiling DPDK or OVS, make sure you have all the requirements satisfied:
http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#compilation-of-the-dpdk
The development tool packages in standard Linux distributions usually satisfy most of these requirements.
For example, on yum-based (or dnf-based) distributions, you can use the following install command:
yum install "@Development Tools" automake tunctl kernel-tools "@Virtualization Platform""@Virtualization" pciutils hwloc numactl
Also, ensure the QEMU version on the system is v2.2.0 or above, as discussed under “DPDK vhost-user Prerequisites” in https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md.
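A quick way to confirm the installed QEMU version (an optional check; the binary name matches the one used later in this document, but may differ on some distributions) is:
qemu-system-x86_64 --version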
Building the DPDK Target for OVS
To build OVS with DPDK, we need to download the DPDK source code and prepare its target environment. For more detailed information on DPDK usage, please refer to http://www.dpdk.org/doc/guides/linux_gsg/index.html. The following code snippet shows the basic steps:
curl -O http://dpdk.org/browse/dpdk/snapshot/dpdk-2.1.0.tar.gz
tar -xvzf dpdk-2.1.0.tar.gz
cd dpdk-2.1.0
export DPDK_DIR=`pwd`
sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp
make install T=x86_64-ivshmem-linuxapp-gcc
cd x86_64-ivshmem-linuxapp-gcc
EXTRA_CFLAGS="-g -Ofast" make -j10
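As an optional sanity check (not part of the original steps), confirm that the target directory now contains the built DPDK libraries and headers that OVS will link against:
ls $DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/lib
ls $DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/include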
Building OVS with DPDK
With the DPDK target environment built, we now can download the latest OVS sources and build it with DPDK support enabled. The standard documentation for OVS with DPDK build is https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md. Here we will cover the basic steps.
git clone https://github.com/openvswitch/ovs.git
cd ovs
export OVS_DIR=`pwd`
./boot.sh
./configure --with-dpdk="$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/" CFLAGS="-g -Ofast"
make 'CFLAGS=-g -Ofast -march=native' -j10
We now have full OVS built with DPDK support enabled. All the standard OVS utilities can be found under $OVS_DIR/utilities/, and OVS DB under $OVS_DIR/ovsdb/. We will use the utilities under these locations for our next step(s).
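To confirm the build completed and the binaries run directly from the build tree (an optional check, not part of the original steps), query their versions:
$OVS_DIR/vswitchd/ovs-vswitchd --version
$OVS_DIR/utilities/ovs-vsctl --version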
Create OVS DB and Start ovsdb-server
Before we can start the main OVS daemon, ovs-vswitchd, we need to initialize the OVS DB and start ovsdb-server. The following commands show how to clear any old state and create a new OVS DB and ovsdb-server instance.
pkill -9 ovs
rm -rf /usr/local/var/run/openvswitch
rm -rf /usr/local/etc/openvswitch/
rm -f /usr/local/etc/openvswitch/conf.db
mkdir -p /usr/local/etc/openvswitch
mkdir -p /usr/local/var/run/openvswitch
cd $OVS_DIR
./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db ./vswitchd/vswitch.ovsschema
./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
./utilities/ovs-vsctl --no-wait init
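To verify that ovsdb-server started and is reachable on its socket (an optional check, not part of the original steps), you can run:
ps -C ovsdb-server
./utilities/ovs-vsctl show   # should print an empty configuration without errors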
Configuring the Host and NICs for OVS DPDK Usage
DPDK requires the host system to support hugepages, and the NIC(s) need to be enabled with user-space DPDK poll-mode drivers (PMD).
To enable hugepages and use the VFIO user space driver, append the parameters shown below to GRUB_CMDLINE_LINUX in /etc/default/grub, regenerate the GRUB configuration (grub2-mkconfig), and reboot the system:
default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 iommu=pt intel_iommu=on isolcpus=1-13,15-27

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
Depending on the available memory in your system, the number and type of hugepages can be adjusted. The isolcpus parameter allows us to isolate certain CPUs from the Linux scheduler, so DPDK-based applications can “pin” to them.
Once the system is rebooted, check the kernel cmdline and allocated hugepages as shown below.
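A minimal way to verify this (the exact output will vary by system) is to inspect the kernel command line and the hugepage counters:
cat /proc/cmdline
grep -i huge /proc/meminfo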
The next step is to mount the hugepages file system and load the vfio-pci user space driver.
mkdir -p /mnt/huge
mkdir -p /mnt/huge_2mb
mount -t hugetlbfs hugetlbfs /mnt/huge
mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
modprobe vfio-pci
cp $DPDK_DIR/tools/dpdk_nic_bind.py /usr/bin/.
dpdk_nic_bind.py --status
dpdk_nic_bind.py --bind=vfio-pci 05:00.1
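If you want to double-check the host setup at this point (optional; 05:00.1 is just the example PCI address used above), confirm that the vfio-pci module is loaded and the hugepage mounts are in place:
lsmod | grep vfio
mount | grep hugetlbfs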
The following screenshot shows sample output using the commands given above.
If the intended use case is VM-to-VM only and no physical NIC is used, we can skip the NIC vfio-pci steps above.
Starting ovs-vswitchd
We have the OVS DB configured and the host set up for OVS DPDK usage. The next step is to start the main ovs-vswitchd process.
modprobe openvswitch
$OVS_DIR/vswitchd/ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 2048 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
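To confirm the daemon came up and registered with the database (an optional check, not part of the original steps), you can run:
ps -C ovs-vswitchd
$OVS_DIR/utilities/ovs-vsctl show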
Creating a Bridge and DPDK vhost-user Ports for the Inter-VM Use Case
For our sample test case, we will create a bridge and add two DPDK vhost-user ports. Optionally we can add the vfio-pci physical NIC we configured earlier.
$OVS_DIR/utilities/ovs-vsctl show
$OVS_DIR/utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
$OVS_DIR/utilities/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
$OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
$OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
The following screenshot shows the final OVS configuration.
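Each DPDK vhost-user port should now have a UNIX socket under the OVS run directory, which the VMs attach to in the next section. A quick optional check (assuming the default local-build paths used throughout this document):
ls /usr/local/var/run/openvswitch/
# expect vhost-user1 and vhost-user2 sockets alongside db.sock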
Using DPDK vhost-user Ports with VMs
Creating VMs is beyond the scope of this document. Once we have two VMs created (for example, f21vm1.qcow2 and f21vm2.qcow2), the following commands show how to connect them to the DPDK vhost-user ports we created earlier.
qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/f21vm1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc

qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/f21vm2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc
Simple DPDK vhost-user inter-VM Performance with iperf3
Log in to the VMs and configure the NICs with static IP addresses on the same subnet. Install iperf3 and then run a simple network test: start iperf3 in server mode (iperf3 -s) on one VM, and run the iperf3 client against it from the other VM. The following screenshot shows a sample result.
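A minimal sketch of the test, assuming the two guest NICs are given 192.168.1.1 and 192.168.1.2 (the addresses and the interface name eth0 are illustrative):
# on VM1 (server)
ip addr add 192.168.1.1/24 dev eth0
ip link set eth0 up
iperf3 -s

# on VM2 (client)
ip addr add 192.168.1.2/24 dev eth0
ip link set eth0 up
iperf3 -c 192.168.1.1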
Replicating the Performance Test with Standard OVS (No DPDK)
In the previous sections we created and used the OVS-DPDK build in the $OVS_DIR folder itself; we did not install it on the system. To replicate the test case with standard (non-DPDK) OVS, we can simply install it from the standard distribution packages. For example, on yum-based (or dnf-based) systems, we could install it as follows:
pkill -9 ovs
yum install openvswitch
rm -f /etc/openvswitch/conf.db
mkdir -p /var/run/openvswitch
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vsctl --no-wait init
ovs-vswitchd unix:/var/run/openvswitch/db.sock --pidfile --detach
ovs-vsctl add-br br0
ovs-vsctl show
At this point, we have a fresh OVS DB configured and a non-DPDK ovs-vswitchd process started.
To configure two VMs with tap devices on the non-DPDK OVS bridge (br0), refer to the instructions in http://openvswitch.org/support/dist-docs-2.4/INSTALL.KVM.md.txt. Then start the VMs using the same images we used previously, for example:
qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda ~/f21vm1c1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net nic,macaddr=00:11:22:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown

qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda ~/f21vm1c2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net nic,macaddr=00:11:23:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown
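The qemu commands above reference /etc/ovs-ifup and /etc/ovs-ifdown helper scripts. A minimal sketch of these, following the pattern described in the INSTALL.KVM.md document linked above (the exact contents there may differ slightly), is:
# /etc/ovs-ifup: bring up the tap interface and attach it to br0
#!/bin/sh
ip link set $1 up
ovs-vsctl add-port br0 $1

# /etc/ovs-ifdown: detach the tap interface from br0 and bring it down
#!/bin/sh
ovs-vsctl del-port br0 $1
ip link set $1 down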
Repeat the simple iperf3 performance test we did earlier. Below is a sample output; your results may vary depending on your system configuration.
As seen above, we notice a significant performance improvement with OVS DPDK. Both performance tests were run on the same system; the only difference is the use of standard OVS versus OVS with DPDK.
Summary
The Open vSwitch 2.4 release enables DPDK support, bringing tremendous performance benefits. In this article, we showed how to build and use OVS with DPDK. We covered how to configure a simple OVS bridge with DPDK vhost-user ports for an inter-VM application use case. We demonstrated performance improvements using the iperf3 benchmark, comparing OVS with DPDK and without DPDK.
About the Author
Ashok Emani is a Senior Software Engineer at Intel Corporation with over 14 years of work experience spanning Embedded/Systems programming, Storage/IO technologies, Computer architecture, Virtualization and Performance analysis/benchmarking. He currently works on SDN/NFV enabling projects.