Boost your VNF’s performance by 30x without lifting a finger (and 40x with just a little effort)

C.Ashton

As service providers move beyond initial trials of Network Functions Virtualization (NFV) and start planning actual deployments of virtualized applications, the economics of this transition come under increasing scrutiny. After all, why take the risk of deploying new technology unless the Return on Investment (RoI) is both significant and quantifiable?

One of the major goals of NFV is to reduce Operating Expenses (OPEX) and one specific element of the NFV architecture that has a major effect on OPEX is the virtual switch, or vSwitch. As part of the NFV infrastructure platform, the vSwitch is responsible for switching network traffic between the physical world (the core network) and the virtual world (the Virtual Network Functions or VNFs).

Because the vSwitch runs on the same server platform as the VNFs, processor cores that are required for running the vSwitch are not available for running VNFs and this can have a significant effect on the number of subscribers that can be supported on a single server blade. This in turn impacts the overall operational cost-per-subscriber and has a major influence on the OPEX improvements that can be achieved by a move to NFV.
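
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in C. All of the numbers (core counts, subscriber density, blade cost) are hypothetical; the point is simply that every core consumed by the vSwitch comes straight out of the blade's VNF capacity and therefore raises the cost per subscriber.

```c
#include <stdio.h>

int main(void)
{
    /* All figures below are hypothetical, for illustration only. */
    const int total_cores          = 24;      /* cores on the server blade        */
    const int platform_cores       = 2;       /* host OS, management, etc.        */
    const int subscribers_per_core = 10000;   /* subscribers one VNF core serves  */
    const double blade_cost_year   = 20000.0; /* annual cost of running the blade */

    /* Compare a vSwitch that needs 8 cores with one that needs only 2. */
    for (int vswitch_cores = 8; vswitch_cores >= 2; vswitch_cores -= 6) {
        int vnf_cores    = total_cores - platform_cores - vswitch_cores;
        long subscribers = (long)vnf_cores * subscribers_per_core;
        printf("vSwitch cores: %d  VNF cores: %d  subscribers: %ld  cost/sub/year: $%.2f\n",
               vswitch_cores, vnf_cores, subscribers,
               blade_cost_year / subscribers);
    }
    return 0;
}
```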

In this post we’ll explain how an important new feature in the latest version of the Wind River Titanium Cloud virtualization platform enables VNF suppliers to accelerate their packet throughput by up to 30x without changing a single line of code, thanks to the Accelerated vSwitch (AVS) that is an integral part of Titanium Cloud. VNFs based on the Intel® DPDK library can go further still, to 40x, with just a simple recompilation.

Within the Titanium Cloud ecosystem, we have almost 30 partners who provide VNFs. For most of these companies, the performance of their VNF is a key element not only in how they differentiate themselves from their competitors but also in how they help their service provider customers quantify the business advantages of migrating from physical network appliances to virtualized applications.

When we work with a partner that has an existing VNF that they want to run on Titanium Cloud, their first objective is typically to run a functional test and confirm that the application behaves identically on Titanium Cloud and on another virtual switch such as Open vSwitch (OVS). As long as the VNF uses the standard VirtIO Linux driver (and they all do), this is a quick step: AVS is fully compatible with VirtIO, so the existing VNF runs unmodified on Titanium Cloud, with no code changes and no recompilation.
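
As a quick sanity check during that functional test, a VNF can confirm that its interfaces are still served by the stock virtio_net guest driver, whichever vSwitch sits underneath. The sketch below is illustrative only: it uses the standard Linux ETHTOOL_GDRVINFO ioctl, and the interface name eth0 is an assumption.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_drvinfo drv = { .cmd = ETHTOOL_GDRVINFO };
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* interface name is an assumption */
    ifr.ifr_data = (char *)&drv;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("ethtool ioctl");
        return 1;
    }

    /* On an unmodified VNF this reports "virtio_net", whether the host
     * backend is OVS, a vhost-user backend or AVS. */
    printf("driver: %s\n", drv.driver);
    close(fd);
    return 0;
}
```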

That first step results in a VNF that runs fine on Titanium Cloud, but it doesn’t deliver a performance boost. To take advantage of the performance features in AVS, there are a couple of options available to our partners.

As the first option, the latest version of Titanium Cloud (see this post for details) includes full support for vhost-user, the DPDK user-space backend for VirtIO networking. vhost-user reduces virtualization overhead by moving VirtIO packet processing out of the QEMU process and handing it directly to the DPDK-accelerated vSwitch. This results in lower latency and better performance than the kernel VirtIO path. To take advantage of the vhost support in Titanium Cloud, the VNF supplier only needs to make sure that their VNF is running on the latest version of Titanium Cloud; no changes are required to the VNF itself.
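
For context, the sketch below shows roughly what the host side of a vhost-user backend looks like when built with DPDK's librte_vhost (a recent DPDK API is assumed; AVS's own implementation is not public, so this is purely illustrative). The key point for VNF suppliers is that all of this lives in the vSwitch, not in the VNF.

```c
#include <rte_mbuf.h>
#include <rte_vhost.h>

/* Illustrative only: register a vhost-user socket that a guest's VirtIO
 * interface can connect to. In a real switch, device-ready callbacks are
 * installed with rte_vhost_driver_callback_register() before starting. */
int setup_vhost_port(const char *sock_path)
{
    if (rte_vhost_driver_register(sock_path, 0) != 0)
        return -1;
    return rte_vhost_driver_start(sock_path);
}

/* Once a guest has connected (and the switch has been handed its device
 * id, vid), packets are pulled straight from the guest's virtqueue: */
uint16_t pull_from_guest(int vid, struct rte_mempool *pool,
                         struct rte_mbuf **pkts, uint16_t n)
{
    /* virtqueue 1 is the guest's TX queue as seen from the host */
    return rte_vhost_dequeue_burst(vid, 1, pool, pkts, n);
}
```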

When running VNFs on a platform based on OVS, enabling a vhost back end in the host will typically deliver a performance improvement of up to 15x compared to a baseline VirtIO kernel implementation. That’s a nice boost, but using vhost with a VNF running on Titanium Cloud typically doubles that figure, for an improvement of up to 30x over VirtIO kernel interfaces with OVS, depending of course on the details of the VNF and its actual bandwidth requirements.

The second option applies if the VNF has been designed to use DPDK. In this case, much higher performance is possible on Titanium Cloud, simply by linking in an open-source AVS-aware driver, which in our experience takes 15 minutes or so. The AVS DPDK Poll Mode Driver (PMD) is available free of charge from Wind River’s open-source repository, hosted on GitHub. Just as with the vhost scenario, there’s no need to maintain a special version of the VNF for AVS: once the AVS DPDK PMD has been compiled into the VNF, it is initialized at runtime whenever the VNF detects that it is running on Titanium Cloud.
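
The reason a recompile is all it takes is that a DPDK application talks to whichever PMD is linked in through the generic rte_ethdev API. The receive loop below is a minimal sketch (not Wind River code, with port and queue numbers chosen arbitrarily) and contains nothing AVS-specific; with the AVS PMD compiled in, the same calls simply run over AVS interfaces instead of the default virtio ones.

```c
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 512
#define BURST_SIZE    32

int main(int argc, char **argv)
{
    /* EAL probes the available devices; this is where a virtio PMD or,
     * when compiled in, the AVS PMD gets bound to the VNF's virtual
     * interfaces. The application code itself does not change. */
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    struct rte_mempool *pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                                       rte_socket_id());
    if (pool == NULL)
        return -1;

    /* Generic ethdev configuration: one RX and one TX queue on port 0. */
    struct rte_eth_conf conf = { 0 };
    uint16_t port = 0;
    if (rte_eth_dev_configure(port, 1, 1, &conf) != 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE, rte_socket_id(), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port, 0, RX_RING_SIZE, rte_socket_id(), NULL) != 0 ||
        rte_eth_dev_start(port) != 0)
        return -1;

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        /* ... process packets here, then transmit or free them ... */
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```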

Adding the AVS DPDK PMD to a VNF will typically deliver a performance improvement of up to 40x compared to using VirtIO kernel interfaces with OVS, depending of course on the details of the VNF itself and its actual bandwidth requirements.

After working closely with many VNF partners, we have found adding AVS support to be seamless, quick and high-value, thanks to the performance improvements it brings. The initial bring-up and functional test step requires no change to the VNF, and up to 30x performance (vs. the VirtIO kernel path on OVS) is achieved with no code changes at all, using the standard VirtIO interface.

For DPDK-based VNFs, a straightforward recompilation to add the AVS DPDK PMD results in up to a 40x performance improvement compared to a configuration using VirtIO kernel interfaces.

By using whichever of these two open-source drivers is applicable, our VNF partners can fully leverage the performance features of AVS, allowing them to deliver VNFs with compelling performance to service providers deploying NFV in their infrastructure.

We’re looking forward to welcoming new VNF partners into the Titanium Cloud ecosystem and collaborating with them to deliver the industry-leading VNF performance that helps service providers maximize the OPEX savings from deploying NFV.