To do today: Boost my VNF’s performance and then leave work in time to beat the traffic

C. Ashton

We published a couple of articles recently about the benefits of the Accelerated vSwitch (AVS) that’s integrated into the Wind River Titanium Server NFV Infrastructure (NFVI) platform. Since those two posts were published, several of the Virtual Network Function (VNF) partners in our Titanium Cloud ecosystem have completed the process of validating their VNFs with Titanium Server and leveraging the performance features of AVS. In this post, we’ll draw on their experiences to describe just how quick, easy and rewarding it is to host a VNF on Titanium Server and achieve up to a 40x improvement in real-world performance.

To recap:


In the first article, we explained how AVS enables service providers to achieve the level of performance that they need for NFV-based services, without having to use bypass techniques such as PCI Pass-through and Single-Root I/O Virtualization (SR-IOV). The latter approaches don’t support the infrastructure security and reliability features that represent critical requirements for telecom and cable networks.

In the second post, we discussed how the high switching performance delivered by AVS translates directly into improvements in Virtual Machine (VM) density, which in turn leads to quantifiable OPEX savings for service providers.

Now what:

When we work with a partner that has an existing VNF they want to run on Titanium Server, their first objective is typically to perform a functional test and confirm that the application behaves identically on Titanium Server as it does on another virtual switch such as Open vSwitch (OVS). As long as the VNF uses the standard VirtIO Linux driver (and they all do), this is a quick step. AVS is fully compatible with VirtIO, so the existing VNF runs unmodified on Titanium Server: no code changes and no recompilation are needed.

That first step results in a VNF that runs fine on Titanium Server, but it doesn’t deliver a performance boost. To take advantage of the performance features in AVS, an additional step is required and the details depend on the architecture of the VNF.

If the VNF uses the standard Linux kernel I/O, it’s very straightforward. Again, no changes are required to the VNF code and no recompilation is necessary. The developer simply uses the Accelerated Virtual Port (AVP) Kernel Loadable Module (KLM) that has been written specifically for AVS. This is an open-source KLM, available at Wind River’s open-source repository at no charge. There’s no need to maintain a special version of the VNF to use with AVS: the AVP KLM remains as part of the standard load and is dynamically loaded into the guest OS at runtime, just like other KLMs that may be included in the VNF. The only drivers that are actually loaded are those corresponding to the devices in the NFVI platform that is hosting the VNF.
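Because the AVP KLM is loaded dynamically alongside the guest’s standard drivers, a quick way to see which driver actually ended up bound to each guest interface is to inspect sysfs. The following is a minimal illustrative sketch, not taken from the AVP KLM itself; the driver names it prints depend on the guest kernel and on the module’s actual name, which we don’t assume here.

```c
/*
 * Minimal sketch: list each network interface in the guest and the
 * kernel driver bound to it, by reading the
 * /sys/class/net/<iface>/device/driver symlink. On a Titanium Server
 * host you would expect to see the AVP driver rather than virtio_net;
 * the exact driver name is an assumption, not taken from the post.
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <limits.h>
#include <unistd.h>
#include <libgen.h>

int main(void)
{
    DIR *net = opendir("/sys/class/net");
    struct dirent *entry;

    if (!net) {
        perror("opendir /sys/class/net");
        return 1;
    }

    while ((entry = readdir(net)) != NULL) {
        char link_path[PATH_MAX];
        char target[PATH_MAX];
        ssize_t len;

        if (entry->d_name[0] == '.')
            continue;  /* skip "." and ".." */

        /* The driver symlink only exists for interfaces backed by a device. */
        snprintf(link_path, sizeof(link_path),
                 "/sys/class/net/%s/device/driver", entry->d_name);

        len = readlink(link_path, target, sizeof(target) - 1);
        if (len < 0) {
            printf("%-10s (no device driver, e.g. loopback)\n", entry->d_name);
            continue;
        }
        target[len] = '\0';

        /* The basename of the symlink target is the driver name. */
        printf("%-10s driver: %s\n", entry->d_name, basename(target));
    }

    closedir(net);
    return 0;
}
```

Running a check like this in the guest is a simple way to confirm, during the bring-up step, that the interfaces presented by the NFVI platform are being served by the expected driver.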

Adding the AVP KLM to a VNF will typically deliver a performance improvement of up to 9x compared to using the VirtIO driver, depending of course on the details of the VNF itself and its actual bandwidth requirements.

If the VNF has been designed to use the Intel® DPDK library for accelerated I/O, much higher performance is possible with AVS simply by linking in an AVS-aware driver, a step which in our experience takes 15 minutes or so. In this case, the developer uses the AVS DPDK Poll Mode Driver (PMD), also available at Wind River’s open-source repository at no charge. Again, there’s no need to maintain a special version of the VNF to use with AVS: once the AVS DPDK PMD has been compiled into the VNF, it’s initialized at runtime as required when the VNF detects that it is running on a Titanium Server NFVI platform.
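To make the mechanics concrete, here is a minimal sketch of the guest-side DPDK initialization path. It assumes a reasonably recent DPDK release (API names vary between versions) and contains nothing specific to the AVS DPDK PMD: once that PMD is compiled into the VNF, its ports simply appear as regular ethdev ports during the usual EAL initialization and port enumeration shown below.

```c
/*
 * Minimal sketch of guest-side DPDK initialization, assuming a recent
 * DPDK release. Any statically linked PMD (including an AVS-aware one)
 * registers its devices during rte_eal_init(); the application then
 * enumerates ports exactly as it would with any other PMD.
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    uint16_t port_id;
    int ret;

    /* Initialize the Environment Abstraction Layer; PMDs register here. */
    ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }

    printf("%u ethdev port(s) available\n", rte_eth_dev_count_avail());

    /* Print the driver backing each port; on Titanium Server one would
     * expect the AVS PMD's name here (the exact name is an assumption). */
    RTE_ETH_FOREACH_DEV(port_id) {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) == 0)
            printf("port %u: driver %s\n", port_id, dev_info.driver_name);
    }

    rte_eal_cleanup();
    return 0;
}
```

The key point is that the VNF’s packet-processing code is unchanged; the only build-time difference is which PMDs are linked into the application.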

Adding the AVS DPDK PMD to a VNF will typically deliver a performance improvement of up to 40x compared to using the VirtIO driver, depending of course on the details of the VNF itself and its actual bandwidth requirements.

The AVS DPDK PMD provides another important benefit too: unlike systems that use a DPDK-accelerated version of OVS, such as the discontinued OVDK, systems based on AVS do not require the DPDK version in the guest VNF(s) to match the DPDK version in the host platform (in this case, Titanium Server). This gives service providers significant flexibility: they are free to select from a wide range of DPDK-based VNFs without being constrained by the version of DPDK that each one uses.


After working closely with several VNF partners, we have found that adding AVS support is seamless, quick and high-value, given the performance improvements it brings. The initial bring-up / functional test step requires no change to the VNF. Up to 9x performance (vs. VirtIO) is achieved with no code changes at all, just the inclusion of the AVP KLM in the VNF’s image for dynamic loading at runtime as needed. And a straightforward recompilation to add the AVS DPDK PMD results in up to 40x performance for DPDK-based VNFs.

By using whichever of these two open-source drivers is applicable, our VNF partners can fully leverage the performance features of AVS, allowing them to deliver VNFs with compelling performance to service providers deploying NFV in their infrastructure.

We’re looking forward to welcoming new VNF partners into the Titanium Cloud ecosystem and collaborating with them to deliver best-in-class end-to-end solutions for both telecom and cable service providers.