By Charlie Ashton
In a recent post we outlined some of the approaches we’re taking to solve OpenStack-related problems and thereby enable it to become a viable solution for VM orchestration in telco applications such as virtual CPE. This topic is also covered in a detailed white paper that you can find here.
As we talk with service providers and enterprises, the conversations regularly turn to another OpenStack concern that can impose significant cost penalties in applications such as network virtualization and data centers: the limited performance of the virtual routing feature in vanilla OpenStack distributions.
In this post we’ll briefly outline this performance problem and explain one solution that has been proven to address it.
Neutron Router Challenges
All OpenStack distributions include a virtual router (vRouter), typically implemented as a kernel function within Neutron. This vRouter enables traffic to be routed either “East-West” between Virtual Machines (VMs) within the cloud (on the same or different compute nodes) or “North-South” between VMs and external networks.
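As a concrete illustration, this vRouter is typically created and wired up through the standard OpenStack client. A minimal sketch follows; the router, subnet, and network names ("demo-router", "tenant-subnet", "public") are illustrative, not taken from any particular deployment:

```shell
# Create a Neutron router and attach it for both traffic directions.
# Names here are illustrative placeholders.
openstack router create demo-router

# East-West: attach a tenant subnet so its VMs can reach
# other tenant networks through the router.
openstack router add subnet demo-router tenant-subnet

# North-South: set an external gateway so VM traffic can reach
# networks outside the cloud (SNAT applies by default).
openstack router set demo-router --external-gateway public
```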
For many “simple” applications, the Neutron vRouter can be used instead of either a vRouter hosted in a VM or a physical router installed as additional hardware in the rack. On the surface this looks like a good deal, with apparent cost savings.
It’s worth noting at this point that the Neutron vRouter doesn’t provide all the capabilities of a typical physical appliance or its virtualized equivalent. For example, it’s a shared service with static gateway and static routing functions. There’s no support for more complex protocols like BGP or OSPF, nor for ownership by a single tenant. There will always be a need for VNF-based routers in many network virtualization use cases.
Within the limited scope of features that it does provide, the Neutron vRouter has a couple of major constraints.
First, the bandwidth of the Neutron vRouter is extremely limited: typically no more than 1% of 10G line rate.
Second, although the Neutron vRouter supports Network Address Translation (NAT), specifically Source NAT (SNAPT), albeit with a slow kernel-based implementation, it does not support Destination NAT (DNAPT) for externally-originated traffic.
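This distinction matters because DNAPT is what allows externally-originated traffic to reach a service inside the cloud. A minimal sketch of the difference (the function names, packet fields, and addresses are illustrative, not drawn from Neutron's implementation):

```python
# Illustrative sketch: SNAPT rewrites the *source* of outbound,
# VM-originated packets; DNAPT rewrites the *destination* of inbound,
# externally-originated packets -- the case the kernel-based Neutron
# vRouter does not cover.

def snapt(packet, router_ip, port_map):
    """Source NAPT (VM -> external): hide the private source address
    behind the router's public IP and a unique translated port."""
    key = (packet["src_ip"], packet["src_port"])
    ext_port = port_map.setdefault(key, 40000 + len(port_map))
    return {**packet, "src_ip": router_ip, "src_port": ext_port}

def dnapt(packet, dnat_rules):
    """Destination NAPT (external -> VM): forward traffic arriving on
    a public IP/port to the private address of the serving VM."""
    target = dnat_rules.get((packet["dst_ip"], packet["dst_port"]))
    if target is None:
        return None  # no rule: externally-originated traffic is dropped
    dst_ip, dst_port = target
    return {**packet, "dst_ip": dst_ip, "dst_port": dst_port}

# Outbound flow from a VM at 10.0.0.5:
out_pkt = {"src_ip": "10.0.0.5", "src_port": 5001,
           "dst_ip": "203.0.113.9", "dst_port": 80}
print(snapt(out_pkt, "198.51.100.1", {}))

# Inbound flow to a published service behind the router's public IP:
rules = {("198.51.100.1", 8080): ("10.0.0.5", 80)}
in_pkt = {"src_ip": "203.0.113.9", "src_port": 33000,
          "dst_ip": "198.51.100.1", "dst_port": 8080}
print(dnapt(in_pkt, rules))
```

Without the second translation, inbound connections have no path to a VM's private address, which is why the missing DNAPT support forces users toward a separate router.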
These limitations mean that the Neutron vRouter is inadequate for many applications, forcing users either to instantiate a VNF-based router or to install a physical router. Either approach imposes a significant cost and power overhead.
Within Titanium Server, the Neutron vRouter is replaced by an Accelerated Virtual Router (AVR) which is implemented as part of the Accelerated Virtual Switch (you can read about the OPEX benefits delivered by the Accelerated Virtual Switch in this post).
Titanium Server’s AVR is fully compatible with all the relevant Neutron APIs, so any software developed to leverage these APIs will run correctly on Titanium Server with no changes needed. It will just run a lot faster. And there’s no risk of vendor lock-in.
250x Performance Improvement
Testing across multiple use cases has shown that the AVR delivers a massive 250x improvement in throughput and a 9x reduction in latency, compared to the Neutron vRouter when running on identical hardware platforms. For example, on 256-byte packets AVR achieves an average throughput of 50% of 10G line rate and an average latency of 51 µs, compared to less than 1% of line rate and 571 µs for the Neutron vRouter.
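To put those percentages in perspective, they can be converted into packet rates. The arithmetic below is a rough sketch assuming the standard 20 bytes of per-packet Ethernet wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap); it is not taken from the test reports themselves:

```python
# Rough packets-per-second arithmetic for 256-byte frames on 10G Ethernet.
# Assumes 20 bytes of per-packet wire overhead (preamble + SFD + IFG).
LINE_RATE_BPS = 10e9
FRAME_BYTES = 256
WIRE_BYTES = FRAME_BYTES + 20          # bytes consumed on the wire per frame

line_rate_pps = LINE_RATE_BPS / (WIRE_BYTES * 8)
avr_pps = 0.50 * line_rate_pps         # AVR: ~50% of line rate
neutron_pps = 0.01 * line_rate_pps     # Neutron vRouter: <1% of line rate

print(f"10G line rate:  {line_rate_pps / 1e6:.2f} Mpps")
print(f"AVR (50%):      {avr_pps / 1e6:.2f} Mpps")
print(f"Neutron (<1%):  {neutron_pps / 1e6:.2f} Mpps")
```

At this packet size the 50%-of-line-rate figure corresponds to roughly 2.3 million packets per second routed by AVR, versus well under 50,000 for the kernel-based Neutron vRouter.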
In terms of features, AVR provides full accelerated support for NAT and SNAPT/DNAPT, removing one of the major obstacles to the use of OpenStack-based routing.
Just like the Neutron vRouter, AVR has full support for distributed routing, enabling the vRouter to run on the same compute node as the VMs and removing the need for separate network nodes dedicated to the routing function.
Our customers have confirmed that for many telecom-oriented applications, the performance and features provided by AVR obviate the need for either a VNF-based router or a physical router, resulting in significant savings of both cost and power.
Finally, AVR is highly efficient in terms of resource utilization. As part of the Titanium Server Accelerated vSwitch, it can be configured to run on as few as one processor core (depending on the bandwidth required by the VMs), maximizing the number of cores available for running revenue-generating VMs. This post provides insights into the OPEX advantages achieved by increased VM density.
Cost-Effective Virtual Routing
In conclusion, Titanium Server’s Accelerated Virtual Router makes OpenStack-based routing cost-effective for a wide range of applications. It transparently retains full compatibility with software written to leverage the vanilla Neutron vRouter in traditional OpenStack distributions, ensuring there’s no risk of vendor lock-in.