Two Thousand Plus Virtual NICs on a Single Physical NIC
The Solarflare XtremeScale X1 family of network interface cards (NICs) internally supports 2,048 virtual NICs (vNICs). Typically, these are used to accelerate network applications by mapping an application's network flow to a specific vNIC, which dramatically improves the network performance of virtual machines and containers. With containers, we place an instance of Onload, our kernel-bypass stack, into the container, and applications see near-bare-metal performance, as if they were running on dedicated hardware.
Solarflare NIC ASIC
2,048 Virtual instances (parallel steerable flows) supported by a single XtremeScale controller to intelligently direct packets to the correct CPU core where the container or VM is executing
Moving the communications stack into the container enables vNIC mapping and direct NIC-to-container traffic routing. This makes it practical to run hundreds or even thousands of containers on a single physical server; the limit becomes the scalability of the containers themselves in terms of available CPU cycles and memory.
Internal L2 Switch
One of the unique characteristics of the XtremeScale family of network interface cards (NICs) is the on-chip Layer-2 switch. Much like a physical network switch at the top of a data-center rack, this built-in Layer-2 switch rapidly filters and steers packets to and from multiple physical network ports and Xtreme Packet Engines. It does this by quickly reading the Ethernet packet header, then attempting to match the header against existing hardware filters. If a match exists, one or more of the corresponding actions is taken: drop, pass, clone, or count. For example, ServerLock™ leverages filtering to drop or pass packets, sometimes counting them as well. SolarCapture can also utilize hardware filtering, but it additionally relies on cloning when capture is running in sniffing mode.
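The match-then-act behavior described above can be pictured in software. The sketch below is an illustrative model only; the `Filter` class, field names, and action handling are hypothetical stand-ins for the hardware filter table, not Solarflare's actual filter API.

```python
# Illustrative software model of a hardware filter table: each filter matches
# header fields and carries one or more actions (drop, pass, clone, count).
# Names here are hypothetical, not a real Solarflare interface.
from dataclasses import dataclass

@dataclass
class Filter:
    match: dict    # header fields to match, e.g. {"dst_port": 80}
    actions: set   # any of {"drop", "pass", "clone", "count"}
    hits: int = 0  # incremented when "count" is among the actions

def apply_filters(packet: dict, filters: list) -> list:
    """Return the (sorted) list of actions taken for this packet."""
    for f in filters:
        if all(packet.get(k) == v for k, v in f.match.items()):
            if "count" in f.actions:
                f.hits += 1
            return sorted(f.actions)
    return ["pass"]  # default: deliver unmatched packets

filters = [
    Filter(match={"dst_port": 23}, actions={"drop", "count"}),  # block telnet
    Filter(match={"dst_port": 80}, actions={"pass"}),           # allow http
]
print(apply_filters({"dst_port": 23}, filters))   # ['count', 'drop']
print(apply_filters({"dst_port": 443}, filters))  # ['pass']
```

The first matching filter wins, which mirrors the priority ordering a hardware filter table typically uses.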
Unlike NICs from competing vendors, Solarflare provides 1,024 virtualized NIC instances per physical NIC port, for 2,048 vNICs total per adapter. These vNICs can be matched to specific application traffic flows, which are then accelerated directly in dedicated silicon within the XtremeScale X1. Each vNIC has its own receive and transmit queue memory and is designed to rapidly steer packets to and from user space with minimal system overhead. This flow steering enables the XtremeScale X1 to steer some packets to DPDK, accelerate others directly into Onload in user space (for example, to one or more VMs), and route still others to the kernel.
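Per-flow vNIC mapping with a fixed per-port budget can be sketched as a simple allocation table, where each new flow claims its own queue pair until the 1,024-vNIC limit is reached. This is a hypothetical illustration of the concept, not the actual driver interface; the `Port` class and fallback behavior are invented for the example.

```python
# Hypothetical sketch of per-flow vNIC allocation: each new flow 5-tuple gets
# its own vNIC (RX/TX queue pair) until the per-port limit of 1,024 is hit.
VNICS_PER_PORT = 1024

class Port:
    def __init__(self):
        self.flow_to_vnic = {}  # 5-tuple -> vNIC index

    def vnic_for(self, flow):
        """Return the vNIC steering this flow, allocating one if needed."""
        if flow not in self.flow_to_vnic:
            if len(self.flow_to_vnic) >= VNICS_PER_PORT:
                return None  # out of vNICs: fall back to a shared/kernel path
            self.flow_to_vnic[flow] = len(self.flow_to_vnic)
        return self.flow_to_vnic[flow]

port = Port()
flow = ("10.0.0.1", 40000, "10.0.0.2", 80, "tcp")
print(port.vnic_for(flow))  # 0: the first flow gets the first vNIC
print(port.vnic_for(flow))  # 0 again: same flow, same vNIC
```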
Receive Flow Steering
Solarflare Advanced Receive Flow Steering (SARFS) keeps track of which CPU cores are processing specific network flows and steers subsequent packets belonging to those flows to the same cores. The key here is that SARFS tracks new outbound flows by core and then creates a corresponding filter in hardware within the XtremeScale X1 NIC. That filter is used to steer inbound packets belonging to the flow intelligently and rapidly to the proper CPU core. SARFS is provided for servers that don't support Linux's Accelerated Receive Flow Steering (ARFS).
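The track-outbound/steer-inbound bookkeeping can be modeled in a few lines. This is an illustrative model of the idea only, not Solarflare's driver code; the dictionary below stands in for the hardware filter that SARFS installs.

```python
# Illustrative model of SARFS: record which CPU core transmitted a flow,
# then steer inbound packets of that flow back to the same core.
flow_table = {}  # 5-tuple -> CPU core (stands in for a hardware filter)

def on_transmit(flow, core):
    """A new outbound flow seen on `core`: install a steering entry."""
    flow_table[flow] = core

def steer_inbound(flow, default_core=0):
    """Deliver an inbound packet to the core that owns the flow."""
    return flow_table.get(flow, default_core)

flow = ("10.0.0.2", 80, "10.0.0.1", 40000, "tcp")
on_transmit(flow, core=3)     # the application on core 3 sends first
print(steer_inbound(flow))    # 3: replies land on the same core
print(steer_inbound(("x",)))  # 0: an unknown flow falls back to the default
```

Keeping a flow on one core preserves cache locality: the data the application touched on transmit is still warm when the reply arrives.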
There are two production versions of Solarflare's POSIX-compliant sockets acceleration technology: Onload and ScaleOut Onload. There are three significant differences between these products: i) ScaleOut Onload does not accelerate UDP; ii) ScaleOut Onload does not have access to the ultra-low-latency NIC firmware; and iii) ScaleOut Onload is included free with XtremeScale X1 NICs. Most client-server applications today use only TCP. With ScaleOut Onload, applications will see at least a 50% reduction in TCP transaction response times. New connections will also see substantial performance gains thanks to the enhanced socket-reuse library. Finally, since all TCP transactions bypass the kernel, all applications will see an overall boost in performance.
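Because Onload is POSIX compliant, it accelerates unmodified socket code. The sketch below is plain standard-library TCP with nothing Onload-specific in it; on a system with Onload installed, such a program would typically be launched under the Onload wrapper (or with the Onload library preloaded) so its TCP path runs in user space, with no code changes.

```python
# An ordinary POSIX TCP client/server pair over loopback. Run unmodified
# under Onload, the send/recv path would bypass the kernel; the code itself
# stays exactly the same.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo one message back

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # ephemeral port on loopback
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
t.join()
client.close()
listener.close()
print(reply)  # b'ping'
```

This transparency is the point of sockets-level acceleration: the same binary runs with or without the kernel-bypass stack underneath it.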