
Database Caching

Realizing unprecedented performance with cost-efficient scalability

The Problem: OS Overhead Penalty

Kernel-based networking is limited in efficiency by large numbers of memory copies, frequent context switches, high interrupt rates, and lock contention; the problem is further aggravated when kernel network processing is pinned to CPU0.
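
To make that overhead concrete, the sketch below (not part of the original brief) shows a conventional sockets exchange with a caching server: every send and receive is a system call that crosses into the kernel, copies data between kernel and user buffers, and relies on interrupt-driven wakeups. The address and port are illustrative and assume a local Redis instance.

```python
import socket

# Conventional kernel-stack I/O: each sendall()/recv() is a system call, so every
# request incurs a user/kernel transition, a data copy between kernel and user
# buffers, and (on receive) an interrupt-driven wakeup -- the per-request costs
# that kernel bypass removes.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 6379))   # assumes a local Redis caching server

sock.sendall(b"PING\r\n")           # syscall: copy the request into the kernel
reply = sock.recv(4096)             # syscall: block, wake on NIC interrupt, copy the reply out
print(reply)                        # expected: b'+PONG\r\n'
sock.close()
```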

The Solution: Bypass the OS and Run Networking Services in Lightning-Fast User Space

Solarflare’s kernel bypass technology eliminates the network bottleneck and the overhead penalty of the operating system, allowing industry-standard caching servers to support 100% more requests. With Cloud Onload, data centers can deploy fewer remote caching servers that support more requests, realizing significant cost savings.

Acceleration Creates Opportunities for Fewer Caching Servers or More App Servers

  • IO through the OS kernel: 200K RPS per cache node
  • IO through kernel bypass: 800K RPS per cache node
  • Cloud Onload increases capacity, supporting more web/app servers

Solarflare’s Cloud Onload Solution

Solarflare accelerates performance by:

  • Reducing interrupts, context switches, and data copies
  • Reducing latency by 50%
  • Increasing message rates by 100%

Seamlessly integrates into existing infrastructure

  • Binary compatible with industry standard APIs
  • No software modifications are needed (see the deployment sketch after this list)
  • Standards-based solution uses TCP/IP and UDP
  • No specialized protocols needed
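
Because the stack is binary-compatible with the standard sockets API, an unmodified application such as redis-server is typically accelerated simply by launching it under Onload, commonly via the onload wrapper script or by preloading the Onload library. The sketch below illustrates the preload approach; the library name, path, and server arguments are assumptions to be adapted to the actual installation.

```python
import os
import subprocess

# Illustrative launch of an unmodified redis-server with the Onload user-space
# stack preloaded. libonload.so intercepts the standard socket calls; no changes
# to the Redis binary or its configuration model are required.
env = dict(os.environ)
env["LD_PRELOAD"] = "libonload.so"   # assumed library name/location

subprocess.run(["redis-server", "--port", "6379"], env=env, check=True)
```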

Available with Solarflare XtremeScale Ethernet adapters and support

Test Results Validate That Solarflare’s Cloud Onload-Enabled Database Caching Servers Handle 100% More Redis Requests

For in-memory databases, Cloud Onload delivers a 100% performance improvement over the same Ethernet adapter running a kernel-based driver on a Redis database caching server.
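
As a rough way to reproduce this kind of comparison, the sketch below (not Solarflare's test harness; the standard redis-benchmark utility is the more common tool) drives pipelined SET requests from Python and reports requests per second. Host, port, and request counts are placeholders; run it once against a server on the kernel stack and once against the same server launched under Cloud Onload.

```python
import time
import redis  # pip install redis

# Simple throughput probe: issue pipelined SET requests and report requests/sec.
r = redis.Redis(host="127.0.0.1", port=6379)

requests, batch = 200_000, 100
start = time.time()
for _ in range(requests // batch):
    pipe = r.pipeline(transaction=False)
    for i in range(batch):
        pipe.set(f"key:{i}", "value")
    pipe.execute()

elapsed = time.time() - start
print(f"{requests / elapsed:,.0f} SET requests/sec")
```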