GradientGraph

Optimize Network Flows

Based on a novel algorithmic approach, GradientGraph from Reservoir Labs provides a new platform to detect, characterize, predict and resolve network bottlenecks in high-speed data networks. Enterprises that depend on reliable, predictable network communication performance can count on this cost-effective approach to system optimization.

GradientGraph from Reservoir Labs provides a new paradigm to qualitatively and quantitatively understand network bottlenecks and flows, enabling real-time traffic engineering, high-performance baselining, and a framework for high-precision network capacity planning. GradientGraph is based on a new mathematical model that captures the fundamental bottleneck structure of a network, revealing key operational insights such as the regions of influence of bottlenecks and the ripple effects of small perturbations in the network. Through the dualism between bottlenecks and flows, the model also identifies flows that, while not heavy hitters in the traditional sense, lead to severe system-wide performance degradation. GradientGraph Analytics provides a cost-effective way for enterprises to optimize their network for reliable and predictable service.
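To illustrate the idea of a bottleneck structure, the sketch below computes a max-min fair (water-filling) bandwidth allocation and derives a link/flow dependency graph from it. The names and the graph construction are illustrative assumptions based on the QTBS publications listed further down this page, not GradientGraph's actual API or algorithm.

```python
# Minimal sketch of a bottleneck-structure computation, assuming the standard
# max-min fair (water-filling) model described in the QTBS publications below.
# All names (max_min_allocation, bottleneck_structure) are illustrative.
from collections import defaultdict

def max_min_allocation(link_capacity, flow_paths):
    """Return per-flow rate and the link each flow is bottlenecked at."""
    rate, bottleneck = {}, {}
    remaining = dict(link_capacity)                       # unallocated capacity per link
    active = {f: set(p) for f, p in flow_paths.items()}   # flows not yet resolved
    while active:
        # Fair share each link could still give its unresolved flows.
        share = {l: remaining[l] / sum(1 for p in active.values() if l in p)
                 for l in remaining if any(l in p for p in active.values())}
        l_min = min(share, key=share.get)                 # tightest link saturates first
        for f in [f for f, p in active.items() if l_min in p]:
            rate[f], bottleneck[f] = share[l_min], l_min
            for l in active.pop(f):
                remaining[l] -= rate[f]
    return rate, bottleneck

def bottleneck_structure(link_capacity, flow_paths):
    """Directed graph: link -> flows it bottlenecks, flow -> other links it crosses."""
    rate, bottleneck = max_min_allocation(link_capacity, flow_paths)
    edges = defaultdict(set)
    for f, path in flow_paths.items():
        edges[bottleneck[f]].add(f)
        edges[f].update(l for l in path if l != bottleneck[f])
    return rate, dict(edges)

# Example: two links shared by three flows.
rates, graph = bottleneck_structure(
    {"l1": 10, "l2": 4},
    {"f1": ["l1"], "f2": ["l1", "l2"], "f3": ["l2"]})
print(rates)   # f2 and f3 bottlenecked at l2 (2 each); f1 gets the rest of l1 (8)
print(graph)   # perturbing l2 ripples to l1 through flow f2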

Learn more about GradientGraph

Core Capabilities

GradientGraph enables network optimization for capacity, throughput, cost and Quality of Service (QoS) by modeling traffic engineering policies and predicting their effect on specific network segments as well as holistic infrastructures.

GradientGraph allows organizations to analyze and identify the links that are critical to a network's overall stability, enabling a new approach to managing and ensuring network resilience.

Many organizations depend on networks provided by a number of different owners. GradientGraph allows for unified visibility, planning and management in a heterogeneous environment, even when only partial network detail is available.

In support of network design teams building a new network or upgrading an existing system, GradientGraph delivers detailed, application- and network-specific analysis of bandwidth requirements for end-to-end modeling of network capacity. Traditionally, capacity planning is performed by scaling resources based on projected future demand. GradientGraph brings a new perspective to the task of network upgrades: by allowing the historical bottleneck structure to be replayed and accurately measured, operators can design upgrade paths that are not only adequate for future demand but also drive toward optimized bottleneck structures, leading to more cost-effective operation.

As consumers or providers, an increasing number of firms require a guaranteed level of network-enabled service, typically based on service level agreements (SLAs). GradientGraph ensures those business obligations can be met. Additionally, for organizations with deadline-bound service needs whose payloads must complete within specific time parameters, GradientGraph ensures these flows meet their requirements within the constraints of the network.

For organizations that require optimal path routing for high-priority flows, GradientGraph considers the unique network topology and identifies the routing that maximizes throughput. For organizations with fat-tree or similar topologies in particular, GradientGraph delivers improved, cost-effective operation by reducing the number of links required and providing optimal cost/benefit routing.

See it in Action

SIGCOMM'21

Noah Amsel, Research Engineer at Reservoir Labs, demonstrates the existence of bottleneck structures in communication networks that mathematically reveal the influences bottlenecks and flows exert on each other. This structure provides key insights that help identify optimal network designs and optimize traffic engineering.

More Information

R-Core

High-Performance Packet Path Accelerator

Providing high-performance capture, R-Core is the ultimate packet forwarding engine. R-Core supports DPDK and makes its powerful HW acceleration capabilities available to multiple application instances, including Zeek and Suricata workers. R-Core combines lightning-fast performance features such as zero packet copy, proprietary queuing and lockless data structures, and kernel bypass with options for fine-grained control of NUMA affinity and CPU core pinning. Although application-specific performance varies, benchmarks demonstrate that at input rates of 10Gbps, R-Core's optimizations increase application performance by up to 500% while reducing packet drops by up to 200%.
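As a simple illustration of the core-pinning and NUMA-affinity idea mentioned above, the sketch below pins packet-worker processes to dedicated CPU cores. It assumes a Linux host (os.sched_setaffinity is Linux-specific) and illustrative core numbers; it is only a schematic analogue of the technique, not R-Core's DPDK-based, zero-copy implementation.

```python
# Minimal sketch of CPU core pinning for packet workers, assuming a Linux host.
# Illustrative only; R-Core itself uses DPDK, kernel bypass and lockless queues.
import os
import multiprocessing as mp

def packet_worker(core_id, queue):
    # Pin this worker to a single core so its caches and, on a NUMA machine,
    # its local memory stay hot across packet batches.
    os.sched_setaffinity(0, {core_id})
    while True:
        batch = queue.get()
        if batch is None:              # shutdown sentinel
            break
        # ... analyze the packet batch (e.g., hand it to a Zeek/Suricata-style engine) ...

if __name__ == "__main__":
    cores = [2, 3, 4, 5]               # cores reserved for workers (illustrative; must exist on the host)
    queues = [mp.Queue() for _ in cores]
    workers = [mp.Process(target=packet_worker, args=(c, q))
               for c, q in zip(cores, queues)]
    for w in workers:
        w.start()
    for q in queues:
        q.put(None)                    # shut the workers down cleanly in this sketch
    for w in workers:
        w.join()
```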

Core Capabilities

Tailor resources to applications. Flexibly adjust the compute and memory allocated to each application so that performance requirements are satisfied without wasting resources.

Support for HW-accelerated filtering to allow further speedups by offloading traffic from the CPU. This feature includes support for SW-fallback.

Maximize performance by ensuring all memory accesses are performed on the local NUMA node.

Avoid slowdowns from memory contention by ensuring that no shared data structure needs to be locked.

Increase the number of application workers while avoiding bottlenecks across the pipeline.

Leverage hardware-aided packet acceleration and utilize state-of-the-art NICs.

More Information

Meet our Team

Jordi Ros-Giralt

Fellow & Managing Engineer
Bio

Sruthi Yellamraju

Principal Software Engineer
Bio

Jim Ezick

VP Engineering
Bio

Alison Ryan

VP Business Development
Bio

Noah Amsel

Research Engineer
Bio

Brendan von Hofe

Research Engineer
Bio

Get in touch with one of our experts today

The Latest

Recent Publications

Systems and methods for multiresolution priority queues

A system for storing and extracting elements according to their priority takes into account not only the priorities of the elements but also three additional parameters, namely, a priority resolution pΔ and two priority limits pmin and pmax. By allowing an ordering error if the difference in the priorities of elements is

Read More »
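As a rough illustration of the idea sketched in the abstract above, the snippet below implements a bucket-based priority queue parameterized by a resolution pΔ and limits pmin and pmax; elements whose priorities fall within the same resolution slot may be returned in either order. This is an assumption-based reconstruction for illustration only, not the data structure from the patent.

```python
# Minimal sketch of a bucket-based multiresolution priority queue, following the
# parameters named in the abstract above (p_delta, p_min, p_max). Illustrative only.
from collections import deque

class MultiresolutionPriorityQueue:
    def __init__(self, p_min, p_max, p_delta):
        self.p_min, self.p_delta = p_min, p_delta
        n_slots = int((p_max - p_min) / p_delta) + 1
        self.slots = [deque() for _ in range(n_slots)]   # one FIFO slot per resolution step

    def insert(self, priority, item):
        # Priorities closer together than p_delta land in the same slot,
        # trading exact ordering for cheap insertion.
        idx = int((priority - self.p_min) / self.p_delta)
        self.slots[idx].append(item)

    def extract_min(self):
        for slot in self.slots:          # first non-empty slot holds a minimal element
            if slot:
                return slot.popleft()
        raise IndexError("extract from empty queue")

q = MultiresolutionPriorityQueue(p_min=0.0, p_max=10.0, p_delta=0.5)
q.insert(3.2, "a"); q.insert(0.4, "b"); q.insert(3.4, "c")
print(q.extract_min())   # "b"; "a" and "c" share a slot and may come out in either order
```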

Designing Data Center Networks Using Bottleneck Structures

This paper provides a mathematical model of data center performance based on the recently introduced Quantitative Theory of Bottleneck Structures (QTBS). Using the model, we prove that if the traffic pattern is interference-free, there exists a unique optimal design that both minimizes maximum flow completion time and yields maximal system-wide

Read More »

A Quantitative Theory of Bottleneck Structures for Data Networks

The conventional view of the congestion control problem in data networks is based on the principle that a flow’s performance is uniquely determined by the state of its bottleneck link, regardless of the topological properties of the network. However, recent work has shown that the behavior of congestion-controlled networks is

Read More »