June 8, 2020
In this paper, we introduce the Theory of Bottleneck Ordering, a
mathematical framework that reveals the bottleneck structure of
data networks. This theoretical framework provides insights into
the inherent topological properties of a network in at least three
areas: (1) It identifies the regions of influence of each bottleneck; (2)
it reveals the order in which bottlenecks (and flows traversing them)
converge to their steady state transmission rates in distributed
congestion control algorithms; and (3) it provides key insights to
the design of optimized traffic engineering policies. We demonstrate
the efficacy of the proposed theory in TCP congestion-controlled
networks for two broad classes of algorithms: congestion-based
algorithms (BBR) and loss-based additive-increase/multiplicative-decrease
algorithms (Cubic and Reno). Among other results, our
network experiments show that: (1) Qualitatively, both classes of
congestion control algorithms behave as predicted by the bottleneck
structure of the network; (2) flows compete for bandwidth only
with other flows operating at the same bottleneck level; (3) BBR
flows achieve higher performance and fairness than Cubic and Reno
flows due to their ability to operate at the right bottleneck level;
(4) the bottleneck structure of a network is ever-changing and its
levels can be folded due to variations in the flows’ round trip times;
and (5) contrary to conventional wisdom, low-hitter flows can have a
large impact on the overall performance of a network.
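The notion of bottleneck levels referenced above can be illustrated with a classic max-min fair water-filling computation. The sketch below is not the paper's algorithm; the topology, link names, and capacities are hypothetical. It only shows the underlying idea that links saturate, and the flows they bottleneck settle, in a definite order:

```python
# Toy water-filling sketch: compute max-min fair flow rates and the
# order (level) at which each link becomes a bottleneck.
# Hypothetical topology; not the paper's actual algorithm.

def water_fill(links, flows):
    """links: {link: capacity}; flows: {flow: set of links traversed}.
    Returns (flow rates, bottleneck level of each saturating link)."""
    rates, level = {}, {}
    remaining = dict(links)
    active = {f: set(path) for f, path in flows.items()}
    k = 0
    while active:
        k += 1
        # Fair share each still-used link could give its remaining flows.
        share = {l: remaining[l] / sum(1 for p in active.values() if l in p)
                 for l in remaining
                 if any(l in p for p in active.values())}
        r = min(share.values())
        # The tightest link(s) become bottlenecks at this level.
        bottlenecks = {l for l, s in share.items() if s == r}
        for f in [f for f, p in active.items() if p & bottlenecks]:
            rates[f] = r
            for l in active[f]:
                remaining[l] -= r
            del active[f]
        for l in bottlenecks:
            level[l] = k
    return rates, level

links = {'l1': 10.0, 'l2': 30.0}
flows = {'f1': {'l1'}, 'f2': {'l1', 'l2'}, 'f3': {'l2'}}
rates, level = water_fill(links, flows)
```

In this toy instance, `l1` saturates first (level 1), fixing `f1` and `f2` at rate 5.0, and only then does `l2` bottleneck `f3` at the leftover 25.0 (level 2), mirroring the claim that flows compete only with other flows at the same bottleneck level.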
Google Scholar • Article
November 17, 2019
Congestion control algorithms for data networks have been the subject of intense research for the last three decades. While most of this work has focused on characterizing a flow's bottleneck link, the interactions among links and the ripple effects that perturbations in one link can cause on the rest of the network have remained much less understood. The Theory of Bottleneck Ordering is a recently developed mathematical framework that reveals the bottleneck structure of a network and provides a model for understanding such effects. In this paper we present G2, the first operational network optimization framework that utilizes this new theoretical framework to characterize with high precision the performance of bottlenecks and flows. G2 generates an interactive graph structure that describes how perturbations in links and flows propagate, providing operators with new optimization insights and traffic engineering recommendations to help improve network performance. We provide a description of the G2 implementation and a set of experiments using real TCP/IP code to demonstrate its operational efficacy.

This paper has not yet been published. If you would like to receive a copy once available, please contact us or check back here after the 2019 SuperComputing Conference. For more information on G2 technology, visit https://www.reservoir.com/research/tech/networking/.
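As a rough, purely illustrative sketch of the propagation idea (not G2's actual data model or API), the set of links and flows affected by a perturbation can be computed as graph reachability over a dependency graph; the node names and edges below are hypothetical:

```python
from collections import deque

# Minimal sketch: given a hypothetical directed dependency graph among
# links (l*) and flows (f*), find everything a perturbation at one node
# can reach, i.e. its region of influence. Not G2's actual model.

def region_of_influence(graph, source):
    """graph: {node: [nodes it directly affects]}."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {source}

deps = {'l1': ['f1', 'f2'], 'f2': ['l2'], 'l2': ['f3'], 'f3': []}
affected = region_of_influence(deps, 'l1')
```

Here a perturbation at link `l1` ripples through flow `f2` into link `l2` and on to flow `f3`, even though `l1` and `f3` never share a link directly.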
June 4, 2019
A multiresolution parser (MRP) can selectively extract one or more information units from a dataset based on the available processing capacity and/or the arrival rate of the dataset. Should any of these parameters change, the MRP can adaptively change the information units to be extracted such that the benefit or value of the extracted information is maximized while minimizing the cost of extraction. This tradeoff is facilitated, at least in part, by an analysis of the spectral energy of the datasets expected to be processed by the MRP. The MRP can also determine its state after a processing iteration and use that state information in subsequent iterations to minimize the required computations in such subsequent iterations, so as to improve processing efficiency.
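The patented MRP selects units via spectral-energy analysis, which is not reproduced here; the following generic greedy sketch, with hypothetical unit names and numbers, only illustrates the stated value-versus-cost tradeoff of choosing which information units to extract under a processing budget:

```python
# Illustrative only: a generic greedy selection by value density,
# standing in for the MRP's tradeoff between extraction value and
# extraction cost. Unit names, values, and costs are hypothetical.

def select_units(units, budget):
    """units: list of (name, value, cost). Pick greedily by value/cost
    until the processing budget is exhausted."""
    chosen, total_cost = [], 0.0
    for name, value, cost in sorted(units, key=lambda u: u[1] / u[2],
                                    reverse=True):
        if total_cost + cost <= budget:
            chosen.append(name)
            total_cost += cost
    return chosen

units = [('headers', 8.0, 1.0),
         ('payload_stats', 5.0, 2.0),
         ('full_payload', 9.0, 6.0)]
picked = select_units(units, budget=4.0)
```

With a smaller budget (or a faster arrival rate, which effectively shrinks the budget per dataset), the selection adaptively drops the expensive units first, which is the behavior the abstract describes.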
Google Scholar • Article
May 23, 2019
Positive linear programming (PLP), also known as packing and covering linear programs, is an important class of problems frequently found in fields such as network science, operations research, and economics. In this work we demonstrate that all PLP problems can be represented using a network structure, revealing key insights that lead to new polynomial-time algorithms.
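For readers unfamiliar with the problem class: a packing LP maximizes a nonnegative objective subject to nonnegative resource constraints. The tiny instance below (hypothetical numbers, not the paper's network representation) just makes the definition concrete by checking feasibility and evaluating the objective:

```python
# A packing LP has the form: maximize c.x subject to A.x <= b, x >= 0,
# with all entries of A, b, c nonnegative. Hypothetical instance.

def is_feasible(A, b, x):
    """Check A.x <= b and x >= 0 for a candidate point x."""
    if any(xi < 0 for xi in x):
        return False
    for row, bi in zip(A, b):
        if sum(a * xi for a, xi in zip(row, x)) > bi + 1e-9:
            return False
    return True

def objective(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  2x1 + x2 <= 5,  x >= 0
A, b, c = [[1, 1], [2, 1]], [4, 5], [3, 2]
x = [1.0, 3.0]          # a feasible point (the optimal vertex here)
val = objective(c, x)   # 3*1 + 2*3 = 9
```

The dual of a packing LP is a covering LP (minimize b.y subject to A^T.y >= c, y >= 0), which is why the two classes are treated together.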
Google Scholar • Article
April 8, 2019
Full packet capture (FPC) consists of capturing all packets and storing them in permanent storage to enable offline forensic analysis. FPC, however, suffers from a scalability issue: at today's typical traffic rates of 10 Gbps or above, it either becomes intractable or requires highly expensive processing and storage hardware, which rapidly decreases the economic viability of the technology.
The first piece of good news is that for many practical cases, full packet capture is not necessary. This rationale stems from the well-known law of heavy-tailed traffic: from an analysis standpoint, most of the interesting features in network traffic (a network attack, for example) are contained in a very small fraction of it. Further, in some cases full packet capture is not only unnecessary but can represent a liability, as sensitive information is kept in non-ephemeral storage. The second piece of good news is that all the heavy lifting done by Zeek in processing network traffic can be leveraged to overcome both the intractability and the liability problems. Indeed, Zeek can be brought into the loop to perform selective packet capture (SPC), a process by which the Zeek workers themselves decide which traffic must be stored to disk in a selective and fine-grained manner.
In this talk Reservoir Labs will present a workflow for performing selective packet capture with the Zeek sensor at very high traffic rates. The workflow allows Zeek scripts to directly trigger packet captures based on real-time analysis of the traffic itself. We will describe the key data structures needed to perform this task efficiently and introduce several Zeek scripts and use cases illustrating how SPC can capture just the packets necessary for meaningful forensic analysis while minimizing exposure to the liability risk.
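The core SPC decision logic can be sketched in a few lines. This is Python pseudologic, not a Zeek script or Zeek's API; the method names and the alert-driven trigger condition are hypothetical stand-ins for the real-time analysis described above:

```python
# Illustrative sketch of selective packet capture: the analysis engine
# flags connections of interest, and only packets belonging to flagged
# connections are written to storage. Not Zeek code; names are invented.

class SelectiveCapture:
    def __init__(self):
        self.flagged = set()   # connection IDs marked for capture
        self.stored = []       # packets actually kept (stand-in for disk)

    def on_alert(self, conn_id):
        """Called when an analysis script raises an alert on a connection."""
        self.flagged.add(conn_id)

    def on_packet(self, conn_id, packet):
        """Store the packet only if its connection has been flagged."""
        if conn_id in self.flagged:
            self.stored.append((conn_id, packet))

spc = SelectiveCapture()
spc.on_packet('c1', b'hello')       # not yet flagged: not stored
spc.on_alert('c1')                  # analysis flags connection c1
spc.on_packet('c1', b'suspicious')  # now captured
spc.on_packet('c2', b'bulk data')   # never flagged: not stored
```

Because only flagged traffic reaches storage, both the storage-scalability problem and the liability of retaining benign sensitive data are reduced, at the cost of losing packets that arrive before a connection is flagged.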
Contact us to receive a copy of this presentation or for a demonstration.