New York Office
632 Broadway, Suite 803
New York, New York 10012
Portland Office
4380 SW Macadam Ave
Portland, Oregon 97239
Enabling advanced computing hardware.
Explore our polyhedral technology
Reservoir Labs provides clients with services and solutions powered by our knowledge of the mathematics underpinning polyhedral optimization. Our team has a record of innovating polyhedral-model algorithms that solve the challenges of generating optimized mappings of complex applications to complex computing targets. We have engineered these algorithms into solutions delivered to customers.
The polyhedral model of programs is geometric. It represents the iteration spaces of programs as high-dimensional polyhedra, and dependences as relations within the cross products of these polyhedra. This representation, first articulated by Paul Feautrier in the early 1990s, improves on earlier parallel program representations both in its precision and because key optimizations for high-performance, efficient execution on novel processors can be framed analytically. With an analytical framing, the search for an optimal mapping can be carried out by efficient mathematical optimization libraries.
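To make the geometric view concrete, here is a minimal, illustrative sketch (not Reservoir's R-Stream API; all names are our own): the iteration space of a triangular loop nest is exactly the set of integer lattice points of a 2-D polyhedron defined by affine inequalities, and dependences are relations between pairs of those points.

```python
# Illustrative sketch of the polyhedral view of this loop nest:
#   for i in range(N):
#       for j in range(i + 1):
#           A[i] += B[i][j]
# Its iteration space is {(i, j) : 0 <= j <= i <= N-1}: the integer
# lattice points of a 2-D polyhedron cut out by affine inequalities.

N = 4  # assumed problem size for the example

def in_iteration_space(i, j, n=N):
    """Affine constraints of the triangular loop nest above."""
    return 0 <= j <= i <= n - 1

# Enumerate the lattice points of the polyhedron.
points = [(i, j) for i in range(N) for j in range(N)
          if in_iteration_space(i, j)]
print(points)

# Each statement instance is one point; dependences between instances
# are relations on pairs of points, i.e. subsets of the cross product
# of the iteration-space polyhedra.
```

Because both the iteration space and the dependences are described by affine constraints, questions such as "which schedules preserve all dependences?" become linear-programming problems rather than ad hoc pattern matching.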
Reservoir’s work on polyhedral optimization began in the early 2000s under DARPA’s Polymorphous Computing Architecture (PCA) program, which sought to map complex signal processing algorithms to computational accelerators. Reservoir’s initial work focused on developing a practical, integrated source-to-source mapping solution. We developed both general algorithms to complete the pipeline and specific solutions addressing the needs of the efficient processors being developed under PCA.
Reservoir’s work continues today: we develop new optimizations for mapping deep learning and exascale applications and broaden the applicability of the polyhedral model. The original optimizations developed for PCA are now relevant to modern accelerators such as GPUs and deep learning hardware. Recent optimizations specialize to particular application domains (e.g., neural networks) and provide features needed by new processors, such as the generation of power controls and support for dataflow execution models. We have also developed optimizations that improve scalability.
Reservoir’s polyhedral algorithms, patents, and code are available for license. They can be delivered through technology-enabled services projects that integrate them with your compiler, or as a solution for your application and architecture based on our R-Stream platform.
Reservoir has developed algorithms that improve the scalability of polyhedral scheduling. These include engineering within classical affine scheduling to reduce the number of constraints and to break the problem into smaller subproblems that are solved independently. Reservoir has also developed approximations that reduce the dimensionality of the problems. These scalability improvements help apply the polyhedral model to modern challenges such as computing on tensors and deep, hierarchical architecture targets.
Reservoir’s polyhedral optimizations are designed for performance. They include parallelization optimizations that jointly optimize parallelism, locality, contiguity, vectorization, and data layout (JPLCVD), as well as unique tiling optimizations for imperfect loop nests based on analytic counting techniques. These optimizations can target hierarchical and hybrid machines, performing tailored optimizations at each level of the hierarchy.
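As a hedged illustration of one such transformation, the sketch below shows rectangular loop tiling, a standard locality optimization that polyhedral mappers apply automatically (the names and tile size here are our own assumptions, not R-Stream's). The tiled nest visits exactly the same iterations as the original nest, but in tile-major order, which improves cache reuse.

```python
# Illustrative sketch of rectangular loop tiling on a 2-D iteration
# space. Tiling reorders execution to visit one T x T tile at a time,
# improving data locality; a polyhedral mapper derives such tilings
# automatically and checks they preserve all dependences.

N, T = 8, 4  # assumed problem size and tile size

# Original lexicographic execution order of a doubly nested loop.
original = [(i, j) for i in range(N) for j in range(N)]

# Tiled order: outer "tile" loops step by T, inner "point" loops
# sweep the iterations inside each tile.
tiled = [
    (i, j)
    for ti in range(0, N, T)              # tile loops
    for tj in range(0, N, T)
    for i in range(ti, min(ti + T, N))    # point loops within a tile
    for j in range(tj, min(tj + T, N))
]

# The transformation is a bijection on the iteration space: the same
# set of points is executed, only in a different order.
assert sorted(tiled) == sorted(original)
print(len(tiled))
```

The same idea generalizes to imperfect loop nests and hierarchical targets, where a different tile shape and size may be chosen at each level of the memory or processor hierarchy.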
Reservoir has experience engineering complete solutions around the polyhedral model. This includes engineering the polyhedral model within a classical compiler that feeds source code regeneration (“unparsing”). We have implemented efficient intermediate representations and specialized testing procedures for polyhedral optimizations. Our solutions use, and ship with, reliable third-party mathematical optimization solvers.
Reservoir has developed optimization algorithms that improve the power efficiency of compiled code, beyond improving its performance. These include forms of energy-proportional scheduling at the microarchitectural level and automatic generation of voltage and clock controls.
Reservoir has extended the polyhedral model to perform optimizations beyond its original domain of affine loop nests. We have special support for optimizing machine learning/neural network codes and sparse tensor codes.
Reservoir has developed polyhedral optimizations targeted at specialized processors such as GPUs, wide-SIMD processors, spatial arrays, and systolic arrays.
For more information about Reservoir products or to purchase, please contact us.