The polyhedral model of programs is geometric. It represents the iteration spaces of programs as high-dimensional polyhedra, and dependencies as relations within the cross products of these polyhedra. This representation, first articulated by Paul Feautrier in the early 1990s, improves on earlier parallel program representations through its precision, and because key optimizations for high-performance, efficient execution on novel processors can be framed analytically. With an analytical framing, the search for an optimal mapping can be carried out by efficient mathematical optimization libraries.
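As a concrete illustration (a minimal sketch, not drawn from Reservoir's code), consider the loop nest below. Its iteration space is the set of integer points satisfying the affine loop bounds, and the flow dependence between statement instances is an affine relation on that set, which is the information a polyhedral optimizer reasons about analytically.

```c
/* Illustrative example only; the function name and arrays are hypothetical.
 *
 * Iteration space of the nest, as a polyhedron of integer points:
 *   D = { (i, j) : 1 <= i < N, 0 <= j < M }
 *
 * The write to A[i][j] and the read of A[i-1][j] induce a flow dependence,
 * an affine relation on D x D:
 *   { (i, j) -> (i+1, j) : (i, j) in D and (i+1, j) in D }
 *
 * From this relation an optimizer can prove, for instance, that the j loop
 * carries no dependence and may run in parallel, while the i loop may not.
 */
void smooth(int N, int M, double A[N][M], const double B[N][M])
{
    for (int i = 1; i < N; i++)
        for (int j = 0; j < M; j++)
            A[i][j] = 0.5 * (A[i - 1][j] + B[i][j]);
}
```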
Reservoir’s work on polyhedral optimization began in the early 2000s with work for DARPA’s Polymorphous Computing Architecture (PCA) program, which sought to map complex signal processing algorithms to computational accelerators. Reservoir’s initial work focused on developing a practical, integrated source-to-source mapping solution. We developed both general algorithms to complete the pipeline and specific solutions addressing the needs of the efficient processors being developed under PCA.
Reservoir’s work continues today in developing new optimizations for mapping deep learning and exascale applications, and in broadening the applicability of the polyhedral model. The original optimizations developed for PCA are now relevant for modern accelerators such as GPUs and deep learning hardware. Recent optimizations specialize to particular application domains (e.g., neural networks) and provide features needed for new processors, such as the generation of power controls and support for dataflow execution models. We have also developed optimizations that improve scalability.
Reservoir’s polyhedral algorithms, patents, and code are available for license. They can be delivered through technology-enabled services projects that integrate them with your compiler, or as a solution for your application and architecture built on our R-Stream platform.