## Combinatorial Multigrid: Advanced Preconditioners For Ill-Conditioned Linear Systems

September 24, 2019

The Combinatorial Multigrid (CMG) technique is a practical and adaptable solver and combinatorial preconditioner for certain classes of large, sparse systems of linear equations. CMG is similar to Algebraic Multigrid (AMG) but replaces large groupings of fine-level variables with a single coarse-level one, resulting in simple and fast interpolation schemes. These schemes further provide control over the refinement strategies at different levels of the solver hierarchy depending on the condition number of the system being solved [1]. While many existing solvers can handle large, sparse systems with relatively low complexity, inversion may require O(n²) space; if instead we know that a linear operator has ñ = O(n) nonzero elements, we would like to use only O(n) space in order to reduce communication as much as possible.
Being able to invert sparse linear systems of equations asymptotically as fast as the values can be read from memory has been identified by the Defense Advanced Research Projects Agency (DARPA) and the Department of Energy (DOE) as increasingly necessary for scalable solvers and energy-efficient algorithms [2], [3] in scientific computing. Further, as industry and government agencies move toward exascale, fast solvers and communication avoidance will become even more necessary [4], [5]. In this paper, we present an optimized implementation of Combinatorial Multigrid in C using PETSc and analyze the solution of various systems using the CMG approach as a preconditioner on much larger problems than have been presented thus far. We compare the number of iterations, setup times, and solution times against other popular preconditioners for such systems, including Incomplete Cholesky and a multigrid approach in PETSc on common problems, further exhibiting the superior performance of CMG.
Index Terms—combinatorial algorithms, spectral support solver, linear systems, fast solvers, preconditioners, multigrid, graph Laplacian, benchmarking, iterative solvers

Google Scholar • Article
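
The aggregation idea at the heart of the abstract can be illustrated with a small two-level sketch. The greedy heaviest-edge clustering below is an illustrative stand-in for CMG's actual clustering criterion, and the piecewise-constant interpolation P yields the Galerkin coarse operator PᵀLP; the graph and weights are invented for the example:

```python
def laplacian(n, edges):
    """Build a graph Laplacian as a dict-of-dicts from weighted edges."""
    L = {i: {i: 0.0} for i in range(n)}
    for u, v, w in edges:
        L[u][v] = L[u].get(v, 0.0) - w
        L[v][u] = L[v].get(u, 0.0) - w
        L[u][u] += w
        L[v][v] += w
    return L

def greedy_clusters(n, edges):
    """Pair each vertex with its heaviest still-unmatched neighbor
    (an illustrative rule, not CMG's clustering criterion)."""
    cluster, next_id = {}, 0
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in cluster and v not in cluster:
            cluster[u] = cluster[v] = next_id
            next_id += 1
    for u in range(n):  # leftover vertices become singleton clusters
        if u not in cluster:
            cluster[u] = next_id
            next_id += 1
    return cluster, next_id

def coarsen(L, cluster, nc):
    """Galerkin coarse operator P^T L P for piecewise-constant P:
    each coarse entry accumulates the fine entries of its clusters."""
    Lc = {i: {} for i in range(nc)}
    for u, row in L.items():
        for v, w in row.items():
            cu, cv = cluster[u], cluster[v]
            Lc[cu][cv] = Lc[cu].get(cv, 0.0) + w
    return Lc

# A 4-vertex path graph 0-1-2-3 with unit edge weights.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
L = laplacian(4, edges)
cluster, nc = greedy_clusters(4, edges)
Lc = coarsen(L, cluster, nc)
```

The coarse operator is again a graph Laplacian (rows sum to zero), which is what lets the same machinery recurse down the multigrid hierarchy.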

## Systems and Methods for Solving Unrestricted Incremental Constraint Problems

September 3, 2019

We present the architecture of a high-performance constraint solver R-Solve that extends the gains made in SAT performance over the past fifteen years on static decision problems to problems that require on-the-fly adaptation, solution space exploration and optimization. R-Solve facilitates collaborative parallel solving and provides an efficient system for unrestricted incremental solving via Smart Repair. R-Solve can address problems in dynamic planning and constrained optimization involving complex logical and arithmetic constraints.

Google Scholar • Article
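
The notion of unrestricted incremental solving can be illustrated with a toy assumption-based interface, a common mechanism in incremental SAT solvers. This sketch is not R-Solve's Smart Repair, and the brute-force search is viable only for tiny instances; it only shows how constraints persist across solves while assumptions vary per call:

```python
from itertools import product

class IncrementalSolver:
    """Toy incremental SAT interface. Clauses are lists of nonzero ints:
    v means variable v is true, -v means variable v is false."""

    def __init__(self, nvars):
        self.nvars = nvars
        self.clauses = []

    def add_clause(self, clause):
        self.clauses.append(clause)  # constraints persist across solves

    def solve(self, assumptions=()):
        """Return a satisfying assignment {var: bool} or None,
        restricted to models consistent with the assumptions."""
        for bits in product([False, True], repeat=self.nvars):
            model = {v + 1: bits[v] for v in range(self.nvars)}
            if any(model[abs(a)] != (a > 0) for a in assumptions):
                continue
            if all(any(model[abs(l)] == (l > 0) for l in c)
                   for c in self.clauses):
                return model
        return None

s = IncrementalSolver(3)
s.add_clause([1, 2])              # x1 or x2
s.add_clause([-1, 3])             # x1 implies x3
m1 = s.solve()                    # satisfiable
m2 = s.solve(assumptions=[-2])    # force x2 false without re-encoding
s.add_clause([-3])                # later refinement: x3 must be false
m3 = s.solve(assumptions=[1])     # now unsatisfiable under x1 = true
```

Production incremental solvers keep learned clauses and heuristic state across such calls, which is where the performance gains over repeated from-scratch solving come from.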

## Polyhedral Tensor Schedulers

July 15, 2019

Compiler optimizations based on the polyhedral model are able to automatically parallelize and optimize loop-based code. We acknowledge that while polyhedral techniques can represent a broad set of program transformations, important classes of programs could be parallelized just as well using less general but more tractable techniques. We apply this general idea to the polyhedral scheduling phase, which is one of the typical performance bottlenecks of polyhedral compilation.
We focus on a class of programs in which enough parallelism is already exposed in the source program, and which includes Deep Learning layers and combinations thereof, as well as multilinear algebra kernels. We call these programs "tensor codes", and consequently call the tractable polyhedral scheduling techniques presented here "tensor schedulers".
The general idea is that we can significantly speed up polyhedral scheduling by restricting the set of transformations considered. As an extra benefit, having a small search space allows us to introduce non-linear cost models, which fills a gap in polyhedral cost models.
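
As a toy illustration of restricting the search space, the sketch below considers only loop permutations of a matmul-like nest and ranks them with a simple multiplicative (hence non-linear) cost model. The access patterns, penalty weights, and row-major assumption are invented for the example and are not the paper's model:

```python
from itertools import permutations

# Each array access lists the loop indices it uses, fastest-varying last
# (row-major layout assumed): C[i][j] += A[i][k] * B[k][j].
ACCESSES = {"C": ("i", "j"), "A": ("i", "k"), "B": ("k", "j")}

def cost(order):
    """Lower is better. Multiply a penalty for each access stream that is
    strided in the innermost loop, and a discount for streams that are
    invariant in it -- a multiplicative, i.e. non-linear, model."""
    inner = order[-1]
    penalty = 1.0
    for idxs in ACCESSES.values():
        if idxs[-1] != inner:   # fastest-varying index is not innermost
            penalty *= 4.0      # strided stream
        if inner not in idxs:   # access invariant in the inner loop
            penalty *= 0.5      # register-resident, cheap
    return penalty

# The "scheduler": exhaustive search over the tiny restricted space.
best = min(permutations("ijk"), key=cost)
```

Because the restricted space has only 3! = 6 candidates, an exhaustive search with an arbitrary non-linear model is trivial, whereas general polyhedral schedulers must keep their cost functions linear to stay inside ILP solvers.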

## Systems and Methods for Multiresolution Parsing

June 4, 2019

A multiresolution parser (MRP) can selectively extract one or more information units from a dataset based on the available processing capacity and/or the arrival rate of the dataset. Should any of these parameters change, the MRP can adaptively change the information units to be extracted such that the benefit or value of the extracted information is maximized while minimizing the cost of extraction. This tradeoff is facilitated, at least in part, by an analysis of the spectral energy of the datasets expected to be processed by the MRP. The MRP can also determine its state after a processing iteration and use that state information in subsequent iterations to minimize the required computations in such subsequent iterations, so as to improve processing efficiency.

Google Scholar • Article
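
The value-versus-cost tradeoff described above can be sketched as a greedy selection of information units under a processing budget. The unit names, values, costs, and the value-density heuristic are invented for illustration and are not the MRP's actual mechanism:

```python
def select_units(units, budget):
    """Greedily pick information units by value density (value / cost)
    until the processing budget is exhausted. units is a list of
    (name, value, cost) tuples; returns the chosen names."""
    chosen, spent = [], 0.0
    for name, value, cost in sorted(units, key=lambda u: -u[1] / u[2]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical information units for a document stream.
units = [("headers", 5.0, 1.0), ("body_text", 8.0, 4.0),
         ("tables", 6.0, 3.0), ("images", 3.0, 6.0)]

fast = select_units(units, budget=4.0)   # tight budget: dense units only
full = select_units(units, budget=14.0)  # ample budget: everything fits
```

When the arrival rate rises (shrinking the effective budget), the same rule automatically narrows extraction to the highest-density units, which mirrors the adaptive behavior the abstract describes.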

## On the Bottleneck Structure of Positive Linear Programming

May 23, 2019

Positive linear programming (PLP), also known as packing and covering linear programs, is an important class of problems frequently found in fields such as network science, operations research, and economics. In this work we demonstrate that all PLP problems can be represented using a network structure, revealing key structural insights that lead to new polynomial-time algorithms.

Google Scholar • Article
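
One way to see a bottleneck structure in a packing-type problem is max-min fair rate allocation by progressive filling: the most congested link saturates first and fixes the rates of every flow crossing it, then the process repeats on the residual network. The topology and capacities below are invented for illustration; this is a classical algorithm, not the paper's construction:

```python
def max_min_fair(links, flows):
    """links: name -> capacity; flows: name -> list of links it crosses.
    Returns name -> rate under max-min fairness via progressive filling."""
    rate = {}
    cap = dict(links)      # remaining capacity per link
    active = dict(flows)   # flows whose rate is not yet fixed
    while active:
        # Bottleneck link: least remaining capacity per active flow.
        share, bott = min(
            (cap[l] / sum(1 for p in active.values() if l in p), l)
            for l in cap
            if any(l in p for p in active.values()))
        # Flows through the bottleneck are fixed at its fair share.
        for f in [f for f, p in active.items() if bott in p]:
            rate[f] = share
            for l in active[f]:
                cap[l] -= share
            del active[f]
    return rate

links = {"l1": 1.0, "l2": 2.0}
flows = {"f1": ["l1"], "f2": ["l1", "l2"], "f3": ["l2"]}
rates = max_min_fair(links, flows)
```

Here l1 is the first bottleneck (two flows share capacity 1.0), fixing f1 and f2 at 0.5; the leftover capacity on l2 then goes entirely to f3. The order in which bottlenecks resolve is exactly the kind of network structure the abstract alludes to.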