Final Report on the R-Stream 3.0 Compiler



Publication Source: DTIC AFRL-RI-RS-TR-2008-160, Reservoir Labs, Inc.

This report describes Reservoir Labs' R-Stream compiler and its mapper component, developed under DARPA's Polymorphous Computing Architecture (PCA) program. Mapping in this context is the process of transforming sequential programs into efficient parallel code for execution on PCAs, a new generation of high-performance architectures that offer very high potential computational efficiency (FLOPS/W) while remaining programmable.

Automatic mapping is a critical technology for realizing the potential of these architectures, because the chips achieve high FLOPS/W and programmability simultaneously only by shifting complexity onto the programming task. The goal of automatic mapping is to make programmers productive: to broaden the set of programmers who can exploit the architectures beyond the few gurus who hand-code applications or limited libraries, and to make even more complex variants of the architectures usable by those gurus.



Compilers for Embedded Systems



Publication Source: Embedded Systems Week, Grenoble, France, October 2009

This short presentation outlines the design of the R-Stream compiler. The slides are available on ResearchGate.

Extended Static Control Programs as a Programming Model for Accelerators; A Case Study: Targeting ClearSpeed CSX700 with the R-Stream Compiler



Publication Source: The Workshop on Programming Models for Emerging Architectures (PMEA), Raleigh, NC, USA, 2009

Classical compiler technologies are failing to bridge the gap between high performance and productivity in the era of multicore and massively parallel architectures. This has produced two complementary trends. First, many new languages have arisen to let the user express parallelism conveniently. However, those languages may be target-specific (e.g., CUDA or Cn), and even when expressing parallelism is more convenient (e.g., in Chapel or X10), they still leave the programmer the complex task of extracting it. Second, high-level abstractions like SPIRAL offer performance portability at the price of versatility. In this paper we discuss an alternative: we consider a restricted class of classical programs, known as Extended Static Control Programs, as a programming model. We show that by following simple rules while writing sequential programs in ordinary languages like C, our R-Stream® High-Level Compiler can achieve complex mappings to parallel architectures fully automatically. We present our effort to target the ClearSpeed CSX700 architecture, with specific contributions to avoid costly control overhead.
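
To make the programming model concrete, here is a minimal sketch (our illustration, not an example from the paper) of a static control loop nest in plain C. The loop bounds and array subscripts are affine functions of the surrounding loop counters and of a size parameter N, which is what lets a polyhedral mapper such as R-Stream analyze dependences exactly and generate parallel code without programmer annotations:

    /* Plain sequential C: an affine loop nest (matrix multiply).
       Bounds and subscripts are affine in i, j, k and the
       parameter N, so the dependence analysis is exact and the
       mapping to a parallel target can be fully automatic. */
    void matmul(int N, double A[N][N], double B[N][N], double C[N][N])
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                C[i][j] = 0.0;
                for (int k = 0; k < N; k++)
                    C[i][j] += A[i][k] * B[k][j];  /* affine accesses */
            }
    }

Code with data-dependent control flow or pointer-based subscripts would fall outside this class, which is exactly the trade-off the programming model makes: a modest restriction on coding style in exchange for fully automatic mapping.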

Productivity via Automatic Code Generation for PGAS Platforms with the R-Stream Compiler



Publication Source: The Workshop on Asynchrony in the PGAS Programming Model (APGAS), Yorktown Heights, NY, 2009

Emerging computing architectures present concurrent, heterogeneous, and hierarchical organizations. Explicit management of distributed memories, bulk communication, and careful scheduling of data and computation for locality of reference appear to be necessary to achieve high efficiency relative to peak performance. In some cases, the architectures present mixed execution models. We present the design of a software mapping tool, the R-Stream® High-Level Compiler, which permits a simplified programming model: the programmer writes abstracted, programmer-friendly expressions of algorithms, and an automatic procedure produces a mapping that conforms to the requirements of the emerging architectures.
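
As an illustration of the intended input form (a sketch of ours, not code from the paper), the programmer writes the algorithm sequentially; partitioning the arrays across distributed memories, inserting the bulk halo exchanges, and scheduling the sweeps for locality are left to the mapper rather than coded by hand:

    /* Sequential 3-point Jacobi sweep: the programmer-friendly form.
       On a PGAS target, the mapper, not the programmer, would decide
       how to partition 'a' and 'b' across distributed memories,
       insert boundary (halo) exchanges, and schedule for locality. */
    void jacobi(int n, int steps, double a[n], double b[n])
    {
        for (int t = 0; t < steps; t++) {
            for (int i = 1; i < n - 1; i++)
                b[i] = 0.5 * (a[i - 1] + a[i + 1]);
            for (int i = 1; i < n - 1; i++)  /* copy back for next sweep */
                a[i] = b[i];
        }
    }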

Towards a Complete, Multi-level Cognitive Architecture



Publication Source: 8th International Conference on Cognitive Modeling, Ann Arbor, Michigan, United States, July 2007

The paper describes a novel approach to cognitive architecture exploration in which multiple cognitive architectures are integrated in their entirety. The goal is to significantly increase the application breadth and utility of cognitive architectures in general. The resulting architecture favors a breadth-first rather than depth-first approach to cognitive modeling by focusing on matching the broad power of human cognition rather than any specific data set. It uses human cognition as a functional blueprint for meeting the requirements for general intelligence. For example, a chief design principle is inspired by the power of human perception and memory to reduce the effective complexity of problem solving. Such complexity reduction is reflected in an emphasis on integrating subsymbolic and statistical mechanisms with symbolic ones. The architecture realizes a "cognitive pyramid" in which the scale and complexity of a problem are successively reduced via three computational layers: Proto-cognition (information filtering and clustering), Micro-cognition (memory retrieval modulated by expertise), and Macro-cognition (knowledge-based reasoning). The consequence of this design is that knowledge-based reasoning is used primarily for non-routine, novel situations; more familiar situations are handled by experience-based memory retrieval. Filtering and clustering improve overall scalability by reducing the elements to be considered by higher levels. The paper describes the design of the architecture, two prototype explorations, and an evaluation of the architecture and its limitations.
