Towards a Complete, Multi-level Cognitive Architecture



Publication Source: 8th International Conference on Cognitive Modeling, Ann Arbor, Michigan, United States, July 2007

The paper describes a novel approach to cognitive architecture exploration in which multiple cognitive architectures are integrated in their entirety. The goal is to significantly increase the application breadth and utility of cognitive architectures generally. The resulting architecture favors a breadth-first rather than depth-first approach to cognitive modeling, focusing on matching the broad power of human cognition rather than any specific data set. It uses human cognition as a functional blueprint for meeting the requirements of general intelligence. For example, a chief design principle is inspired by the power of human perception and memory to reduce the effective complexity of problem solving. Such complexity reduction is reflected in an emphasis on integrating subsymbolic and statistical mechanisms with symbolic ones. The architecture realizes a “cognitive pyramid” in which the scale and complexity of a problem are successively reduced via three computational layers: Proto-cognition (information filtering and clustering), Micro-cognition (memory retrieval modulated by expertise), and Macro-cognition (knowledge-based reasoning). A consequence of this design is that knowledge-based reasoning is used primarily for non-routine, novel situations; more familiar situations are handled by experience-based memory retrieval. Filtering and clustering improve overall scalability by reducing the number of elements to be considered by higher levels. The paper describes the design of the architecture, two prototype explorations, and an evaluation of the approach and its limitations.
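As a loose illustration of the layered control flow this abstract describes, the sketch below routes familiar situations to memory retrieval and reserves knowledge-based reasoning for novel ones. Every name, data structure, and threshold is hypothetical; this is not the paper's implementation.

```python
# Hypothetical sketch of the three-layer "cognitive pyramid"; every name,
# representation, and threshold here is illustrative, not the paper's design.

def proto_cognition(percepts, salience=0.5):
    """Filter low-salience percepts and cluster the rest by kind,
    shrinking the problem handed to higher layers."""
    clusters = {}
    for p in percepts:
        if p["salience"] > salience:
            clusters.setdefault(p["kind"], []).append(p["value"])
    return clusters

def similarity(a, b):
    """Crude situation similarity: Jaccard overlap of cluster labels."""
    ka, kb = set(a), set(b)
    return len(ka & kb) / len(ka | kb) if ka | kb else 1.0

def micro_cognition(situation, memory, threshold=0.8):
    """Expertise-based retrieval: reuse the response of the closest
    stored case if it is similar enough."""
    scored = [(similarity(case, situation), response)
              for case, response in memory]
    score, response = max(scored, key=lambda t: t[0], default=(0.0, None))
    return response if score >= threshold else None

def macro_cognition(situation):
    """Fallback knowledge-based reasoning for novel situations
    (stubbed as a placeholder plan)."""
    return {"plan": "deliberate", "over": sorted(situation)}

def cognize(percepts, memory):
    situation = proto_cognition(percepts)
    return micro_cognition(situation, memory) or macro_cognition(situation)

percepts = [{"kind": "radar", "salience": 0.9, "value": "contact"},
            {"kind": "noise", "salience": 0.1, "value": "static"}]
memory = [({"radar": ["contact"]}, {"plan": "track", "over": ["radar"]})]
print(cognize(percepts, memory))   # reuses the stored "track" plan
```

The layering is visible in `cognize`: Macro-cognition runs only when retrieval fails, over a situation whose size has already been reduced by filtering and clustering.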

Evaluation of Stream Virtual Machine on Raw Processor



Publication Source: The 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS), Long Beach, CA, USA, 2007

Stream processing exploits properties of stream applications such as their parallelism and throughput-oriented nature. One recent approach is the community-supported Morphware stable interface (MSI), a stable abstraction between high-level compilers (HLC) and low-level, architecture-specific compilers (LLC). We focus on one part of the MSI, the stream virtual machine (SVM). We implemented a high-level compiler that produces SVM output, together with an SVM implementation that uses the Raw compiler as the LLC and an accompanying runtime library. Using these compilers, we implemented stream applications such as matrix multiplication, an FIR filter bank, and a ground moving target indicator (GMTI), then optimized the applications and analyzed the results. The results show that the SVM framework is generally suitable for streaming applications on the Raw processor.
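As a rough illustration of the streaming model that the SVM abstracts (kernels connected by FIFO streams), here is a minimal sketch; the class and function names are assumptions for exposition, not the MSI or SVM API.

```python
# Minimal sketch of a stream program: kernels connected by FIFO streams.
# The names here illustrate the programming model only; they are not the
# SVM or MSI API.
from collections import deque

class Stream:
    def __init__(self):
        self.fifo = deque()
    def push(self, x):
        self.fifo.append(x)
    def pop(self):
        return self.fifo.popleft()
    def __len__(self):
        return len(self.fifo)

def fir_kernel(inp, out, taps):
    """Consume samples from `inp`, produce FIR-filtered samples on `out`."""
    window = [0.0] * len(taps)
    while len(inp):
        window = [inp.pop()] + window[:-1]
        out.push(sum(t * w for t, w in zip(taps, window)))

# Wire two filter stages into a tiny pipeline and run it to completion.
src, mid, sink = Stream(), Stream(), Stream()
for i in range(16):
    src.push(float(i))
fir_kernel(src, mid, taps=[0.5, 0.5])     # smoothing stage
fir_kernel(mid, sink, taps=[1.0, -1.0])   # differencing stage
print([round(sink.pop(), 2) for _ in range(len(sink))])
```

In the actual framework, the HLC would emit a kernel/stream graph of this general shape in SVM form, and the LLC (here, the Raw compiler) would be responsible for mapping the kernels onto the hardware.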

Alef: A Parallel SAT Solver for HPC Hardware



Publication Source: The ACM/IEEE SC Conference on High Performance Networking and Computing, Tampa, FL, USA, 2006

Solvers for the Boolean satisfiability problem (SAT) are an enabling technology for a diverse set of applications, including formal verification of both hardware and software, mathematics, and planning. However, solver performance, measured in terms of speed and maximum problem size, is a limiting factor in applying SAT to real-world problems. We are developing a parallel SAT solver, Alef, to take advantage of HPC hardware. The Alef parallel SAT solver utilizes algorithms and heuristics that improve its performance over existing approaches. We are developing the solver to run well on commercially available HPC hardware. Our analysis shows that our algorithms, combined with the low message latency of supercomputers, are likely to produce a significant performance improvement over existing solvers in terms of speed and maximum problem size. Our poster will describe the algorithms we are using, illustrate our approach to problem partitioning, and present our performance analysis to date.
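For background on the general idea of partitioning a SAT instance across workers (illustrative only; the poster's Alef algorithms and heuristics are not reproduced here), a solver can split on one variable and solve the two restricted subproblems in parallel:

```python
# Illustrative guiding-path partitioning for parallel SAT; this is a naive
# DPLL for exposition, not Alef's algorithms or heuristics. Formulas are
# lists of clauses; clauses are lists of nonzero ints (DIMACS style).
from concurrent.futures import ProcessPoolExecutor

def simplify(clauses, lit):
    """Assign `lit` true: drop satisfied clauses, strip the negated literal.
    Returns None on an immediate conflict (empty clause)."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses):
    """Basic DPLL with unit propagation; returns True iff satisfiable."""
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = simplify(clauses, unit)
        if clauses is None:
            return False
    if not clauses:
        return True
    var = abs(clauses[0][0])          # naive branching choice
    return any(branch is not None and dpll(branch)
               for branch in (simplify(clauses, var), simplify(clauses, -var)))

def parallel_sat(clauses, split_var):
    """Partition on one variable and solve both subproblems in parallel."""
    branches = [b for lit in (split_var, -split_var)
                if (b := simplify(clauses, lit)) is not None]
    with ProcessPoolExecutor() as pool:
        return any(pool.map(dpll, branches))

if __name__ == "__main__":
    cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
    print(parallel_sat(cnf, split_var=1))   # True: e.g. x1=T, x2=F, x3=T
```

Real parallel solvers refine this basic split with guiding paths, clause sharing, and load balancing, which is where the low message latency of HPC interconnects matters.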

R-Stream: A Parametric High Level Compiler



Publication Source: High Performance Embedded Computing Workshop (HPEC), Lexington, MA, USA, 2006

This presentation describes the R-Stream compiler. It motivates high-level, source-to-source optimization, identifies the process of raising code to the Generalized Dependence Graph (GDG), and then presents the techniques for optimization within the GDG. Finally, it covers the techniques for code generation from the GDG: polyhedral scanning and, importantly, the process of generating "human readable" C that allows the low-level compiler to optimize further.
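To make the "human readable" C output concrete, here is a toy emitter that prints a tiled matrix-multiply loop nest. The fixed square tiling stands in for real polyhedral scanning, and the sketch assumes the problem size is evenly divisible by the tile size.

```python
# Toy emitter for "human readable" tiled C: a fixed square tiling of a
# matrix-multiply loop nest. Real polyhedral scanning derives such loops
# from the GDG; this sketch assumes `tile` divides `n` evenly.

def emit_tiled_matmul(n, tile):
    assert n % tile == 0, "sketch assumes the tile size divides n"
    return "\n".join([
        f"for (int ii = 0; ii < {n}; ii += {tile})",
        f"  for (int jj = 0; jj < {n}; jj += {tile})",
        f"    for (int kk = 0; kk < {n}; kk += {tile})",
        f"      for (int i = ii; i < ii + {tile}; i++)",
        f"        for (int j = jj; j < jj + {tile}; j++)",
        f"          for (int k = kk; k < kk + {tile}; k++)",
        f"            C[i][j] += A[i][k] * B[k][j];",
    ])

print(emit_tiled_matmul(n=256, tile=32))
```

Keeping the generated loops in this plain, idiomatic form is what lets the low-level compiler apply its own optimizations to the result.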

Enabling Cognitive Architectures for UAV Mission Planning



Publication Source: The High Performance Embedded Computing Workshop (HPEC), Lexington, MA, USA (Best Papers award session), September 2006

The operational performance desired for autonomous vehicles on the battlefield requires new approaches to algorithm design and computation. Our design, the Polymorphic Cognitive Agent Architecture (PCAA), is a hardware-software system that supports the requirements for implementing a dynamic multi-unmanned-aerial-vehicle (UAV) mission-planning application using cognitive architectures. We describe the requirements of our application, discuss the challenges of solving this problem with current “non-cognitive” algorithms, and explain how these challenges motivate our experiment.
