Compiler Issues

Processors In Memory (PIM) technology integrates processor logic and DRAM on the same chip. One interesting use of these chips is to replace the main memory chips in a workstation or a server, where the PIM chips act as co-processors in memory that execute code when signaled by the host (main) processor. This class of architectures provides a heterogeneous mix of processors: host and memory processors. Host processors are more powerful but see a higher latency to memory; memory processors are typically less powerful but see a lower latency to memory. A natural question is how to program these architectures automatically.
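
To make the co-processor model concrete, the sketch below shows how a host program might hand a memory-bound kernel to a memory processor and wait for it to finish. The pim_dispatch() and pim_wait() calls are hypothetical placeholders rather than the API of any particular PIM system; to keep the sketch self-contained and runnable, pim_dispatch() simply runs the kernel locally.

    #include <stdio.h>

    typedef void (*kernel_fn)(double *data, int n);

    /* Hypothetical stand-ins for the host-to-PIM signaling mechanism.
     * On real hardware the host would write a command to the PIM chip;
     * here the kernel just runs locally so the sketch compiles and runs. */
    static void pim_dispatch(kernel_fn k, double *data, int n) { k(data, n); }
    static void pim_wait(void) { /* block until the memory processor signals completion */ }

    /* A memory-bound kernel: little computation per byte touched, so the
     * lower memory latency of a PIM processor is expected to help. */
    static void scale(double *data, int n) {
        for (int i = 0; i < n; i++)
            data[i] *= 2.0;
    }

    int main(void) {
        double a[1024];
        for (int i = 0; i < 1024; i++)
            a[i] = (double)i;

        pim_dispatch(scale, a, 1024);   /* host signals the memory processor */
        pim_wait();                     /* host resumes when the PIM side is done */

        printf("a[1023] = %f\n", a[1023]);
        return 0;
    }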

 

The goal of this research is to explore compiler and run-time adaptive execution techniques that automatically map code onto the architecture, maximizing performance by exploiting the heterogeneity of the architecture and its parallelism. As a first step, we are exploring techniques to automatically identify code sections that perform better when run on the PIM chips; a sketch of such a partitioning decision appears below. The target applications include floating-point, integer, multimedia, and object-oriented applications. This research encompasses static performance prediction, code partitioning, extraction of coarse-grained parallelism, cache locality enhancement, run-time overhead reduction, dynamic load balancing, and related problems.
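
As an illustration of the kind of static partitioning decision involved, the sketch below uses a simple cost model to choose between the host and a memory processor for each code section. The per-operation and per-byte costs, and the work estimates for the two sample sections, are made-up numbers chosen only to show the trade-off; a real compiler would derive such estimates from static performance prediction or profiling.

    #include <stdio.h>

    /* Rough static estimate of a code section's work. */
    typedef struct {
        const char *name;
        double ops;     /* estimated arithmetic operations    */
        double bytes;   /* estimated bytes moved to/from DRAM */
    } CodeSection;

    typedef enum { RUN_ON_HOST, RUN_ON_PIM } Placement;

    /* Assumed machine parameters (illustrative only): the host computes
     * faster, the memory processor accesses DRAM much more cheaply. */
    static const double HOST_NS_PER_OP   = 0.5;
    static const double PIM_NS_PER_OP    = 2.0;
    static const double HOST_NS_PER_BYTE = 1.0;  /* across the memory bus */
    static const double PIM_NS_PER_BYTE  = 0.2;  /* on-chip DRAM access   */

    /* Pick the processor with the lower predicted execution time. */
    static Placement place(const CodeSection *s) {
        double host_time = s->ops * HOST_NS_PER_OP + s->bytes * HOST_NS_PER_BYTE;
        double pim_time  = s->ops * PIM_NS_PER_OP  + s->bytes * PIM_NS_PER_BYTE;
        return (pim_time < host_time) ? RUN_ON_PIM : RUN_ON_HOST;
    }

    int main(void) {
        CodeSection sections[] = {
            { "dense matrix multiply", 2.0e9, 2.4e7 },  /* compute-bound */
            { "pointer-chasing loop",  1.0e7, 8.0e8 },  /* memory-bound  */
        };
        for (int i = 0; i < 2; i++)
            printf("%-22s -> %s\n", sections[i].name,
                   place(&sections[i]) == RUN_ON_PIM ? "memory processor"
                                                     : "host processor");
        return 0;
    }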

 

 
