"Compiling Parallel Loops for High Performance Computers" - Information and Links:

Compiling Parallel Loops for High Performance Computers - Info and Reading Options

Partitioning, Data Assignment and Remapping


"Compiling Parallel Loops for High Performance Computers" was published by Springer US in 1993 - Boston, MA, it has 159 pages and the language of the book is English.


“Compiling Parallel Loops for High Performance Computers” Metadata:

  • Title: Compiling Parallel Loops for High Performance Computers
  • Author:
  • Language: English
  • Number of Pages: 159
  • Publisher: Springer US
  • Publish Date: 1993
  • Publish Location: Boston, MA


Edition Specifications:

  • Format: Electronic resource
  • Pagination: 1 online resource (xv, 159 pages).



"Compiling Parallel Loops for High Performance Computers" Description:

From the Open Library:

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures that employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems that have appeared in the past decade. Unfortunately, the lack of adequate software support for developing scientific applications that run efficiently on multiple processors has stunted the acceptance of such systems.

One of the major impediments to achieving high parallel efficiency in many data-parallel scientific applications is communication overhead, exemplified by cache coherency traffic and global memory access overhead on multiprocessors with a logically shared address space and physically distributed memory. This book presents compiler techniques for reducing that overhead. These techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer, and they can be seen as a necessary step toward developing software to support efficient parallel programs.

In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set.

In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and used in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimal cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns.

Experiments demonstrating the advantages of ADP, data placement, cyclic partitioning, and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner that automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
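The partitioning tradeoff described above is easiest to see on a concrete loop. The following C sketch is a minimal serial stand-in for the ideas, not the book's ADP or CPR algorithms: it runs one Jacobi-style near-neighbor sweep under a block partition, where each processor reads remote data only at its partition boundaries, and then compares per-processor work under block and cyclic partitions for a loop whose iteration cost grows linearly (the "linearly varying computational structure" mentioned above). All names here (N, P, block_sweep, and so on) are illustrative assumptions, not identifiers from the book.

/* Illustrative sketch: block vs. cyclic partitioning of data-parallel loops.
 * Not the book's ADP or CPR algorithms; a serial stand-in for P processors. */
#include <stdio.h>
#include <stdlib.h>

#define N 1024   /* global array size (hypothetical) */
#define P 4      /* number of processors (hypothetical) */

/* One Jacobi-style relaxation sweep over the block owned by processor p,
 * i.e. indices [p*N/P, (p+1)*N/P). Only the first and last iterations of
 * the block read data owned by a neighboring processor, so a block
 * partition costs O(1) boundary communication per processor per sweep. */
static void block_sweep(const double *a, double *b, int p) {
    int lo = p * (N / P), hi = (p + 1) * (N / P);
    if (lo == 0) lo = 1;        /* skip the fixed global boundary */
    if (hi == N) hi = N - 1;
    for (int i = lo; i < hi; i++)
        b[i] = 0.5 * (a[i - 1] + a[i + 1]);  /* near-neighbor accesses */
}

/* Work estimates for a triangular loop where iteration i costs ~i units
 * (a linearly varying computational structure). A block partition gives
 * the last processor roughly (2P-1) times the work of the first, while a
 * cyclic partition gives every processor nearly the same total, at the
 * price of making every iteration's neighbors remote. */
static long block_work(int p) {
    long w = 0;
    for (long i = p * (long)(N / P); i < (p + 1) * (long)(N / P); i++)
        w += i;
    return w;
}

static long cyclic_work(int p) {
    long w = 0;
    for (long i = p; i < N; i += P)
        w += i;
    return w;
}

int main(void) {
    double *a = calloc(N, sizeof *a), *b = calloc(N, sizeof *b);
    a[0] = a[N - 1] = 1.0;          /* fixed boundary values */
    for (int p = 0; p < P; p++)     /* serial stand-in for P processors */
        block_sweep(a, b, p);
    for (int p = 0; p < P; p++)
        printf("proc %d: block work %ld, cyclic work %ld\n",
               p, block_work(p), cyclic_work(p));
    free(a);
    free(b);
    return 0;
}

With N = 1024 and P = 4, the block partition assigns processor 3 about seven times the triangular-loop work of processor 0, while the cyclic partition spreads the work almost evenly; the cyclic partition, however, makes every iteration's neighbors remote. Choosing between the two by weighing load imbalance against communication cost is exactly the machine-specific tradeoff the book's techniques evaluate.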


Read “Compiling Parallel Loops for High Performance Computers”:

Read “Compiling Parallel Loops for High Performance Computers” by choosing from the options below.

Search for “Compiling Parallel Loops for High Performance Computers” downloads:

Visit our Downloads Search page to see if downloads are available.

Find “Compiling Parallel Loops for High Performance Computers” in Libraries Near You:

Read or borrow “Compiling Parallel Loops for High Performance Computers” from your local library.

Buy “Compiling Parallel Loops for High Performance Computers” online:

Shop for “Compiling Parallel Loops for High Performance Computers” on popular online marketplaces.