- Author
- Alan Sussman
- Title
- Model-Driven Mapping of Computation onto Distributed Memory Parallel Computers
- Date
- Sep 1991
- Abstract
- Two fundamental problems must be addressed in automatically and
efficiently mapping programs onto a parallel machine.
The first is to find the parallelism available in the programs
and the second is to exploit that parallelism and efficiently
employ the resources of the target machine.
The problem of finding available parallelism has been, and
continues to be, extensively explored by many other researchers.
However, this thesis addresses the second of the problems,
exploiting parallelism, in the context of building a mapping
compiler for a distributed memory parallel machine.
This thesis describes using execution models to drive the
process of mapping the parallelism available in a program, in
the most efficient way, onto a particular machine.
This thesis presents a general framework for building execution
models for structured mapping techniques on a distributed
memory parallel machine.
If the execution models can accurately predict the performance
of a program (or program segment) that uses a particular mapping
technique, then a mapping compiler can select the best mapping
technique from among those applicable.
We show how to apply general techniques for mapping programs
onto a linear processor array, through analysis of the execution
models obtained for several forms of data and computation
partitioning.
The analysis compares competing mapping techniques that can be
applied to the same program, by studying the sensitivity of the
techniques to changes in several parameters of the machine and
the program to be mapped, to see how such changes affect the
execution time of the mapped program.
These parameters include machine characteristics, such as the
number of processors, program characteristics, such as the size
of data sets and the amount of computation in the body of a
loop, and combined characteristics such as the amount of
overlap between computation and communication on a processor.
Execution models can be used effectively to guide a mapping
compiler, as shown by our implementation of a mapping compiler
for the applicative programming language Sisal on the Warp
systolic array machine, which uses the models to select among
competing mapping techniques.
This thesis presents descriptions of the mapping techniques
that are effective on a linear systolic array machine and
execution models for those mapping techniques on the machine.
The results from benchmark programs show that the models can
accurately predict the performance of mapped programs on the
systolic array, despite the presence of some uncertainty in
modeling the parameters of the hardware and the behavior of the
underlying systems software of the systolic array machine.
Therefore, while the level of abstraction that we employ to
build execution models is high enough to ease their
construction, it is also concrete enough so that the models can
be used in a practical mapping compiler.
- Category
- CMUTR
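
The abstract describes execution models that predict the running time of a mapped program from machine parameters (the number of processors), program parameters (data-set size, loop-body work), and combined parameters (the overlap of computation and communication), and a mapping compiler that uses those predictions to choose among competing mapping techniques. The following is a minimal sketch of that selection idea; the cost formulas, parameter names, and the two candidate partitionings are illustrative assumptions, not the actual models or interfaces of the thesis or the Sisal/Warp compiler.

```python
# Hypothetical sketch (not the thesis's actual models): a coarse analytical
# execution-time estimate for two candidate data partitionings of a loop
# on a P-cell linear processor array, used to pick the cheaper mapping.

def predicted_time(n, P, work, msgs, bytes_per_msg, comm_cost, overlap):
    """Estimate per-sweep time on one cell.

    n              -- total number of data elements
    P              -- number of processors (cells)
    work           -- computation time per element in the loop body
    msgs           -- messages sent per sweep by one cell
    bytes_per_msg  -- size of each message
    comm_cost      -- communication time per byte
    overlap        -- fraction of communication hidden behind computation (0..1)
    """
    compute = (n / P) * work
    communicate = msgs * bytes_per_msg * comm_cost
    # Only the non-overlapped part of communication adds to the critical path.
    return compute + (1.0 - overlap) * communicate


def select_mapping(n, P, work, comm_cost, overlap):
    """Compare two illustrative partitionings and return the cheaper one."""
    candidates = {
        # Block partitioning: one boundary exchange with each neighbour.
        "block": predicted_time(n, P, work, msgs=2, bytes_per_msg=8,
                                comm_cost=comm_cost, overlap=overlap),
        # Cyclic partitioning: one small message per locally owned element.
        "cyclic": predicted_time(n, P, work, msgs=n // P, bytes_per_msg=8,
                                 comm_cost=comm_cost, overlap=overlap),
    }
    best = min(candidates, key=candidates.get)
    return best, candidates


if __name__ == "__main__":
    best, times = select_mapping(n=10_000, P=10, work=1.0,
                                 comm_cost=0.05, overlap=0.5)
    print(best, times)
```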
Category: CMUTR
Institution: Department of Computer Science, Carnegie
Mellon University
Number: CMU-CS-91-187
Bibtype: TechReport
Month: Sep
Author: Alan Sussman
Title: Model-Driven Mapping of Computation onto Distributed Memory
Parallel Computers
Year: 1991
Address: Pittsburgh, PA
Super: @CMUTR