Mixed Parallel Programming Models Using Parallel Tasks


Joerg Duemmler, Thomas Rauber, Gudula Ruenger
Copyright: © 2010 | Pages: 30
DOI: 10.4018/978-1-60566-661-7.ch011

Abstract

Parallel programming models using parallel tasks have proven successful for increasing scalability on medium-size homogeneous parallel systems. Several investigations have shown that these programming models can be extended to hierarchical and heterogeneous systems, which are expected to dominate in the future. In this chapter, the authors discuss parallel programming models with parallel tasks and place these programming models in the context of other approaches to mixed task and data parallelism. They discuss compiler-based as well as library-based approaches for task programming and present extensions to the model that allow a flexible combination of parallel tasks and an optimization of the resulting communication structure.

Introduction

Large modular parallel applications can be decomposed into a set of cooperating parallel tasks. This set of parallel tasks and their cooperation or coordination structure are a flexible representation of a parallel program for the specific application. The flexibility in scheduling and mapping the parallel tasks can be exploited to achieve efficiency and scalability on a specific distributed memory platform by choosing a suitable mapping and scheduling of the tasks. Each parallel task is responsible for the computation of a specific part or module of the application and can be executed on an arbitrary number of processors. The terms multiprocessor task, malleable task, and moldable task have all been used to denote such parallel tasks. In the following, we use the term multiprocessor task (M-task). An M-task can be implemented using an SPMD programming model (basic M-task) or can be hierarchically composed of other M-tasks, thereby supporting nested parallelism (composed M-task). The advantage of the M-task programming model is that coarse-grained parallelism between M-tasks and fine-grained parallelism within basic M-tasks can be exploited in the same program, which increases the potential parallelism and scalability.
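As a concrete illustration (a minimal sketch in C with MPI, not code from the chapter), a basic M-task can be written as an SPMD routine parameterized with the communicator of the processor group executing it, so it can run on any number of processes; the function mtask_scale and its parameters are hypothetical names chosen for this example.

    #include <mpi.h>
    #include <stdio.h>

    /* A basic M-task: an SPMD routine parameterized with the communicator
     * of the processor group it runs on, so the same code executes
     * correctly on an arbitrary number of processes. */
    void mtask_scale(double *block, int n, double factor, MPI_Comm group)
    {
        int rank, size;
        MPI_Comm_rank(group, &rank);
        MPI_Comm_size(group, &size);
        for (int i = 0; i < n; i++)   /* fine-grained data parallelism:   */
            block[i] *= factor;       /* each process scales its local    */
                                      /* block of the distributed array   */
        if (rank == 0)
            printf("mtask_scale executed on %d processes\n", size);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        double local[4] = {1.0, 2.0, 3.0, 4.0}; /* local block */
        mtask_scale(local, 4, 2.0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }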

Each M-task provides an interface consisting of a set of input and output parameters. These parameters are parallel data structures that are distributed among the processors executing the M-task according to a predefined distribution scheme, e.g., a block-wise distribution of an array. A data dependence between M-tasks M1 and M2 arises if M1 produces output data required as an input for M2. Such data dependencies might lead to data re-distribution operations if M1 and M2 are executed on different sets of processors or if M1 produces its output in a different data distribution than expected by M2. Control dependencies are introduced by coordination operators, e.g., loop constructs for the repeated execution of an M-task or constructs for the conditional execution of an M-task. The data and control dependencies between M-tasks can be captured by a graph representation. Examples are macro dataflow graphs (Ramaswamy, Sapatnekar, & Banerjee, 1997) or series-parallel (SP) graphs (Rauber & Rünger, 2000).
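Such a graph representation can be sketched with a small C data structure (our assumption of a plausible layout, not the chapter's actual implementation): each node stands for an M-task and each outgoing edge for a data dependence that may trigger a redistribution when producer and consumer run on different processor groups or expect different data distributions. All identifiers here are hypothetical.

    #include <stdio.h>

    #define MAX_SUCC 4

    /* Hypothetical node of a macro dataflow graph of M-tasks. */
    typedef struct mtask {
        const char   *name;
        int           n_succ;
        struct mtask *succ[MAX_SUCC]; /* M-tasks consuming our output;
                                       * each edge may require a data
                                       * re-distribution at run time */
    } mtask;

    static void add_dependence(mtask *producer, mtask *consumer)
    {
        producer->succ[producer->n_succ++] = consumer; /* no bounds
                                                        * check: sketch */
    }

    int main(void)
    {
        mtask m1 = { "M1", 0, {0} }, m2 = { "M2", 0, {0} };
        add_dependence(&m1, &m2); /* M1's output is an input of M2 */
        printf("%s -> %s\n", m1.name, m1.succ[0]->name);
        return 0;
    }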

The actual execution of an M-task program is based on a schedule of the M-tasks that takes the data and control dependencies into account. M-tasks that are connected by a data or control dependence have to be executed one after another. For independent M-tasks, both a concurrent execution on disjoint processor groups and a consecutive execution are possible. The optimal schedule depends on the structure of the application and on the communication and computation performance of the parallel target platform. For the same application, a pure data parallel schedule that executes all M-tasks consecutively on all available processors might lead to the best results on one platform, while a mixed task and data parallel schedule may result in lower execution times on another. Thus, parallel programming with M-tasks offers a very flexible programming style that exploits different levels of granularity and makes parallel programs easily adaptable to a specific parallel platform.
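A mixed task and data parallel schedule of two independent M-tasks can be sketched with MPI_Comm_split, which partitions the available processors into disjoint groups; the stubs mtask_a and mtask_b are hypothetical placeholders for real M-tasks. A pure data parallel schedule would instead call both routines one after another on MPI_COMM_WORLD.

    #include <mpi.h>
    #include <stdio.h>

    /* Two independent M-tasks (trivial stubs for illustration). */
    static void mtask_a(MPI_Comm g)
    {
        int r; MPI_Comm_rank(g, &r);
        if (r == 0) printf("M-task A running on its group\n");
    }

    static void mtask_b(MPI_Comm g)
    {
        int r; MPI_Comm_rank(g, &r);
        if (r == 0) printf("M-task B running on its group\n");
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Mixed task and data parallel schedule: split the processors
         * into two disjoint groups and run the two independent M-tasks
         * concurrently, one per group. */
        int color = (rank < size / 2) ? 0 : 1;
        MPI_Comm group;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group);

        if (color == 0) mtask_a(group);
        else            mtask_b(group);

        MPI_Comm_free(&group);
        MPI_Finalize();
        return 0;
    }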

Key Terms in this Chapter

Scheduling: Scheduling defines an execution order of the tasks of an application and an assignment of tasks to processing units. Scheduling for mixed parallel applications additionally has to fix the number of executing processors for each data parallel task.

M-Task: An M-task is a parallel program fragment that operates on a set of input parameters and produces a set of output parameters. The implementation of an M-task supports an execution on an arbitrary number of processors.

Data Parallelism: Data parallel computations apply the same operation in parallel on different elements of the same set of data.

CM-Task: A CM-task is an extension of an M-task that additionally supports data exchanges with other CM-tasks during its execution.

Mixed Parallelism: Mixed parallelism is a combination of task and data parallelism that supports the concurrent execution of independent data parallel tasks each operating on a different set of data.

Task Parallelism: Task parallel computations consist of a set of different tasks that operate independently on different sets of data.

Mapping: Mapping assigns specific physical processing units, e.g., specific cores of a multi-core SMP cluster, to tasks of an application.
