Independent Tasks Scheduling using Parallel PSO in Multiprocessor Systems

Sunil Kumar Singh, Deo Prakash Vidyarthi
Copyright © 2015 | Pages: 17
DOI: 10.4018/IJGHPC.2015040101

Abstract

Multiprocessor systems often aim to minimize the schedule length of submitted jobs. This requires efficient scheduling of parallel tasks on the multiprocessor system. As scheduling is an NP-hard problem, meta-heuristics are often applied to it. This work applies the parallel particle swarm optimization (PSO) technique to multiprocessor task scheduling. Since a scheduler designed using parallel PSO can itself execute in parallel on the multiprocessor system, convergence is faster. The proposed algorithm takes care of local as well as global convergence. The performance of the proposed model is evaluated by simulation, and the results are quite encouraging.

1. Introduction

A computer system consisting of multiple processing units is called a parallel system. A parallel system exploits the parallelism available in an application for efficient parallel execution. It aims to speed up the execution of an application through the collaboration of the multiple available processing units. With the introduction of multi-core processors, even mainstream PCs have become parallel systems to some extent.

The complexity of parallel programming motivates research into automatic parallelization techniques and tools. One particularly difficult part of automatic parallelization is the scheduling of the tasks onto the processors. Scheduling can be either dynamic or static. In dynamic scheduling, the decision of which processor executes a task, and when, is made by the runtime system; this is practical mostly for independent tasks. In contrast, static scheduling, also called mapping, refers to processor allocation in which the ordering of the tasks is determined at compile time. The advantage of static scheduling is that dependencies and communication among the tasks can be included in its scheduling decisions. The contrast is sketched below.
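As a concrete illustration of the two approaches (the task count, worker count, and round-robin rule are all hypothetical, chosen only for the example): a static schedule fixes the task-to-processor assignment before execution begins, while a dynamic schedule lets whichever worker is free next pull a task from a shared queue at runtime.

```python
# Illustrative sketch: static mapping vs. dynamic scheduling of
# independent tasks. Task and processor counts are hypothetical.
import queue
import threading

NUM_PROCS = 3
tasks = list(range(5))  # five independent tasks

# Static scheduling (mapping): the assignment is fixed before execution,
# here by simple round-robin over the processors.
static_map = {p: [t for t in tasks if t % NUM_PROCS == p] for p in range(NUM_PROCS)}

# Dynamic scheduling: the runtime decides; whichever worker is free next
# takes the next task from a shared queue.
work = queue.Queue()
for t in tasks:
    work.put(t)

def worker(pid, executed):
    while True:
        try:
            t = work.get_nowait()
        except queue.Empty:
            return
        executed[pid].append(t)  # a real system would execute the task here

executed = {p: [] for p in range(NUM_PROCS)}
threads = [threading.Thread(target=worker, args=(p, executed)) for p in range(NUM_PROCS)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print(static_map)  # decided at "compile time"
print(executed)    # depends on runtime timing
```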

Parallel systems have two types of memory architecture: shared and distributed. In a shared memory architecture, all processors are connected to a single global memory, and access through a single global address space leads to fast data sharing. In a distributed memory system, each processor has its own memory, and the programmer is responsible for many details of the communication; sometimes a predefined communication system is provided. A shared memory parallel system is known as a multiprocessor system.

An application to be executed on a multiprocessor system is divided into parallel tasks or modules so that it can exploit the multiple processors present in the system and execute in parallel. One of the critical problems to be addressed in this context is how to efficiently allocate the parallel tasks among the given processors so as to distribute the computational load as evenly as possible and minimize the finishing time of the application. This is referred to as the multiprocessor scheduling problem.
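For independent tasks with known processing times, the objective admits a compact statement: the finishing time (makespan) under a given assignment is simply the load of the busiest processor. The following minimal sketch makes this concrete; the task times and assignment are invented for the example, not data from the paper.

```python
# A minimal sketch of the scheduling objective for independent tasks
# with known processing times (all values are illustrative).

def makespan(assignment, times, num_procs):
    """Finishing time of the whole job: the load of the busiest processor."""
    loads = [0.0] * num_procs
    for task, proc in enumerate(assignment):
        loads[proc] += times[task]
    return max(loads)

times = [4, 7, 2, 5, 3]       # processing time of each task
assignment = [0, 1, 0, 2, 1]  # task i runs on processor assignment[i]
print(makespan(assignment, times, num_procs=3))  # -> 10 (processor 1 runs tasks 1 and 4)
```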

Scheduling in a multiprocessor system differs from uniprocessor scheduling. In a uniprocessor system, the single available processor switches between multiple jobs, whereas in a multiprocessor system the job itself is divided into many tasks for parallel execution. Multiprocessor scheduling is the mapping of the tasks of a given job onto the processors of the system in order to optimize some characteristic parameters. To achieve the maximum benefit of such systems, scheduling is an important and non-trivial issue.

Particle swarm optimization (PSO) is a meta-heuristic technique applied to optimization problems that involve a large search space. The technique considers a swarm of several particles as potential solutions of the problem (Anbar & Vidyarthi, 2011). Each particle is described by two parameters: position and velocity. The position conveys the specific location of the particle, and the velocity controls its movement. Parallel PSO, an extension of PSO, consists of a group of swarms that evolve by communicating and collaborating with each other. This not only yields good solutions but, due to its parallel nature, also converges faster.
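A minimal sketch of the canonical PSO update for a single particle follows; the inertia and acceleration coefficients (w, c1, c2) are common textbook defaults, not necessarily the values used in this paper. In parallel PSO, several swarms would run this update concurrently and periodically exchange their global bests.

```python
# A minimal sketch of the canonical PSO velocity/position update for one
# particle over continuous dimensions. Parameter values are illustrative.
import random

def pso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: blend inertia with pulls toward the personal
    best (pbest) and the swarm's global best (gbest)."""
    new_velocity, new_position = [], []
    for d in range(len(position)):
        r1, r2 = random.random(), random.random()
        v = (w * velocity[d]
             + c1 * r1 * (pbest[d] - position[d])   # cognitive component
             + c2 * r2 * (gbest[d] - position[d]))  # social component
        new_velocity.append(v)
        new_position.append(position[d] + v)
    return new_position, new_velocity
```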

A few related multiprocessor scheduling algorithms discussed in the literature are as follows. Longest processing time first (LPTF) is a deterministic algorithm (Adams, Balas, & Zawack, 1988; Omidi & Rahmani, 2009) in which the tasks are sorted in descending order of their processing time for execution. A novel heuristic PSO algorithm for task scheduling on multiprocessors is given by Omidi and Rahmani (2009); in it, scheduling is done by tuning the position and velocity parameters to minimize the schedule length. Ho and Lin (2010) devised a scheduling algorithm for multiprocessor systems with autonomous performance-optimizing control, proposing some non-blocking scheduling algorithms. The present work achieves quick minimization of the schedule length of a given job, consisting of tasks, using parallel PSO.
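As an illustration of the LPTF baseline mentioned above, the sketch below sorts independent tasks by descending processing time and greedily places each on the currently least-loaded processor; the task times are invented for the example.

```python
# A minimal sketch of LPTF (longest processing time first) for independent
# tasks: sort by descending time, then greedily assign each task to the
# currently least-loaded processor. Task times are illustrative.
import heapq

def lptf(times, num_procs):
    loads = [(0.0, p) for p in range(num_procs)]  # (current load, processor id)
    heapq.heapify(loads)
    schedule = {p: [] for p in range(num_procs)}
    for task in sorted(range(len(times)), key=lambda t: -times[t]):
        load, p = heapq.heappop(loads)            # least-loaded processor
        schedule[p].append(task)
        heapq.heappush(loads, (load + times[task], p))
    return schedule

print(lptf([4, 7, 2, 5, 3], num_procs=3))  # -> {0: [1], 1: [3, 2], 2: [0, 4]}
```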

The outline of the paper is as follows. After the introduction in Section 1, Section 2 briefs the parallel PSO technique. Sections 3 and 4 describe the problem and the proposed model. In Section 5, a small example is illustrated using the model. Section 6 depicts the experimental evaluation of the proposed model, followed by concluding remarks in Section 7.
