Dynamic-Based Clustering for Replica Placement in Data Grids

Rahma Souli Jbali, Minyar Sassi Hidri, Rahma Ben-Ayed
DOI: 10.4018/IJSSMET.2019100104

Abstract

Data placement in data grids faces two major challenges: the placement of large volumes of data and job scheduling. The strategy proposed here builds each on the other in order to offer high availability of storage space. The aim is to reduce access latencies and improve the usage of resources such as network bandwidth, storage, and computing power. Combining the two strategies in a dynamic replica placement and job scheduling approach, called ClusOptimizer, which uses MapReduce-driven clustering to place replicas, appears to be an appropriate answer to these needs, since it distributes the data over all the machines of the platform. Major factors, namely mean job execution time, use of storage resources, and the number of active sites, can influence efficiency. A comparative study between strategies is then performed to show the value of the solution for replica placement according to job frequency and database size in the case of biological data.

Introduction

Due to scientists' demands for very high computing power and storage capacity (Guerfel et al., 2017), data grids appear to be a solution to meet this growing demand. Indeed, these architectures make it possible to add hardware and software resources, offering virtually infinite storage and computation capacities. However, the design of distributed applications for data grids remains complex.

A data grid is a geographically distributed environment that deals with large-scale data-intensive applications (Elkhatib & Edwards, 2015). In a data grid, faults and disconnections of machines are common and can lead to data loss. Therefore, it is necessary to take into account the dynamic nature of grids, since any piece of data may disappear at any time. To meet the needs for scalability and fast access, most data grids support replication of data to multiple points within the distributed storage architecture. The use of replicas gives multiple users faster data access while conserving bandwidth, since replicas can often be placed strategically close to the sites where users need them.
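
As a simple illustration of this idea (not the strategy proposed in this paper), the following sketch picks, for each file, the candidate site closest to the users that request it, subject to available storage. The Site structure, the distance proxy, and the selection rule are hypothetical assumptions made only for the example.

```python
# Illustrative sketch only: choose a replica site close to the requesting users.
# Site, distance(), and the selection rule are hypothetical, not from the paper.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    free_storage: int          # available storage, e.g. in MB
    coords: tuple              # abstract network coordinates


def distance(a: Site, b: Site) -> float:
    """Proxy for network cost between two sites (e.g., latency or hop count)."""
    return sum((x - y) ** 2 for x, y in zip(a.coords, b.coords)) ** 0.5


def best_replica_site(file_size: int, requesters: list, candidates: list) -> Site:
    """Among sites with enough free storage, pick the one that minimizes
    the total distance to the sites that request the file."""
    feasible = [s for s in candidates if s.free_storage >= file_size]
    return min(feasible, key=lambda s: sum(distance(s, r) for r in requesters))
```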

Good job scheduling (Nedaei, 2018) can reduce the amount of transferred data by placing a job where the needed data are already present. The decision of where and when to execute a job is made by considering its requirements and the current status of the grid, such as its computational, storage, and network resources. Conversely, replication offers faster access to the files required by grid jobs and hence increases the performance of job execution (Huang et al., 2013).
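
To make the idea of data-aware scheduling concrete, the sketch below ranks candidate sites by how much of a job's input data is already held locally, breaking ties by the current queue length. It is a hedged illustration under assumed data structures, not the scheduler used by ClusOptimizer.

```python
# Illustrative data-aware scheduler: prefer sites that already hold the job's
# input files, then the least loaded site. All structures are hypothetical.
def schedule_job(job_files: set, sites: dict) -> str:
    """sites maps a site name to {'files': set_of_local_files, 'queue': int}.
    Returns the name of the chosen execution site."""
    def score(name):
        info = sites[name]
        local = len(job_files & info['files'])    # input files needing no transfer
        return (-local, info['queue'])            # more local data first, then shorter queue
    return min(sites, key=score)


# Example: the job needs f1 and f2; siteB holds both, so it wins despite its queue.
sites = {
    'siteA': {'files': {'f1'}, 'queue': 0},
    'siteB': {'files': {'f1', 'f2'}, 'queue': 2},
}
print(schedule_job({'f1', 'f2'}, sites))          # -> siteB
```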

For a long time, data replication and job scheduling have been studied separately, and these problems in data grids have only recently received attention from researchers. Job scheduling has its own complications, since it deals with large amounts of input data in the dynamic environment of grids.

In order to best exploit the available resources in a data grid, it seems necessary to design a strategy combining job scheduling and dynamic replica placement. The work presented in this paper is a solution to this problem. It consists of combining the two concepts in a dynamic strategy, called ClusOptimizer, based on MapReduce-driven clustering. The use of MapReduce-driven clustering can optimize the cost of data transfer and task execution while organizing job scheduling.
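
The clustering step is not detailed at this point in the paper, but the general pattern of MapReduce-driven clustering can be sketched as one k-means-style iteration: a map phase assigns each record to its nearest cluster center, and a reduce phase recomputes the centers. The code below is a minimal, hypothetical illustration of that pattern, not ClusOptimizer itself.

```python
# Minimal MapReduce-style clustering iteration (hypothetical illustration).
# map: assign each point to its nearest center; reduce: average each group.
from collections import defaultdict


def map_assign(points, centers):
    """Emit (center_index, point) pairs, like a map task."""
    for p in points:
        idx = min(range(len(centers)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        yield idx, p


def reduce_centers(pairs, k, dim):
    """Group the emitted pairs by key and average them, like a reduce task."""
    groups = defaultdict(list)
    for idx, p in pairs:
        groups[idx].append(p)
    return [
        tuple(sum(c[d] for c in groups[i]) / len(groups[i]) for d in range(dim))
        if groups[i] else (0.0,) * dim
        for i in range(k)
    ]


points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.5, 8.2)]
centers = [(0.0, 0.0), (10.0, 10.0)]
centers = reduce_centers(map_assign(points, centers), k=2, dim=2)
print(centers)   # roughly [(1.1, 0.9), (7.75, 8.1)]
```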

The objective is to design and implement an optimal dynamic data replication and job scheduling strategy based on parallel clustering. The OptorSim simulator was chosen (Datagrid, 2014) because it has proven its usefulness as a grid simulator, both through its current features and through its adaptability to new scheduling and replication strategies.

This paper focuses mainly on highlighting the main challenges of high-performance computing. A thorough study of related technologies and an analysis of the problem are carried out. The paper proposes the ClusOptimizer concept, which combines replica placement and job scheduling. The goal of ClusOptimizer is to optimize the cost of data transfer and task execution. The efficiency of the proposed algorithms is detailed in this article. In addition, recent optimizers are used as a basis for comparison in the experiments, thereby demonstrating the performance of the proposed optimizer.

Distributed data, the large scale of the grid, and the dynamic nature of sites cause problems of remote access and data availability in bioinformatics. These issues are extremely important in the context of biological grids, where processing times are very short and user access is frequent.

In order not to exceed the processing-time threshold, it is necessary to keep the data available at all times according to the frequency of requests. For this purpose, a set of replica placement strategies must be proposed for this type of architecture in order to demonstrate the performance of the proposed replica placement strategy for bioinformatics.
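
One simple way to express the frequency-driven idea, as a hedged illustration rather than the strategy evaluated in the paper, is to add a replica of any file whose recent access count crosses a threshold, up to a maximum replica count. The file names, threshold, and cap below are assumptions made only for the example.

```python
# Illustrative frequency-driven replication trigger (assumed thresholds and data).
from collections import Counter

access_log = ["genomeA", "genomeA", "genomeB", "genomeA", "genomeB", "genomeA"]
REPLICATION_THRESHOLD = 3     # accesses in the observation window (assumed)
MAX_REPLICAS = 4              # upper bound on copies per file (assumed)

replica_count = {"genomeA": 1, "genomeB": 1}

for filename, hits in Counter(access_log).items():
    if hits >= REPLICATION_THRESHOLD and replica_count[filename] < MAX_REPLICAS:
        replica_count[filename] += 1          # place one more replica of the hot file
print(replica_count)                          # {'genomeA': 2, 'genomeB': 1}
```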
