Performance Analysis of Hadoop YARN Job Schedulers in a Multi-Tenant Environment on HiBench Benchmark Suite

Kamalakant Laxman Bawankule, Rupesh Kumar Dewang, Anil Kumar Singh
Copyright © 2021 | Pages: 19
DOI: 10.4018/IJDST.2021070104

Abstract

Big data processing technology holds a prominent place in today's market. Hadoop is an efficient open-source distributed framework used to process big data at low cost on a cluster of commodity machines (nodes). In Hadoop, YARN was introduced for effective resource utilization among jobs. Still, YARN over-allocates resources for some tasks of a job and leaves cluster resources underutilized. This paper investigates the practical resource utilization of the CAPACITY and FAIR schedulers in a multi-tenant shared environment using the HiBench benchmark suite. It compares these MapReduce job schedulers' performance in two scenarios and proposes open research questions (ORQ) with potential solutions to help future researchers. On average, the authors found that the CAPACITY and FAIR schedulers utilize 77% of RAM and 82% of CPU cores. Finally, the experimental evaluation shows that these schedulers over-allocate resources for some tasks and leave cluster resources underutilized in different scenarios.

1. Introduction

Most users in the world have Internet access, and as their number increases, data grows exponentially every day. Various sources, such as social media, news channels, telecommunication, scientific labs, and meteorological departments, generate massive data (Lv, 2019; Dong, et al., 2014). Traditional databases cannot store such enormous data, and conventional computing models are not capable of processing it (Sun, He, & Lu, 2012). Processing big data helps uncover hidden knowledge within it (Chang, et al., 2008). Each day Google generates 2.5 terabytes of data (Dean & Ghemawat, 2008), and Facebook users generate 500+ terabytes of data (Ghazi & Gangodkar, 2015). These industries cannot store and process big data on a single server, as it is too big to fit and too tedious to compute (Shaw, Singh, & Tripathi, 2018). So they use the widely adopted Hadoop framework to process big data (Gu, et al., 2014). Hadoop is an efficient open-source framework that allows distributed storage and parallel processing of very large data sets (Shvachko, Kuang, Radia, Chansler, & others, 2010; Chang, et al., 2008).

Hadoop is an open-source project developed by Doug Cutting and Mike Cafarella in 2005 (White, 2012). It began as a batch processing model, but the introduction of YARN (Yet Another Resource Negotiator) in MapReduce v2 made Hadoop more powerful in resource management and job scheduling (Ghemawat, Gobioff, & Leung, 2003). YARN splits the responsibilities of the JobTracker component into two daemons: the ResourceManager and the NodeManager. Hadoop has two core components: HDFS (Shvachko, Kuang, Radia, Chansler, & others, 2010) and MapReduce (Dean & Ghemawat, 2008).
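The schedulers compared in this paper are pluggable components of the ResourceManager. As an illustration (not part of the paper's experimental description), a cluster is switched between the CAPACITY and FAIR schedulers through a single property in `yarn-site.xml`:

```xml
<!-- yarn-site.xml: select the ResourceManager's pluggable scheduler.
     CapacityScheduler is the default in recent Hadoop releases. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <!-- For the FAIR scheduler, use instead:
       org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler -->
</property>
```

A restart of the ResourceManager is required for the change to take effect; queue definitions for each scheduler live in separate files (`capacity-scheduler.xml` and `fair-scheduler.xml`, respectively).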

Each MapReduce application has one ApplicationMaster that manages its containers and their status. The ApplicationMaster negotiates and acquires resources from the ResourceManager and NodeManagers to schedule the Map and Reduce tasks. The ResourceManager allocates a set of system resources, such as CPU cores and RAM, to each container (Ghazi & Gangodkar, 2015). It follows a static method to allocate resources to a container: for each task, it allocates 2 CPU cores and 4 GB of RAM. This static resource allocation method over-allocates resources for some tasks and keeps the cluster resources underutilized (Guo, Fox, Zhou, & Ruan, 2012). The scheduling strategy helps jobs finish early (Xu & Lau, 2014), but a scheduled job may or may not use all of the cluster resources it is granted and can leave some of them idle (Cheng, Rao, Guo, Jiang, & Zhou, 2017; Bawankule, Dewang, & Singh, 2021). Sharma and Ganpati (2015) studied the scheduling algorithms and evaluated their performance in Hadoop YARN on the Scheduler Load Simulator (SLS), but their article does not test the schedulers' resource utilization in a multi-tenant environment on mixed workloads. Salman, Husna, Wicaksono, and Ratna (2018) studied Fair and Capacity scheduler performance in a multi-tenant environment with a mixed workload; still, they did not vary the load condition while testing scheduler performance and did not raise the resource utilization issues in Hadoop YARN.
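The static 2-core/4 GB per-task allocation described above can be expressed with standard per-job MapReduce properties, as in the sketch below. The property names are standard Hadoop configuration keys; the values mirror the allocation reported in this paper, not Hadoop's shipped defaults.

```xml
<!-- mapred-site.xml: static per-container resource requests.
     Every map/reduce task requests the same fixed amount,
     regardless of what it actually uses. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>  <!-- 4 GB RAM per map container -->
</property>
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>2</value>     <!-- 2 CPU cores per map container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>  <!-- 4 GB RAM per reduce container -->
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>2</value>     <!-- 2 CPU cores per reduce container -->
</property>
```

Because every container requests the same fixed amount, a lightweight task ties up as much memory and CPU as a heavy one, which is exactly the over-allocation and underutilization the paper measures.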
