Efficient Strategies of VMs Scheduling Based on Physicals Resources and Temperature Thresholds


Djouhra Dad, Ghalem Belalem
Copyright: © 2020 | Pages: 15
DOI: 10.4018/IJCAC.2020070105

Abstract

Cloud computing offers a variety of services, including the dynamic provisioning of computing resources. Its infrastructure is designed to support the accessibility and availability of various consumer services via the Internet. The number of data centers that host applications and process data in the cloud is increasing over time. This implies high energy consumption and thus contributes to large emissions of CO2. For this reason, solutions are needed to minimize this power consumption, such as virtualization, migration, consolidation, and efficient traffic-aware virtual machine scheduling. In this article, the authors propose two efficient strategies for VM scheduling. The SchedCT approach is based on dynamic CPU utilization and temperature thresholds, while the SchedCRT approach also takes RAM capacity into consideration. These approaches efficiently decrease the energy consumption of data centers, the number of VM migrations, and SLA violations, and therefore reduce CO2 emissions.

Introduction

Cloud computing offers different services and resources that respond to user requests and facilitate their processing. These infrastructures are designed to support the accessibility and availability of various consumer services via the Internet. Recently, the number of companies and institutions migrating their services to cloud providers has rapidly increased (Baciu, Wang, & Li, 2017). To host applications and process data in the cloud, data centers consume a lot of energy, which contributes to large emissions of CO2.

With the rapidly growing number of data centers in cloud computing, energy efficiency and optimization approaches have become increasingly important. In 2008, an estimated 40% of the total electricity demand of computing resources (9.936 × 10^16 joules) went to powering servers of all kinds, 38% (9.439 × 10^16 joules) went to cooling, and the remaining 12% went to energy distribution (Masanet, Brown, Shehabi, Koomey, & Nordman, 2011). More resources mean more energy consumption and thus higher electricity bills. Google consumed 2.68 million megawatt-hours of electricity in 2011 (Patra, 2018).

In 2015, data centers, with their servers, storage systems, routers, and air-conditioning systems, accounted for 4% of global energy consumption. Air-conditioning and cooling systems alone represent 40% to 50% of a data center's energy consumption; this cooling infrastructure is needed to remove the heat released by the servers.

This high energy consumption stems from the large number of data center infrastructures used for data processing and storage, as well as from the air-conditioning systems needed to cool the servers and manage the released heat flow so as to avoid high-temperature failures (Zakarya & Gillam, 2017). Reducing this energy consumption is essential to lower CO2 emissions, which increasingly pollute the environment and have a negative impact on human health. Therefore, solutions are needed to minimize energy consumption.

Several energy optimization solutions are used in cloud computing, such as virtualization, VM migration, and workload consolidation. Another way to increase energy efficiency is the scheduling of tasks across the servers of the data center.

Scheduling algorithms generally aim to distribute the workload across the available machines and to optimize their utilization by minimizing total execution time and reducing power consumption (Zomaya & Teh, 2001). In these algorithms, the power consumed by computing, storage, and other physical resources affects the performance of the data center when tasks are allocated to servers. To reduce the cost of heavy processor utilization and of cooling the computer systems, researchers must propose solutions that reduce not only the utilization of physical resources such as CPU, RAM, and server bandwidth, but also the temperature generated by this intensive processor utilization.

In a server, the component that consumes the most power is the processor (CPU), followed by memory (RAM) and power supply unit (PSU) efficiency losses (Beloglazov, Buyya, Lee, & Zomaya, 2011a). The energy consumption of the CPU and RAM increases as the workload grows; therefore, optimal resource utilization and good task scheduling are necessary.
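For reference, a commonly used linear power model along the lines of the one discussed by Beloglazov et al. estimates server power from CPU utilization alone: an idle server already draws a large fraction of its peak power, and the rest scales with load. The sketch below is only an illustration of that idea in Python; the idle and peak wattages are assumed placeholder values, not figures from this article.

```python
def server_power(cpu_utilization, p_idle=175.0, p_max=250.0):
    """Estimate server power draw (watts) from CPU utilization in [0, 1].

    Linear model: the idle power p_idle is consumed regardless of load,
    and the remaining (p_max - p_idle) scales with CPU utilization.
    The wattage defaults are illustrative assumptions, not measurements.
    """
    cpu_utilization = max(0.0, min(1.0, cpu_utilization))
    return p_idle + (p_max - p_idle) * cpu_utilization


# Example: a half-loaded server under the assumed parameters
print(server_power(0.5))  # 212.5 W
```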

This article focuses on dynamic task scheduling by providing threshold-based approaches that minimize energy consumption in a cloud data center. It proposes two new scheduling approaches, SchedCT (a scheduler based on CPU utilization and processor temperature thresholds) and SchedCRT (a scheduler based on CPU utilization, RAM utilization, and processor temperature thresholds), to reduce data center energy consumption. These thresholds bound the CPU utilization, RAM utilization, and processor temperature of each host. The main contributions of this paper are summarized as follows:

  • Propose and evaluate new scheduling policies based on physical resources to reduce and predict the energy consumed by data centers.

  • Exploit physical resource metrics (CPU utilization, RAM capacity, and processor temperature) efficiently to detect and rebalance overused or underused hosts.

  • Use an adequate VM allocation algorithm that keeps resource utilization below the thresholds, minimizing the number of active PMs and thereby reducing energy consumption (a rough sketch of such threshold-based host classification follows this list).
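To illustrate how such thresholds might gate placement decisions, the following Python sketch classifies hosts by CPU, RAM, and temperature thresholds and filters allocation candidates. The threshold values, data structures, and function names are assumptions made for illustration only; the article derives its own (possibly dynamic) thresholds and allocation algorithm.

```python
from dataclasses import dataclass


@dataclass
class Host:
    cpu_util: float   # fraction of CPU in use, 0..1
    ram_util: float   # fraction of RAM in use, 0..1
    temp_c: float     # current processor temperature in degrees Celsius


# Illustrative static thresholds only; not the article's actual values.
CPU_UPPER, RAM_UPPER, TEMP_UPPER = 0.80, 0.80, 75.0
CPU_LOWER = 0.20


def is_overloaded(host: Host) -> bool:
    """A host is treated as overloaded if any monitored resource
    (CPU, RAM, or temperature) exceeds its upper threshold."""
    return (host.cpu_util > CPU_UPPER
            or host.ram_util > RAM_UPPER
            or host.temp_c > TEMP_UPPER)


def is_underloaded(host: Host) -> bool:
    """Consolidation candidate: very low CPU use suggests the host's
    VMs could be migrated away and the host switched to a low-power state."""
    return host.cpu_util < CPU_LOWER


def placement_candidates(hosts: list[Host]) -> list[Host]:
    """Hosts that can safely receive a new VM: neither overloaded nor
    selected for switch-off, so all resources stay below the thresholds."""
    return [h for h in hosts if not is_overloaded(h) and not is_underloaded(h)]
```

As a usage example, a scheduler could call placement_candidates on the current host list before each VM allocation and fall back to waking a sleeping host only when the returned list is empty, which is one simple way to keep the number of active PMs low.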
