I. Introduction
A cloud computing system is an arrangement of pooled, shared computing resources hosted in a sophisticated data center, known as a cloud data center, with those resources delivered to users as services. Services are provisioned through virtualization technology and can be accessed and managed through interfaces such as a web browser, mobile app, thin client, terminal emulator, or remote desktop, anytime and anywhere in the world. Logically, any cloud computing model is divided into two primary blocks: the frontend and the backend. The two blocks are connected through a communication channel, usually the Internet. The frontend block represents the clients, i.e., the cloud users; the backend block represents the cloud itself, that is, the pooled computing resources such as servers, memory, and storage in physical or virtual form (Sosinsky, B., 2011; Stallings, W., 2016; Rajput, R. S. & Pant, A., 2018).
Resource management in cloud computing is a significant and attractive domain of research. Any entity that is part of a cloud computing scenario can be a resource: computing, storage, networking, protocols, and energy, or anything directly or indirectly associated with the set of cloud applications (Manvi, S. S. & Krishna, S. G., 2014).
By the year 2030, the energy consumption of data centers is expected to increase several times over present levels, and significant improvements in cooling infrastructure and uninterruptible power supplies will also be required (Hintemann, R. & Hinterholzer, S., 2019). Power management techniques for data centers are classified into five categories: DVFS (Dynamic Voltage and Frequency Scaling), DPM (Dynamic Power Management), task-scheduling-based techniques, thermal-aware techniques, and virtualization. In DVFS, the clock frequency of a processor is dynamically adjusted so that the supply voltage can be reduced, accomplishing power savings. DPM dynamically powers electronic devices on or off after predicting workloads. In task-scheduling techniques, work is assigned intelligently among selected servers to reduce energy and cooling requirements. Thermal-aware techniques place workloads with awareness of temperature so that less energy is required for cooling operations. Virtualization allows one physical server to be shared among multiple Virtual Machines (VMs), reducing energy consumption in data centers (Mittal, S., 2014).
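The power saving behind DVFS can be illustrated with the classic CMOS dynamic-power relation P = C·V²·f. The following sketch is illustrative only: the capacitance, voltage, and frequency values are assumptions chosen for demonstration, not measurements from any data center discussed here.

```python
# Illustrative sketch of the CMOS dynamic-power relation P = C * V^2 * f,
# which motivates DVFS. All numeric values are assumed, not measured.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic power dissipation of a CMOS circuit, in watts."""
    return capacitance * voltage ** 2 * frequency

C = 1e-9                      # effective switched capacitance (farads), assumed
V_nom, f_nom = 1.2, 3.0e9     # nominal operating point: volts, hertz
P_nom = dynamic_power(C, V_nom, f_nom)

# DVFS lowers frequency and, with it, the supply voltage; because power
# scales with V^2 * f, the combined reduction is much larger than either alone.
V_low, f_low = 0.9, 1.5e9
P_low = dynamic_power(C, V_low, f_low)

print(f"nominal: {P_nom:.2f} W, scaled: {P_low:.2f} W")
```

Note that halving the frequency alone would halve the power, but the accompanying voltage drop (squared in the relation) is what makes DVFS especially effective.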
There are two possible approaches to formulating cloud resource management: analytical and empirical. The empirical approach uses measurements from an actual cloud infrastructure. However, it may have limitations, e.g., limited use, few users, a limited number and variety of cloud resources and services, and the cost involved in repeating work to acquire data. The analytical approach uses quantitative techniques and mathematical models (Pitts, J. M. & Schormans, J. A., 2000).
In the present work, we use analytical procedures to formulate a methodology for estimating cloud data center energy utilization. The proposed framework is shown in Figure 1; it has two stacks, one representing an empirical approach and the other its corresponding analytical approach to estimating energy utilization.
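As a minimal sketch of what an analytical energy estimate can look like, the snippet below uses a simple linear server-power model, P(u) = P_idle + (P_max − P_idle)·u, commonly employed in analytical studies of data center energy. The idle/peak power figures and the utilization trace are assumptions for illustration, not values from this article's framework.

```python
# Minimal sketch of an analytical energy estimate using a linear
# utilization-to-power model. P_idle, P_max, and the utilization trace
# are illustrative assumptions, not measured values.

def server_power(utilization, p_idle=100.0, p_max=250.0):
    """Instantaneous server power (watts) as a linear function of CPU utilization."""
    return p_idle + (p_max - p_idle) * utilization

def energy_kwh(utilization_trace, interval_s=3600):
    """Total energy (kWh) over a trace of per-interval utilization samples."""
    joules = sum(server_power(u) * interval_s for u in utilization_trace)
    return joules / 3.6e6  # joules -> kilowatt-hours

trace = [0.2, 0.5, 0.8, 0.3]           # four hourly utilization samples (assumed)
print(f"{energy_kwh(trace):.3f} kWh")  # energy for one server over four hours
```

A model like this lets energy utilization be estimated from workload traces alone, without instrumenting a physical data center, which is the core appeal of the analytical stack in the framework.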
Figure 1.
Framework to estimate cloud data center energy utilization
The present work is intended to develop models for optimizing power consumption in the cloud data center; the models will be used for decision-making in the context of power consumption in the cloud computing environment. The present energy-aware solution tries to minimize power consumption through various mechanisms, including service optimizations.