Performance Enhancement of Cloud Datacenters Through Replicated Database Server

Sudhansu Shekhar Patra, Veena Goswami
Copyright: © 2022 | Pages: 23
DOI: 10.4018/JITR.299948

Abstract

Cloud computing has risen as a new computing paradigm that delivers computing, networking, and storage resources as services across the network. Data replication brings available and reliable data (e.g., databases) closer to consumers (e.g., cloud applications) to overcome bottlenecks, and it is becoming a suitable solution. In this paper, the authors study the performance characteristics of a replicated database in cloud computing data centres, which improves QoS by reducing communication delays. They formulate a theoretical queueing model of the replicated system in which both types of client request, read and write, arrive according to a Poisson process. They solve the proposed model using a recursive method and derive the relevant performance metrics. The results obtained from both the mathematical model and extensive simulations reveal the performance of the system and guide cloud providers in designing future data replication solutions.
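As a rough illustration of the queueing approach sketched above (not the authors' exact model), the following Python listing solves a simple M/M/c/K birth-death queue by the same kind of recursion and derives standard performance metrics. The read and write arrival rates, the service rate, the number of servers c, and the buffer size K are illustrative assumptions only.

def mmck_metrics(lam, mu, c, K):
    """Recursively compute the stationary probabilities of an M/M/c/K queue
    and derive basic performance metrics (a simplified stand-in for the
    replicated-database model described in the abstract)."""
    # Unnormalized birth-death recursion: p[n] = p[n-1] * lam / (min(n, c) * mu)
    p = [1.0]
    for n in range(1, K + 1):
        p.append(p[-1] * lam / (min(n, c) * mu))
    norm = sum(p)
    p = [x / norm for x in p]

    p_block = p[K]                             # arriving request finds the buffer full
    lam_eff = lam * (1.0 - p_block)            # effective (accepted) arrival rate
    mean_in_system = sum(n * pn for n, pn in enumerate(p))
    mean_response = mean_in_system / lam_eff   # Little's law
    return {"blocking": p_block,
            "mean_jobs": mean_in_system,
            "mean_response": mean_response}

# Example: read and write traffic aggregated into one Poisson stream,
# served by a replica with c = 2 service threads and buffer K = 50.
# All numerical values below are assumed for illustration, not taken from the article.
lam_read, lam_write, mu = 8.0, 2.0, 6.0
print(mmck_metrics(lam_read + lam_write, mu, c=2, K=50))

In the paper the read and write streams are treated as two distinct request types; aggregating them into a single Poisson stream here only keeps the sketch short.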

1. Introduction

Cloud computing is an increasingly popular platform that offers enormous opportunities for delivering online services and is therefore appealing to ICT service providers. It provides utility services by sharing the resources of scalable data centres (Buyya et al. (2008); Hussain et al. (2013)). Cloud computing is, to some extent, a new distributed Internet technology in the market, drawing on grid computing, distributed computing, utility computing, and software-as-a-service, all of which have received significant research attention and commercial implementations. Companies such as Amazon and Google provide services to their clients through their data centres and offer cloud computing services and infrastructure products. End users benefit from access to data and global services from any device, centrally managed backups, high computational power, and pay-per-use billing strategies (Khalaj et al. (2016)). Providers benefit from the efficient utilization of data centres, data centre power strategies, large-scale virtualized resources, and optimized software stacks. A survey conducted in 2010 showed that data centres consumed around 1.1-1.5% of global electricity, and between 1.7% and 2.2% of electricity in the United States. By 2022, data centre consumption is expected to reach almost 175 TWh (Ni and Bai (2017)). Computing, storage, and cooling resources, as well as power distribution, must be over-provisioned to ensure a high degree of reliability in data centres (Bertoldi et al. (2017)). Power distribution and cooling systems consume around 15% and 45% of the total energy, respectively, leaving approximately 40% for the IT equipment (Avgerinou et al. (2017)). This 40% of the energy consumed by the IT equipment is shared among the computing servers and the networking equipment.

The communication network consumes 30% to 50% of the total energy used by the IT equipment, depending on the data centre load level (Koomey et al. (2011)). In this paper, we discuss the opportunities and limitations of deploying data management workloads on these emerging cloud computing platforms (e.g., Google web services, Amazon Web Services, etc.). Large-scale decision support systems, application-specific data marts, and data analysis tasks can take advantage of cloud computing platforms instead of traditional operational, transactional database systems. There are many approaches for making a data centre consume less power. The two most significant techniques are shutting components down when the data centre load is minimal or scaling down their performance. Both approaches can be applied to computing servers (Srikantaiah et al. (2008); Travostino et al. (2006)) as well as to network switches (Cao et al. (2009); Buyya et al. (2009)). The bottom-line performance of cloud computing applications, for example audio and video conferencing, gaming, online office services, social networking, storage, and backup, depends mainly on the reliability, availability, and efficiency of high-performance network resources (Sindhu and Mukherjee (2011)). To achieve higher credibility and low-latency service provisioning, data resources should be replicated or moved nearer to the physical infrastructure where the cloud applications are running. In the literature, many researchers have proposed a large number of replication strategies for cloud data centres; one may refer to Soltesz et al. (2007) and Sotomayor et al. (2009). These strategies can optimize the system bandwidth and data availability across geographically distributed data centres. However, none of these articles focuses on the energy efficiency or on minimizing the energy consumption of the replication strategies in the data centres. In this paper, we present a data replication technique for cloud computing data centres which minimizes energy consumption, increases energy efficiency, optimizes network bandwidth, and decreases communication delay both among geographically distributed data centres and inside each data centre (Sasikumar et al. (2020)). Specifically, our contributions can be summarized as follows.
