1. Introduction
A few decades ago, small and medium enterprises were unable to perform high performance computing (HPC) because of the huge upfront cost of supercomputers. The advent of cloud computing, however, has reduced the cost of HPC. Grid computing was developed for highly computation-intensive scientific applications, but cloud computing goes a step further by providing dynamic resource provisioning and resource sharing through virtualization (Foster et al., 2001). Cloud computing provides cost-efficient server-based computing (Yelick et al., 2011). The model is known as the "pay as you go" model, i.e. customers rent virtual resources and pay only for what they actually consume (Haidri et al., 2014; Landis et al., 2013; Sosinsky, 2010). Based on the abstraction level of the services (Haidri et al., 2014; Buyya et al., 2008; Badger et al., 2011), cloud delivery models can be classified into three types: 1) Infrastructure as a Service (IaaS), where users consume services in the form of a hardware platform on which they can deploy the VMs that support their applications; 2) Platform as a Service (PaaS), a software platform already installed on an infrastructure for hosting applications, which helps users build their own applications (Wickremasinghe et al., 2010); and 3) Software as a Service (SaaS), the last level, where the actual application is provided to customers. There are four kinds of cloud deployment model (Sosinsky, 2010; Badger et al., 2011): 1) Public cloud, located on the premises of the cloud provider and open to the general public; 2) Private cloud, devoted to a particular organization; 3) Community cloud, which provides services to organizations having common functions; and 4) Hybrid cloud, offering a combination of private, public, and community clouds.
But before the features of the cloud can be exploited, several challenges need to be resolved (Dillon et al., 2010). These issues include interoperability, legal and compliance concerns, QoS, elasticity, load balancing, security, and data management (Heiser et al., 2008; Wang et al., 2011; Rimal et al., 2009). Load balancing, among the aforementioned challenges, is one of the main concerns in cloud computing. The cloud platform has the advantage of being able to scale up and down quickly at any moment. This dynamic environment demands a novel load balancing algorithm that ensures customer satisfaction and optimizes the rate of revenue while minimizing turnaround time. The challenging issue behind these goals is how to dispatch incoming requests to VMs in a heterogeneous cloud environment.
In this work, a receiver-initiated deadline-aware load balancing strategy (RDLBS) is proposed to balance the load among the VMs through migrations, aiming to meet deadlines and optimize the rate of revenue. The proposed strategy guarantees customer satisfaction in the form of deadlines met (DM) and accounts for the cost of running applications in the form of total gain (TG). Another QoS parameter, turnaround time (TAT), is computed for performance evaluation. "Receiver initiated" denotes that the load balancing process starts when a VM becomes underloaded. The performance of the proposed strategy is measured by comparing it with peers such as the Conductance, Max-Min, Min-Min, Longest Job on Fastest Resource (LJFR-SJFR), and Round Robin (RR) algorithms using the CloudSim simulator.
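To make the receiver-initiated idea concrete, the following is a minimal sketch, not the paper's actual algorithm: an underloaded VM (the receiver) pulls a task from the most loaded VM, and the migration is deadline-aware in that it is performed only if the task can still finish before its deadline on the receiver. All class names, the load metric, and the underload threshold are illustrative assumptions.

```python
# Hypothetical sketch of receiver-initiated, deadline-aware load balancing.
# The names (Task, VM, rdlbs_step) and the threshold value are assumptions
# for illustration, not the RDLBS implementation from this paper.

from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    length: float    # remaining work, e.g. million instructions (MI)
    deadline: float  # absolute deadline in seconds

@dataclass
class VM:
    vm_id: int
    mips: float                          # processing speed (MI per second)
    queue: list = field(default_factory=list)

    def load(self) -> float:
        """Estimated time (s) to drain the current task queue."""
        return sum(t.length for t in self.queue) / self.mips

def rdlbs_step(vms, underload_threshold=1.0, now=0.0):
    """One receiver-initiated balancing round: each underloaded VM tries to
    pull one task from the most loaded VM, migrating it only if the task can
    still meet its deadline on the receiver (deadline awareness)."""
    for receiver in vms:
        if receiver.load() >= underload_threshold:
            continue  # only an underloaded VM initiates the migration
        sender = max(vms, key=lambda v: v.load())
        if sender is receiver or not sender.queue:
            continue
        # consider the sender's most deadline-urgent task first
        task = min(sender.queue, key=lambda t: t.deadline)
        finish = now + receiver.load() + task.length / receiver.mips
        if finish <= task.deadline:  # migrate only if the deadline holds
            sender.queue.remove(task)
            receiver.queue.append(task)
```

For example, with a busy VM holding two tasks and an idle, faster VM, one call to `rdlbs_step` moves the tightest-deadline task to the idle VM, since it can finish well before its deadline there.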