1. Introduction
Virtualization is becoming pervasive in large data centers, cloud computing, and enterprise infrastructure, motivated by a number of significant benefits, such as dramatic cost reduction, increased application availability, and more efficient IT management. According to Gartner, today, 25% of installed server workloads are virtualized. IDC even forecasts that, by 2014, more than 70% of applications on newly deployed servers will run in virtual machines. However, in a virtualized environment, efficient and effective memory resource management is still a challenging problem. In this paper we propose a memory resource balancing method to improve performance and memory resource utilization for center-wide virtualized computing. We show that our solution can accurately monitor the memory demand of each virtual machine with very low overhead and can effectively improve overall system performance.

Virtualization technologies like Xen (Anselmi, Amaldi, & Cremonesi, 2008), VMware (Tam, Azimi, Soares, & Stumm, 2009), and Denali (Yang, Hertz, Berger, Kaplan, & Moss, 2004) have become a common abstraction layer in contemporary data centers. They allow multiple operating systems to run independently, each in its own virtual machine. Figure 1 illustrates an example, where the hypervisor multiplexes the hardware of a single physical machine among several virtual machines, and a guest operating system executes inside each virtual machine independently. One of the major benefits of virtualization is server consolidation. It is not unusual to achieve a 15-to-1 or even higher consolidation ratio (Moltó, Caballer, Romero, & de Alfonso, 2014). For a data center that hosts a large number of servers, this can significantly reduce power consumption, floor space, and air conditioning costs. In addition, virtualization can improve availability through live migration (Barham, Dragovic, Fraser et al., 2003).
When one physical server fails or requires maintenance, the virtual machines it hosts can be transparently migrated to another physical machine with negligible application downtime. The core of virtualization is the virtual machine monitor (VMM), also called the hypervisor. The VMM is responsible for creating and managing multiple instances of virtual hardware platforms. Some physical resources, such as CPUs and network interface cards, can be multiplexed in a time-sharing manner. The memory system, however, is shared through address space partitioning: each virtual machine is allocated a fixed amount of physical memory address space. Unlike a native operating system, which manages virtual and physical memory on behalf of its processes, the VMM, in order to preserve isolation, is not actively involved in the memory management of each virtual machine. More specifically, each VM is allocated a fixed amount of physical memory when it is created. It is then the guest operating system's job to manage that amount of physical memory without the involvement of the hypervisor. As a result, the hypervisor is unaware of the memory demand of each VM and unable to dynamically balance memory resources.

In our solution, we first design a low-cost but accurate Least Recently Used (LRU) based working set size (WSS) tracking scheme as the basis of memory resource balancing. The LRU-based WSS model correlates memory allocation size with performance impact. Based on this model, we propose a local memory balancing method, which dynamically adjusts the memory allocation via ballooning (Anselmi, Amaldi, & Cremonesi, 2008; Sapuntzakis, Chandra, Pfaff, Chow, Lam & Rosenblum, 2002) on a single physical machine. It is then extended to a global setting, where the physical memory of all interconnected machines is balanced via live migration and remote caching.
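The LRU-based working set size tracking described above can be illustrated with a Mattson-style stack-distance computation: each page reference's reuse distance is recorded in a histogram, from which a miss-ratio curve for every candidate memory size is derived in one pass, linking allocation size to performance impact. The sketch below is ours, not the paper's implementation; the function names and the miss-ratio target are illustrative assumptions.

```python
from collections import OrderedDict

def lru_miss_ratio_curve(trace, max_pages):
    """Build an LRU miss-ratio curve from a page-reference trace.

    For each reference, the stack distance is the number of distinct
    pages touched more recently than the referenced page; an LRU cache
    of size s misses exactly when the distance is >= s or the page is
    being seen for the first time (a compulsory miss).
    """
    stack = OrderedDict()          # pages ordered from LRU to MRU
    hist = [0] * (max_pages + 1)   # hist[d]: references at distance d
    cold = 0                       # compulsory (first-touch) misses
    for page in trace:
        if page in stack:
            keys = list(stack.keys())
            # distance 0 means the page is currently the MRU entry
            dist = len(keys) - keys.index(page) - 1
            hist[min(dist, max_pages)] += 1
            stack.move_to_end(page)  # promote to MRU position
        else:
            cold += 1
            stack[page] = True
    total = len(trace)
    # miss ratio at size s: compulsory misses plus all references
    # whose stack distance is at least s
    return [(cold + sum(hist[s:])) / total for s in range(max_pages + 1)]

def smallest_allocation(curve, target_miss_ratio):
    """Smallest memory size (in pages) whose miss ratio meets the target."""
    for size, ratio in enumerate(curve):
        if ratio <= target_miss_ratio:
            return size
    return len(curve) - 1

# A cyclic trace of 3 pages: with 3 pages of memory, only the three
# compulsory misses remain, so the miss ratio drops to 0.5.
curve = lru_miss_ratio_curve(['a', 'b', 'c', 'a', 'b', 'c'], 3)
print(curve[3])                          # 0.5
print(smallest_allocation(curve, 0.5))   # 3
```

A balancer built on such a curve could shrink a VM's balloon target toward the smallest allocation that keeps the predicted miss ratio below a threshold, and grow it when the curve indicates the current allocation is too tight.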
To the best of our knowledge, our work is unique in coordinating global memory balancing with a local balancing scheme. Figure 2 shows an overview of our solution.