Applying Design of Experiments (DOE) to Performance Evaluation of Commercial Cloud Services

Zheng Li, Liam O’Brien, He Zhang, Rajiv Ranjan
Copyright: © 2013 | Pages: 19
DOI: 10.4018/jghpc.2013070107

Abstract

Appropriate performance evaluations of commercial Cloud services are crucial and beneficial for both customers and providers to understand the service runtime, while suitable experimental design and analysis would be vital for practical evaluation implementations. However, there seems to be a lack of effective methods for Cloud services performance evaluation. For example, in most of the existing evaluation studies, experimental factors (also called parameters or variables) were considered randomly and intuitively, experimental sample sizes were determined on the fly, and few experimental results were comprehensively analyzed. To address these issues, the authors suggest applying Design of Experiments (DOE) to Cloud services evaluation. To facilitate applying DOE techniques, this paper introduces an experimental factor framework and a set of DOE application scenarios. As such, new evaluators can explore and conveniently adapt our work to their own experiments for performance evaluation of commercial Cloud services.
Article Preview

Introduction

Along with the boom in Cloud computing, an increasing number of commercial providers have started to offer public Cloud services (Li et al., 2010; Prodan & Ostermann, 2009). Different commercial Cloud services have been supplied with different terminologies, qualities, and cost models (Prodan & Ostermann, 2009). Consequently, performance evaluation of those services would be crucial and beneficial for both service customers and providers (Li et al., 2010). For example, proper performance evaluation of candidate Cloud services can help customers perform cost-benefit analysis and decision making for service selection, while it can also help providers improve their service qualities against competitors. Given the diversity of Cloud services and the uncertainty of service runtime, however, implementing appropriate performance evaluation of Cloud services is not easy. In particular, since Cloud services evaluation belongs to the domain of experimental computer science, suitable experimental design and analysis would be vital for practically evaluating Cloud services (Stantchev, 2009).

Unfortunately, there seems to be a lack of effective methods for determining evaluation implementations in the Cloud computing domain. The current experimental design approaches vary significantly across the existing studies of Cloud services evaluation, and we have identified three main issues with the current evaluation experiments. Firstly, experimental sample sizes were determined arbitrarily, although an inappropriate sample size can increase the probability of a type II error in evaluation experiments (Montgomery, 2009). Secondly, most evaluators did not specify “experimental factors” when preparing evaluation experiments; in fact, identifying the relevant factors that may influence performance is a prerequisite for designing evaluation experiments (Jain, 1991). Thirdly, few Cloud services evaluation reports gave a comprehensive analysis of their experimental results, although sound evaluation conclusions may require more objectivity, gained by applying statistical methods to experimental analysis (Montgomery, 2009).
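To illustrate the first issue, the sample-size question can be settled before any measurements are taken by a standard statistical power analysis. The sketch below is our illustration rather than a method from the paper; it assumes a two-sample t-test comparison of mean response times between two Cloud configurations, and the effect size, significance level, and power values are purely illustrative.

```python
# A minimal sketch, assuming a two-sample t-test comparison of mean response
# times between two Cloud service configurations. All numbers (effect size,
# alpha, power) are illustrative assumptions, not values from the paper.
import math

from statsmodels.stats.power import TTestIndPower

# Detect a "large" difference (Cohen's d = 0.8) with a 5% type I error risk
# and 80% power (i.e. a 20% type II error risk).
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.80)

print(f"Repetitions needed per configuration: {math.ceil(n_per_group)}")
```

Fixing the number of repetitions this way, instead of deciding it on the fly, makes the risk of a type II error an explicit design choice rather than an accident of the experiment.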

To deal with these identified issues, we decided to apply Design of Experiments (DOE) strategies to performance evaluation of commercial Cloud services. DOE has traditionally been applied in agriculture and in the chemical and process industries (Antony, 2003; Montgomery, 2009). Considering the natural relationship between experiments and evaluation, we believe that the various DOE techniques for experimental design and statistical analysis can also benefit Cloud services evaluation. Therefore, we investigated two main activities of applying DOE: (1) selection of input factors (parameters of Cloud resources and workload) and response variables (indicators of service runtime qualities); and (2) choice of experimental design and statistical analysis based on the selected factors/variables. To facilitate experimental factor selection, we established a factor framework after collecting, clarifying and rationalizing the key concepts and their relationships in the existing Cloud performance evaluation studies. To help identify suitable experimental design and analysis techniques, we performed a series of case studies to demonstrate a set of DOE application scenarios. As such, new evaluators can explore and refer to our work to design their own experiments for performance evaluation of commercial Cloud services.
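To make these two activities concrete, the following sketch is our own illustration rather than the paper's framework: it pairs two hypothetical input factors (instance type and thread count) with one response variable (latency), enumerates a replicated full factorial design, and analyses the results with a two-way ANOVA. The run_benchmark() stub is a placeholder for a real measurement against a commercial Cloud service.

```python
# A minimal sketch of the two DOE activities described above, under assumed
# factors and a hypothetical benchmark stub; not the authors' factor framework.
import itertools
import random

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

FACTORS = {
    "instance_type": ["small", "large"],  # Cloud resource factor (assumed)
    "threads": [1, 4],                    # workload factor (assumed)
}
REPLICATES = 5  # per the sample-size planning sketched earlier


def run_benchmark(instance_type: str, threads: int) -> float:
    """Hypothetical stand-in for measuring the response variable
    (e.g. mean response time in seconds) on a commercial Cloud service."""
    base = 2.0 if instance_type == "small" else 1.2
    return base / threads + random.gauss(0, 0.05)


# Activity (1): input factors and response variable arranged as a replicated
# full factorial design, i.e. every combination of factor levels.
runs = [
    {"instance_type": it, "threads": th, "latency": run_benchmark(it, th)}
    for it, th in itertools.product(*FACTORS.values())
    for _ in range(REPLICATES)
]
df = pd.DataFrame(runs)

# Activity (2): statistical analysis, here a two-way ANOVA testing the main
# effects of each factor and their interaction on the response variable.
model = ols("latency ~ C(instance_type) * C(threads)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The ANOVA table indicates which factors, and which factor interactions, have statistically significant effects on the response variable, which is the kind of objective evidence the third issue above calls for.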

Note that, as a continuation of our previous work (Li et al., 2012a, 2012b, 2012c, in press a, in press b), this study employed four constraints, as listed below:
