Stochastic Frontier Analysis and Cancer Survivability


Ramalingam Shanmugam
Copyright: © 2014 |Pages: 12
DOI: 10.4018/978-1-4666-5202-6.ch204

Chapter Preview


Introduction

In this globalized and highly competitive economy, industries are successful and profitable when their operations are technically efficient. Managers and applied economists periodically examine pertinent data to confirm that production in their industry is technically efficient and cost effective. Stochastic frontier analysis (SFA) is a statistical approach for assessing the technical efficiency of chosen industries in a comparison pool. SFA is based on a linear model connecting the observed production or cost variable to a set of covariates that predict production or cost using regression concepts, together with two error components. What is an error component? The discrepancy between an observation and its prediction is defined as the error. The usual regression methodology involves just one error component, representing the random noise that influences the observable.

SFA is a generalization of the regression methodology because of its two error components. In SFA, the second error component is accommodated to portray the technical inefficiency of the production operation and its impact on the observable. These two error components, the random noise and the technical inefficiency, need not be statistically independent in general but are assumed to be independent in SFA. The expected values of the observable in the presence and in the absence of technical inefficiency are used to construct an efficiency score for the operation of each industry in the comparison pool. This seminal idea was first promoted by Meeusen and van den Broeck (1977) and, independently and within a month, by Aigner, Lovell, and Schmidt (1977). The details are explained and illustrated later in the chapter.
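The composed-error idea can be sketched numerically. The following is a minimal illustration, not the chapter's own computation: it simulates a production frontier with normal noise and half-normal inefficiency, then recovers the frontier coefficients by maximizing the standard normal/half-normal log-likelihood. All variable names and the simulated data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated production frontier: y = 1.0 + 0.5*x + v - u
n = 500
x = rng.uniform(0.0, 10.0, n)
v = rng.normal(0.0, 0.2, n)              # symmetric random noise
u = np.abs(rng.normal(0.0, 0.5, n))      # half-normal technical inefficiency, u >= 0
y = 1.0 + 0.5 * x + v - u

def neg_loglik(theta):
    """Negative log-likelihood of the normal/half-normal SFA model."""
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)             # sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - b0 - b1 * x                # composed error, v - u
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# An ordinary least squares fit supplies sensible starting values
b1_ols, b0_ols = np.polyfit(x, y, 1)
start = [b0_ols, b1_ols, np.log(0.3), np.log(0.3)]
res = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
b0_hat, b1_hat = res.x[:2]
print(b0_hat, b1_hat)   # close to the true frontier values 1.0 and 0.5
```

Note the OLS intercept alone is biased downward by the mean of the inefficiency term, which is why the two-component likelihood, rather than plain regression, is needed to locate the frontier.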

SFA is not the only methodology available for comparing units. There is an alternate, nonparametric methodology called data envelopment analysis (DEA). Later, this chapter weighs the advantages and disadvantages of SFA and DEA for comparing decision making units (DMUs), with an example about survivability from melanoma cancer in nations around the world, and briefly describes the differences between DEA and SFA. The study of the melanoma cancer data is interesting for learning both SFA and DEA; healthcare researchers have so far made little use of these two powerful methodologies. The SFA methodology is formally introduced, explained, and then used to comprehend the chance of surviving melanoma cancer due to ultraviolet radiation for residents of 45 nations around the world. In contrast to DEA, which is a deterministic methodology, SFA is a stochastic one. SFA is a powerful parametric methodology, valuable for sorting out big data, bringing forth essential information, and contrasting DMUs.

The basis of the comparison in SFA is called technical efficiency, which is assessed using an underlying statistical distribution for the given data. As demonstrated in this article, the given data most often follow a normal distribution, which is confirmed using what is known as a box plot. The technical efficiency of SFA is comparable to the relative efficiency of DEA, which makes use of mathematical programming concepts. Both SFA and DEA have advantages and disadvantages, though they are alternates to each other. In an application of SFA, the data are required to have stochastic components; in an application of DEA, the data are assumed to be deterministic. SFA is explained and illustrated using stochastic data from 45 nations in 2003: their latitude, the amount of ultraviolet radiation received, and the survival rates among men and women with melanoma cancer.

Key Terms in this Chapter

Revenue Frontier: This economic term identifies the maximum revenue collectable from the outputs due to utilization of a given bundle of inputs.

Data Envelopment Analysis (DEA): Data envelopment analysis is a decision making tool based on the principle of linear programming for comparing the relative operational efficiency of a set of comparable decision making units, even with multiple inputs and outputs. DEA was initially developed by Charnes, Cooper, and Rhodes (1978). DEA provides an efficiency score with common weights for the inputs or outputs.

Relative Efficiency: This statistical term refers to the comparative performance level of the decision making units based on their inputs and outputs.

Ordinary Least Squares: A statistical algorithm for computing consistent and unbiased estimates of the parameters of a linear model.

Efficiency Score: It portrays operational efficiency on a scale of 0 to 1, where a value of 1 indicates that the decision making unit is, relatively speaking, the most efficient. A value less than 1 is indicative of inefficient operation of the decision making unit. The efficiency score varies in accordance with all inputs and outputs of all decision making units in the analysis.

Production Frontier Function: This statistical term identifies the minimum bundle of inputs required to produce given measurable outputs, or the maximum measurable outputs producible from a specified input bundle.

Heteroskedasticity: A statistical term indicating a lack of homogeneity among the decision making units; it occurs when the error term in the linear model is correlated with size-related characteristics of the observable.

Weights: These are the statistical unknowns in the primal model that determine the importance of each input or output. Since the value assigned to each weight depends on the measurement scale of the input or output itself, it is difficult to compare the weights of different inputs or outputs.

Dual Cost Frontier: It identifies the minimum expenditure needed to produce a given bundle of outputs.
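The DEA efficiency score defined above can be sketched as a small linear program. The following is a minimal, hypothetical illustration (the four DMUs and their input/output values are invented): it solves the input-oriented CCR envelopment model, minimizing θ subject to Σλⱼxⱼ ≤ θx_k and Σλⱼyⱼ ≥ y_k, once per unit.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows are DMUs, columns are inputs/outputs
X = np.array([[2.0], [4.0], [6.0], [8.0]])   # one input per DMU
Y = np.array([[2.0], [3.0], [5.0], [4.0]])   # one output per DMU

def ccr_input_efficiency(k):
    """Input-oriented CCR efficiency of DMU k via the envelopment LP."""
    n, m = X.shape                  # number of DMUs, number of inputs
    s = Y.shape[1]                  # number of outputs
    c = np.zeros(1 + n)             # decision variables: [theta, lambda_1..lambda_n]
    c[0] = 1.0                      # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):              # sum_j lambda_j * x_ji <= theta * x_ki
        row = np.zeros(1 + n)
        row[0] = -X[k, i]
        row[1:] = X[:, i]
        A_ub.append(row); b_ub.append(0.0)
    for r in range(s):              # sum_j lambda_j * y_jr >= y_kr
        row = np.zeros(1 + n)
        row[1:] = -Y[:, r]
        A_ub.append(row); b_ub.append(-Y[k, r])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun                  # optimal theta is the efficiency score

scores = [round(ccr_input_efficiency(k), 3) for k in range(4)]
print(scores)   # → [1.0, 0.75, 0.833, 0.5]
```

Only the first DMU, with the best output-to-input ratio, lies on the frontier (score 1); the others receive common-weight scores below 1, reflecting how far each could shrink its input while matching its output with a convex combination of peers.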
