Introduction
In every software development model, software quality assurance focuses on delivering a software product or service with the level of quality required by the end consumer. If, under a given environment setting, a software product does not meet the prescribed requirement specifications, the deviation from the expected result is termed a software defect. The testing phase of the Software Development Life Cycle (SDLC) is the most significant phase, since it accounts for a large portion of the overall cost of a project. It is therefore important to manage this phase carefully in every software development process. A natural question arises in this situation: “How can the expense of the testing stage be reduced so as to minimize the overall cost of the project?”. Software Defect Prediction (SDP) addresses this problem at the right time by directing testing effort toward the modules most likely to be defective.
Initially, a Defect Prediction (DP) model is designed to identify “within-project” defects by splitting the available defect dataset into two chunks: the DP model is trained on one chunk (referred to as the tagged cases), and the other chunk is used to evaluate the built DP model. Testing the DP model involves assigning faulty or non-faulty tags to the untagged instances in a target software dataset (Ambros et al., 2012). Cross-Project Defect Prediction (CPDP) is an area of study in which a software project lacking sufficient local defect data can use data from other projects to create an effective and efficient defect predictor. Clearly, cross-project information must be processed before it is applied locally to facilitate CPDP (Han et al., 2011). CPDP collects common metrics from both the source application (whose defect data is used to train the DP model) and the target application (for which defects are predicted) (He et al., 2014). However, this raises a question about the validity of a DP model built on a standard metric set, as the uniform metric collection may omit comprehensive metrics required to produce a strong DP model (Menzies et al., 2015). In Heterogeneous Cross-Project Defect Prediction (HCPDP or HDP), by contrast, there is no requirement of uniform metrics between the source and target datasets. Matched metrics can be found between two applications by calculating the coefficient of correlation between all possible metric combinations. The heterogeneous metrics showing a comparable distribution in their values are then used to predict defects across projects. The conceptual difference between homogeneous CPDP and heterogeneous CPDP is shown in Figure 1.
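The metric-matching step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes the source and target metric vectors have already been brought to a common length (e.g., by sorting each metric's values and truncating to the shorter length), and the 0.8 cutoff is an illustrative threshold, not one taken from this study.

```python
import numpy as np

def matched_metric_pairs(source, target, threshold=0.8):
    """Find matched metric pairs between heterogeneous datasets.

    source, target: 2-D arrays of shape (n_values, n_metrics), with
    columns already aligned to a common length.
    Returns (i, j, r) triples where the Pearson correlation r between
    source metric i and target metric j satisfies |r| >= threshold.
    """
    pairs = []
    for i in range(source.shape[1]):
        for j in range(target.shape[1]):
            # np.corrcoef returns a 2x2 matrix; [0, 1] is the
            # correlation between the two columns.
            r = np.corrcoef(source[:, i], target[:, j])[0, 1]
            if abs(r) >= threshold:
                pairs.append((i, j, float(r)))
    return pairs
```

If no pair clears the threshold for a given source/target combination, the matched-metric set is empty, which is exactly the situation that leads to the partial-coverage problem discussed below.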
Figure 1. Types of Cross Project Defect Prediction
The main objective of this paper is to examine the target Defect Prediction Coverage (DPC) problem for a particular prediction pair of project groups. In addition, the research study presents the problem of Partial DPC (PDPC), i.e., less than 100 percent DPC achievable by a source project group for a particular target project group, and a recovery strategy for it is also proposed here. A specific software project group G1 with m datasets is said to have target prediction coverage by another source project group G2 with n datasets iff an effective HDP model can be built for each dataset in G1 using at least one dataset of G2. In other words, if dataset Si in the source project group G2 can feasibly predict defects in dataset Tj of the target project group G1, then (Si, Tj) is called a defect coverable pair. If defect coverable pairs can be found for every dataset Tj (1 <= j <= m) in G1 using the datasets of G2, then the 100% DPC goal is achieved for G1 by G2; otherwise, G2 has partial, i.e., less than 100%, prediction coverage for G1. This issue occurs when the set of correlated metrics between a specific combination of source and target datasets is empty. The motivation behind the research study is to identify the feasible source project groups that can be used to develop an HCPDP model for a target project having limited past defect data and features heterogeneous to all source project groups, and then to find the best project group among all feasible source projects for predicting defects in the target application. The source application with the highest number of metrics matched with the target application is classified as the best source project for developing the HDP model for that target project.
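The coverage definitions above can be made concrete with a short sketch. The dataset names and counts here are hypothetical: given a list of defect coverable (Si, Tj) pairs, it reports which target datasets in G1 are covered, computes the resulting DPC percentage, and selects the best source by matched-metric count.

```python
def prediction_coverage(coverable_pairs, targets):
    """Return the covered targets and the DPC percentage.

    coverable_pairs: iterable of (source, target) defect coverable pairs.
    targets: all dataset names in the target project group G1.
    """
    covered = {t for (_, t) in coverable_pairs}
    pct = 100.0 * sum(t in covered for t in targets) / len(targets)
    return covered, pct

def best_source(match_counts):
    """Pick the source with the most metrics matched to the target.

    match_counts: mapping from source dataset name to its number of
    matched metrics with the target dataset.
    """
    return max(match_counts, key=match_counts.get)
```

For example, with targets T1, T2, T3 and coverable pairs (S1, T1) and (S1, T3), only two of three targets are covered, so G2 achieves partial (about 66.7%) rather than 100% DPC for G1.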