New Entropy Based Distance for Training Set Selection in Debt Portfolio Valuation

Tomasz Kajdanowicz, Slawomir Plamowski, Przemyslaw Kazienko
DOI: 10.4018/jitwe.2012040105
Abstract

Choosing a proper training set for machine learning tasks is of great importance in complex domain problems. In this paper, a new distance measure for training set selection is presented and thoroughly discussed. The distance between two datasets is computed from the variance of entropy in groups obtained after clustering. The approach is validated on real domain datasets from the debt portfolio valuation process, and prediction performance is examined.

Introduction

The supervised learning task is based on the assumption that one is able to provide proper training data, which is then used for the generalization and inference process. The quality of the data directly affects the performance of prediction and classification algorithms. Training data consist of a set of training examples, each composed of a pair: input features X and a desired output value Y.

The main aim is to analyze training data and produce an inference function Φ that maps input to output, Φ: X → Y. The output can be twofold: if it is discrete, the function Φ is called a classifier; if it is continuous, Φ is called a regression. If Φ maps to an interrelated set of more than one value, the method is a structured prediction (structured output learning) algorithm. In all cases, the inferred function Φ should predict the correct output value for any valid input object, which requires the learning algorithm to generalize from the training data. Consequently, the quality of the data is of great importance, and the training set should be selected carefully.

The most straightforward situation arises when learning concerns data from a particular domain that always describes the same stationary object. In that case, the properties of the data and the statistical dependencies between examples remain unchanged, and training and testing data may come from the same source. Such data, as long as they are of appropriate size, can deliver satisfactory generalization. To generalize from data describing non-stationary objects, learning algorithms are expected to model the concept drift phenomenon (Kurlej & Woźniak, 2011), identified by changes in the probability distributions of the data. As concept drift may be caused by changes in the prior, conditional, or posterior probabilities of the data, appropriate methods must be incorporated to address the problem.
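The classifier/regression distinction can be made concrete with a minimal 1-nearest-neighbour inference function Φ. This is purely an illustrative choice of learner, not one prescribed by the paper: the same construction acts as a classifier when Y is discrete and as a regression when Y is continuous.

```python
import numpy as np

def fit_phi(X_train, Y_train):
    """Return a 1-nearest-neighbour inference function phi: X -> Y.
    Illustration only: the paper does not prescribe a particular learner."""
    X_train = np.asarray(X_train, dtype=float)
    Y_train = np.asarray(Y_train)

    def phi(x):
        # Predict the output of the closest training example.
        d = ((X_train - np.asarray(x, dtype=float)) ** 2).sum(axis=1)
        return Y_train[np.argmin(d)]

    return phi

# Discrete output Y: phi acts as a classifier.
clf = fit_phi([[0.0], [1.0], [5.0]], ["low", "low", "high"])
# Continuous output Y: the very same construction acts as a regression.
reg = fit_phi([[0.0], [1.0], [5.0]], [0.1, 0.9, 4.8])
```

The point of the sketch is that the learner's interface is identical in both cases; only the type of the output values changes.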

Another situation occurs when generalization must be performed for objects for which training data is unavailable or hardly accessible. In such a case, learning is performed using data describing other, similar objects. Examples include across-network classification, where a model learned on one network is adjusted for generalization on another network (Lu & Getoor, 2003), and debt portfolio value prediction, where the appraisal of a particular portfolio is computed using other, similar portfolios (Kajdanowicz & Kazienko, 2009).

The paper considers the latter problem: training set selection for the prediction of future debt recovery. Intuitively, the greater the similarity (the smaller the distance) between the objects used in learning and those the inference is applied to, the better the performance of inference methods. Measuring similarity between training and testing objects can be reduced to measuring the distance between the datasets describing their input features, namely between Xtrain and Xtest. Similarity and distance can be used interchangeably, as similarity can be measured by distance: two objects are similar if the distance between them is close to zero, and a growing distance indicates increasing dissimilarity.

In general, distance is defined as a quantitative degree of how far apart two objects are (Cha, 2007). The choice of distance measure depends on the representation of the objects and the type of measurement. In supervised learning tasks, datasets are usually represented by matrices in which columns denote attributes and rows denote object instances; a single cell of such a matrix contains the value of a particular attribute for a given instance. Hence, training set selection based on measuring the distance between two datasets Xtrain and Xtest is, in effect, a matrix-distance-based selection.
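The abstract characterizes the proposed measure as the variance of entropy in groups obtained after clustering. The sketch below illustrates one plausible reading of that idea; the choice of k-means, the number of groups, the histogram-based entropy estimate, and the comparison of the two variances by absolute difference are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (assumption: the paper does not fix a clustering method here)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def group_entropy(X, bins=10):
    """Shannon entropy of a group, estimated from per-attribute histograms."""
    h = 0.0
    for col in X.T:
        p, _ = np.histogram(col, bins=bins)
        p = p[p > 0] / p.sum()
        h += -(p * np.log2(p)).sum()
    return h

def entropy_variance_distance(X_train, X_test, k=3):
    """Distance between two datasets as the absolute difference of the
    variances of their within-group entropies (one plausible reading)."""
    def var_of_entropies(X):
        X = np.asarray(X, dtype=float)
        labels = kmeans(X, k)
        ents = [group_entropy(X[labels == j]) for j in range(k) if np.any(labels == j)]
        return np.var(ents)
    return abs(var_of_entropies(X_train) - var_of_entropies(X_test))
```

By construction the measure is non-negative and zero for a dataset compared with itself, which matches the requirement that similar objects have a distance close to zero.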

Altogether, a training dataset can be obtained through one of the following scenarios (Figure 1):

Figure 1. Training set construction for supervised learning using the closest-example selection and closest-dataset selection approaches
