Locally-Adaptive Naïve Bayes Framework Design via Density-Based Clustering for Large Scale Datasets


Faruk Bulut
DOI: 10.4018/978-1-7998-3299-7.ch016

Abstract

In this chapter, the local conditional probabilities around a query point are used in classification, rather than consulting a single generalized model built from global conditional probabilities. In the proposed locally adaptive naïve Bayes (LANB) learning style, a certain number of local instances close to the test point construct an adaptive probability estimate. In empirical studies over 53 benchmark UCI datasets, more accurate classification performance has been obtained: LANB yields a total 8.2% increase in classification accuracy compared to the conventional naïve Bayes model. In statistical paired t-test comparisons over all the UCI sets, the presented LANB method achieves 31 wins, 14 ties, and 8 losses.

Introduction

As a probability-based classifier, Naïve Bayes is widely known in the supervised learning area for its speed, simplicity, effectiveness, high accuracy rates, and white-box structure. On the other hand, it always builds a single general conditional probability model for the whole dataset. Creating one general hypothesis for a huge dataset is often inconvenient for overall performance, and this assumption might be violated in practical applications. Some sub-regions of the same dataset might need to be handled differently from others. Rather than a single generalized probability rule for the entire dataset, it is better to handle local, density-based regions separately.

In supervised learning, the Naive Bayes classifier (NB) is a member of the family of probabilistic classifiers based on Bayes' theorem. This classifier depends on strong (naive) independence assumptions between features. Naive Bayes is a simple and common technique for constructing a classifier. As an eager learner, the NB method is easy to implement, fast in prediction, and quick in constructing a generalized rule. It also performs well in multi-class prediction. With fewer training data points, the NB technique can in most cases outperform other models such as logistic regression and Multi-Layer Perceptrons (MLP). In other words, the NB classifier gives high bias and low variance, especially when the dataset is sparse. Additionally, NB gives more accurate predictions for categorical input variables than for real-valued ones; for numerical variables, a normal (bell-curve) distribution is assumed, which is a strong assumption. Furthermore, NB is accepted as quite robust to both irrelevant and noisy data.
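The counting-based training and prediction described above can be sketched as a tiny categorical NB in pure Python. The weather-style toy data and the function names are illustrative assumptions, not from the chapter:

```python
from collections import Counter, defaultdict

def train_nb(X, y):
    """Estimate class priors and per-feature value counts from the data."""
    priors = Counter(y)                 # class -> number of instances
    cond = defaultdict(Counter)         # (feature index, class) -> value counts
    for row, label in zip(X, y):
        for i, value in enumerate(row):
            cond[(i, label)][value] += 1
    return priors, cond, len(y)

def predict_nb(priors, cond, n, row):
    """Pick the class maximizing P(class) * prod_i P(x_i | class),
    assuming (naively) that features are independent given the class."""
    best, best_p = None, -1.0
    for label, count in priors.items():
        p = count / n                   # prior P(class)
        for i, value in enumerate(row):
            p *= cond[(i, label)][value] / count  # likelihood P(x_i | class)
        if p > best_p:
            best, best_p = label, p
    return best

# Toy usage on hypothetical weather data.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
priors, cond, n = train_nb(X, y)
print(predict_nb(priors, cond, n, ("sunny", "hot")))  # -> "no"
```

Note that an unseen feature value makes the product collapse to zero, which is exactly the zero-frequency weakness discussed next.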

However, the NB method has some weaknesses. If a value of a categorical variable is never observed in the training dataset, the system will assign it a 0 (zero) probability. In this circumstance, prediction becomes impossible due to the "zero frequency" problem: zero probabilities are generally painful for the NB classifier and form a barrier to predicting the classes of new records. The zero-frequency problem can be solved by smoothing methods such as Laplace (additive) and Lidstone smoothing (Kikuchi et al., 2015).
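A minimal sketch of how additive smoothing removes the zero: the function below is illustrative (its name and the `alpha` parameterization are assumptions), with `alpha = 1` giving Laplace smoothing and `0 < alpha < 1` giving Lidstone smoothing:

```python
def smoothed_prob(value_count, class_count, n_values, alpha=1.0):
    """Additive smoothing: pretend every possible feature value was seen
    alpha extra times, so an unseen value never yields probability zero.
    alpha = 1.0 -> Laplace smoothing; 0 < alpha < 1 -> Lidstone smoothing."""
    return (value_count + alpha) / (class_count + alpha * n_values)

# An unseen value (count 0) in a class of 10 instances, for a feature with
# 3 possible values, now gets probability 1/13 instead of 0.
print(smoothed_prob(0, 10, 3))  # -> 0.0769...
```

Because the same `alpha * n_values` mass is added to the denominator, the smoothed estimates for all values of a feature still sum to one.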

Another constraint of NB is the assumption of independent predictors. In real-life applications, it is nearly impossible to find a set of predictors that are completely independent of each other. Moreover, if some pairs of attributes in a dataset have strong positive or negative correlations, NB gives noticeably worse predictions (Par et al., 2019).

The main problem with the classical NB method is that it constructs a single, globalized hypothesis for the entire dataset. A global approximation is used throughout the classification phase, so the local regions and the location of the query point play no role. However, some parts of the same dataset can be statistically different from others in numerous aspects. A collection of points in a dataset might contain complex inner sub-structures, and such a dataset needs to be decomposed into less complex sub-regions. Datasets containing complex sub-regions are not always amenable to a unique global model; for such datasets, a combination of local models may be more appropriate.

When the dataset is too large to learn as a whole, it is better to divide it into local regions, following the divide-and-conquer approach. For this purpose, each sub-region that has its own individual, local specifications, different from the others, should be treated as a unique and discrete dataset.
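As a rough sketch of this locally adaptive idea, one can fit a small Gaussian NB only on the training points nearest the query, instead of on the whole dataset. Note the chapter builds its neighborhoods via density-based clustering; the fixed `k`, the Euclidean distance, and the function name below are illustrative assumptions:

```python
import math
from collections import Counter

def knn_local_nb(X, y, query, k=15):
    """Sketch of a locally adaptive Gaussian NB: fit the model on the k
    training points nearest the query instead of the whole dataset.
    (Illustrative only; the chapter forms neighborhoods via density-based
    clustering rather than a fixed k-nearest-neighbor rule.)"""
    # 1. Select the k nearest neighbors of the query point.
    order = sorted(range(len(X)), key=lambda i: math.dist(X[i], query))[:k]
    Xl = [X[i] for i in order]
    yl = [y[i] for i in order]
    # 2. Fit a tiny Gaussian NB on the local neighborhood only.
    best, best_score = None, float("-inf")
    for label, count in Counter(yl).items():
        rows = [x for x, lab in zip(Xl, yl) if lab == label]
        score = math.log(count / len(yl))        # local class prior
        for j in range(len(query)):
            vals = [r[j] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9
            # Gaussian log-likelihood of the query feature under this class.
            score += (-0.5 * math.log(2 * math.pi * var)
                      - (query[j] - mu) ** 2 / (2 * var))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy usage: two well-separated clusters; prediction uses only local points.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(knn_local_nb(X, y, (0.5, 0.5), k=4))  # -> 0
```

Each query thus gets its own probability estimates from its own sub-region, which is the divide-and-conquer benefit the paragraph above describes.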
