An Outlier Detection Algorithm Based on Probability Density Clustering

Wei Wang, Yongjian Ren, Renjie Zhou, Jilin Zhang
Copyright: © 2023 |Pages: 20
DOI: 10.4018/IJDWM.333901

Abstract

Outlier detection for batch and streaming data is an important branch of data mining, but existing algorithms have shortcomings. For batch data, the outlier detection algorithm that labels only a few data points is not accurate enough because it uses a histogram strategy to generate feature vectors. For streaming data, outlier detection algorithms are sensitive to data distance, resulting in low accuracy when sparse clusters and dense clusters lie close to each other; moreover, they require parameter tuning, which takes a lot of time. To address these issues, the authors propose a new outlier detection algorithm, called PDC, which uses probability density to generate feature vectors that train a lightweight machine learning model, which is then applied to detect outliers. PDC takes advantage of the accuracy and distance-insensitivity of probability density, so it can overcome the aforementioned drawbacks.

Introduction

Outliers refer to “observations (or subsets of observations) which do not conform to the rest of this group of data” (Barnett & Lewis, 1994, p. X). Outlier detection can be applied in various fields, including network intrusion, industrial sensor failure, and financial fraud. This detection method can be applied to different data forms like batch data and streaming data.

Regarding batch data, the proportions for training, testing, and validation sets are approximately 60%, 20%, and 20%, respectively (Blog, 2022; Zhou, 2016). Thus, most outlier detection algorithms designed for batch data, such as DBSCAN (Knorr & Ng, 1998), LOF (Breunig et al., 2000), OCSVM (Schölkopf et al., 2001), CBLOF (He et al., 2003), FDPC-OF (Zhang et al., 2023), NANOD (Wahid & Annavarapu, 2021), MOD (Yang et al., 2021), DAGMM (Tra et al., 2022), and DIFFI (Carletti et al., 2023), often require a large amount of data for labeling or parameter adjustment. This makes them impractical for real-world implementation.

Raha (Mahdavi et al., 2019) presented an outlier detection algorithm (termed “Raha[OD]”) that overcomes this drawback by labeling only a few data points, for instance, approximately 20, regardless of the total number of data points in the batch. Raha[OD] employs the histogram method to generate a feature vector for each data point, then trains the model and detects outliers. However, the limitation of the histogram method lies in its ability to represent the data distribution only at a coarse granularity, which leads to low accuracy for Raha[OD].
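To illustrate the coarse granularity mentioned above, the following sketch (not the authors' code; the bin count and data are invented for the example) derives a per-point feature from histogram bin frequencies. Every point in the same bin receives the same feature value, which is precisely the limitation the text points out:

```python
import numpy as np

# Invented example: 100 inliers from a standard normal plus one obvious outlier.
rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(0.0, 1.0, 100), [10.0]])

# Bin the values and use each point's normalized bin frequency as its feature.
counts, edges = np.histogram(values, bins=10)
bin_idx = np.clip(np.digitize(values, edges[1:-1]), 0, len(counts) - 1)
feature = counts[bin_idx] / len(values)

# The outlier falls in a near-empty bin, so its frequency feature is tiny,
# while all points sharing a crowded bin get one identical, coarse value.
print(feature[-1], feature.max())
```

The coarseness is visible in that the feature cannot distinguish two points inside the same bin, no matter how differently they sit relative to their neighbors.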

For streaming data, the most important methods are distance-based and density-based, which apply to homogeneous and non-homogeneous data, respectively. For non-homogeneous data, the main methods are based on the LOF algorithm. However, due to the shortcomings of LOF, particularly its sensitivity to data distance, the accuracy of these approaches declines as dense and sparse clusters converge. In addition, both distance-based and density-based methods require parameter tuning, a time-consuming and tedious process.
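The distance-sensitivity problem can be sketched with a small invented example (not from the paper): when a tight cluster and a loose cluster coexist, a single global nearest-neighbor-distance threshold cannot separate sparse-cluster inliers from an outlier sitting just outside the dense cluster:

```python
import numpy as np

# Invented data: a tight cluster, a loose cluster, and one outlier near the
# tight cluster's edge.
rng = np.random.default_rng(2)
dense = rng.normal(0.0, 0.1, size=(100, 2))
sparse = rng.normal(5.0, 1.5, size=(30, 2))
outlier = np.array([[0.5, 0.5]])
X = np.vstack([dense, sparse, outlier])

# Distance from each point to its nearest neighbor.
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)

# The outlier's nearest-neighbor distance exceeds every dense-cluster value
# but is smaller than the largest sparse-cluster value, so any global
# threshold either misses the outlier or flags sparse-cluster inliers.
print(nn[-1], nn[:100].max(), nn[100:130].max())
```

This is the regime the text describes: as the dense and sparse clusters move closer together, a purely distance-based criterion degrades.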

To address the aforementioned shortcomings, the authors propose the PDC algorithm, which is based on probability density clustering. Their specific approach involves normalizing probability density by calculating the ratio of average probability density to probability density, generating feature vectors that are used to train a lightweight machine learning model for detecting outliers. Thanks to the accuracy, stability, and distance-insensitivity of probability density, PDC can improve detection accuracy on both batch and streaming data.
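The ratio described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: it uses a kernel density estimate as the probability density and computes the ratio of the average density to each point's density, so points in sparse regions receive large feature values:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Invented 1-D data: 200 inliers from a standard normal plus two far points.
rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=200)
outliers = np.array([8.0, 9.5])
data = np.concatenate([inliers, outliers])

kde = gaussian_kde(data)          # nonparametric density estimate
density = kde(data)               # estimated p(x_i) for each point
ratio = density.mean() / density  # average density over per-point density

# Outliers sit in low-density regions, so their ratio is much larger than
# that of any inlier; the ratio can serve directly as a per-point feature.
print(ratio[-2:].min(), ratio[:-2].max())
```

The appeal of this kind of feature, as the text argues, is that it depends on local density rather than raw distances, so it behaves consistently across clusters of different spread.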

In batch data, PDC only labels a few data points, approximately 20, while achieving high accuracy. In streaming data, PDC achieves high accuracy even when dense and sparse clusters converge. In the real world, it is common for these two cluster types to be interconnected. This leads to complex data distributions, such as those found in industrial sensor data or medical device data. In addition, PDC uses lightweight machine learning, which offers two benefits. First, it can save significant time without the need for tuning parameters. Second, compared with deep learning, it proves efficient enough to be applied to real-time outlier detection of streaming data.

Existing algorithms like Miline (Yamanishi et al., 2001), Takeuchi (Yamanishi & Takeuchi, 2002), Anyout (Assent et al., 2012), DAGMM (Tra et al., 2022), GC-ADS (Zou et al., 2023), and the algorithm put forward by Chenaghlou et al. (2017) do not use the specific method of calculating the ratio of average probability density to probability density when employing a probabilistic approach. To the authors’ knowledge, this study is the first to propose this calculation method. The authors conduct comprehensive experiments using real-world datasets to demonstrate the effect of PDC. This study has the following contributions:
