Introduction
Outliers refer to “observations (or subsets of observations) which do not conform to the rest of this group of data” (Barnett & Lewis, 1994, p. X). Outlier detection is applied in many fields, including network intrusion detection, industrial sensor failure, and financial fraud, and it operates on different data forms, chiefly batch data and streaming data.
For batch data, the proportions of the training, testing, and validation sets are typically around 60%, 20%, and 20%, respectively (Blog, 2022; Zhou, 2016). Accordingly, most outlier detection algorithms designed for batch data, such as DBSCAN (Knorr & Ng, 1998), LOF (Breunig et al., 2000), OCSVM (Schölkopf et al., 2001), CBLOF (He et al., 2003), FDPC-OF (Zhang et al., 2023), NANOD (Wahid & Annavarapu, 2021), MOD (Yang et al., 2021), DAGMM (Tra et al., 2022), and DIFFI (Carletti et al., 2023), require a large amount of labeled data or extensive parameter adjustment, which makes them impractical to deploy in real-world settings.
Raha (Mahdavi et al., 2019) presented an outlier detection algorithm (termed “Raha[OD]”) that overcomes this drawback by labeling only a few data points, approximately 20, regardless of the total number of data points in the batch. Raha[OD] employs the histogram method to generate a feature vector for each data point, then trains a model and detects outliers. However, the histogram method can represent the data distribution only at a coarse granularity, which limits the accuracy of Raha[OD].
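As a rough illustration of such histogram features (the bin count, normalization, and one-dimensional setting here are assumptions for exposition, not Raha[OD]'s exact implementation), each value can be mapped to the relative frequency of its histogram bin, a coarse proxy for local density:

```python
import numpy as np

def histogram_features(x, bins=10):
    """Map each value to the relative frequency of its histogram bin.

    A coarse density proxy in the spirit of histogram-based features;
    bin count and normalization are illustrative assumptions.
    """
    x = np.asarray(x, dtype=float)
    counts, edges = np.histogram(x, bins=bins)
    freqs = counts / counts.sum()
    # np.digitize returns 1-based bin indices; the clip keeps the
    # right-edge value inside the last bin.
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    return freqs[idx]

values = [1.0, 1.1, 1.2, 1.05, 9.0]   # 9.0 is an obvious outlier
feats = histogram_features(values, bins=5)
print(feats)  # the outlier lands alone in a sparse bin: lowest frequency
```

With only a handful of wide bins, all four clustered points share one frequency value, illustrating why this coarse granularity can blur finer density differences.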
For streaming data, the predominant methods are distance-based and density-based, which apply to homogeneous and non-homogeneous data, respectively. For non-homogeneous data, the main methods build on the LOF algorithm. However, owing to LOF's shortcomings, particularly its sensitivity to data distance, the accuracy of these approaches declines when dense and sparse clusters converge. In addition, both distance-based and density-based methods require parameter tuning, a time-consuming and laborious process.
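The distance-based side of this trade-off can be sketched with a k-nearest-neighbor distance score (the one-dimensional dataset and the choice of k below are illustrative, not drawn from the paper):

```python
import numpy as np

def knn_distance_score(x, k=2):
    """Distance to the k-th nearest neighbor: a classic
    distance-based outlier score. k is exactly the kind of
    parameter these methods must tune."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    d.sort(axis=1)          # column 0 is each point's self-distance (0)
    return d[:, k]

# A dense cluster near 1.0, a sparse cluster near 10-14, one outlier.
pts = [1.0, 1.1, 1.2, 10.0, 12.0, 14.0, 30.0]
print(knn_distance_score(pts, k=2))
```

Any global threshold tight enough to match the dense cluster's scores (around 0.2) would flag the entire sparse cluster (scores of 2 to 4), while the true outlier scores far higher still; this is the ambiguity that density-based methods such as LOF attempt to resolve, at the cost of their own sensitivity.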
To address the aforementioned shortcomings, the authors propose the PDC algorithm, which is based on probability density clustering. Specifically, PDC normalizes probability density by computing the ratio of the average probability density to each point's probability density; these ratios form feature vectors used to train a lightweight machine learning model that detects outliers. Because probability density is accurate, stable, and insensitive to data distance, PDC improves detection accuracy on both batch and streaming data.
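A minimal sketch of this density-ratio feature, assuming a Gaussian kernel density estimate in one dimension (the authors' exact density estimator, bandwidth, and downstream model are not specified here, so these are illustrative choices):

```python
import numpy as np

def density_ratio_feature(x, bandwidth=0.5):
    """Ratio of the mean probability density to each point's density.

    A sketch of the normalization described for PDC, using a Gaussian
    kernel density estimate; the real algorithm's estimator may differ.
    """
    x = np.asarray(x, dtype=float)
    # Pairwise Gaussian kernel contributions (1-D for simplicity).
    diffs = x[:, None] - x[None, :]
    dens = np.exp(-0.5 * (diffs / bandwidth) ** 2).mean(axis=1)
    dens /= bandwidth * np.sqrt(2.0 * np.pi)
    # Outliers sit in low-density regions, so their ratio is large.
    return dens.mean() / dens

points = [1.0, 1.2, 0.9, 1.1, 8.0]
ratios = density_ratio_feature(points)
print(int(np.argmax(ratios)))  # 4: the isolated point has the largest ratio
```

In PDC these ratios would serve as features for the lightweight model; here the largest ratio alone already singles out the isolated point.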
On batch data, PDC labels only a few data points, approximately 20, while achieving high accuracy. On streaming data, PDC remains accurate even when dense and sparse clusters converge. In the real world these two cluster types are often interconnected, producing complex data distributions such as those found in industrial sensor data or medical device data. In addition, PDC uses lightweight machine learning, which offers two benefits. First, it saves significant time because no parameter tuning is required. Second, compared with deep learning, it is efficient enough to support real-time outlier detection on streaming data.
Existing algorithms such as Miline (Yamanishi et al., 2001), Takeuchi (Yamanishi & Takeuchi, 2002), Anyout (Assent et al., 2012), DAGMM (Tra et al., 2022), GC-ADS (Zou et al., 2023), and the algorithm put forward by Chenaghlou et al. (2017) do not calculate the ratio of average probability density to probability density when employing a probabilistic approach. To the authors' knowledge, this study is the first to propose this calculation method. The authors conduct comprehensive experiments on real-world datasets to demonstrate the effectiveness of PDC. This study makes the following contributions: