Selecting Indispensable Edge Patterns With Adaptive Sampling and Double Local Analysis for Data Description

Huina Li, Yuan Ping
Copyright © 2024 | Pages: 26
DOI: 10.4018/JCIT.335945

Abstract

Support vector data description (SVDD) offers inspiration for data analysis, adversarial training, and machine unlearning. However, collecting support vectors is computationally expensive, and the alternative boundary selection, with its $O(N^2)$ complexity, remains a challenge. The authors propose an indispensable edge pattern selection method (IEPS) for data description with direct SVDD model building. IEPS employs a double local analysis to select the global edge patterns. Since edge patterns form a subset of the target data of SVDD and its variants, neighbor analysis becomes pivotal. While an excessive number of participating data points results in redundant computation, an insufficient number may impede data separability or compromise the model's quality. Consequently, a data-adaptive sampling strategy is devised to determine an optimal ratio of retained data for edge pattern selection. Extensive experiments indicate that IEPS retains the indispensable edge patterns for data description while reducing interference in norm-vector generation, thereby guaranteeing effectiveness for clustering analysis.

Introduction

Inspired by the support vector classifier, support vector data description (SVDD) (Tax & Duin, 1999) characterizes a data set by finding a spherically shaped boundary around it. Through a model built to describe the target data set, it benefits a wide range of applications, such as image description (Aslani & Seipel, 2021), novelty discovery (Hu et al., 2023), adversarial training (C. Chen et al., 2023), and machine unlearning (M. Chen et al., 2023). However, to collect the support vectors (SVs) for data description, the conventional solution trains the model by solving a quadratic programming optimization problem, which has a computational complexity of $O(N^3)$, where $N$ is the number of data points. Such expensive computation may significantly degrade SVDD's applicability.
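For concreteness, the sketch below shows the standard SVDD dual problem that underlies this training cost, solved with a generic off-the-shelf optimizer. The kernel choice, parameter values, and function names are illustrative assumptions, not the implementation used in this paper.

```python
# Minimal sketch of the standard SVDD dual (Tax & Duin, 1999):
#   min_alpha  alpha' K alpha - alpha' diag(K)
#   s.t.       sum(alpha) = 1,  0 <= alpha_i <= C
# Support vectors are the points with alpha_i > 0.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def svdd_dual(X, C=0.1, gamma=1.0):
    X = np.asarray(X, dtype=float)
    N = X.shape[0]                      # note: C must satisfy C >= 1/N
    K = rbf_kernel(X, gamma)            # N x N kernel matrix
    diag = np.diag(K)

    fun = lambda a: a @ K @ a - a @ diag        # dual objective (minimized)
    jac = lambda a: 2.0 * K @ a - diag          # gradient (K is symmetric)

    cons = {"type": "eq", "fun": lambda a: np.sum(a) - 1.0}
    bounds = [(0.0, C)] * N
    a0 = np.full(N, 1.0 / N)

    res = minimize(fun, a0, jac=jac, bounds=bounds,
                   constraints=cons, method="SLSQP")
    alpha = res.x
    sv_idx = np.where(alpha > 1e-6)[0]          # indices of support vectors
    return alpha, sv_idx
```

Even this small sketch makes the scaling issue visible: the $N \times N$ kernel matrix and the iterative constrained optimization over all $N$ coefficients are exactly what becomes prohibitive as $N$ grows.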

Let $\mathcal{X} = \{x_i\}_{i=1}^{N}$ be a data set with $N$ data points, where $x_i \in \mathbb{R}^d$ in the data space. The expensive model training is generally caused by solving a quadratic programming problem through iterative analysis of an $N \times N$ kernel matrix. Furthermore, the number of iterations is usually large and uncertain, and the large size of the final coefficient vector $\boldsymbol{\alpha}$ further exacerbates the practical time cost. An efficient solver for the quadratic programming problem, such as the dual coordinate descent solver (Y. Ping et al., 2017), is the primary avenue of improvement. However, the resulting computational complexity, which falls between $O(N^2)$ and $O(N^3)$ depending on the specific case, is still expensive (Arslan et al., 2022). Another intuitive improvement is to select the most representative subset of $\mathcal{X}$. However, few works in the literature focus on the subset's representativeness or purity, which is strongly related to SVDD. They frequently select a subset on the basis of random sampling, data-geometry analysis, or neighborhood relationships. For instance, Kim et al. (2015) define a sample rate $r$ to regulate the $rN$ randomly selected data points used during model training, while Jung et al. (2010) and Gornitz et al. (2018) leverage data-geometry information by incorporating k-means to partition $\mathcal{X}$ into $K$ subsets for local model training and subsequent global merging. However, the data points employed for model training, whether obtained through random sampling or cluster-based geometry analysis, may not accurately capture the true distribution of $\mathcal{X}$. Random sampling alters the densities of the retained data groups, which significantly impacts data separability. The circle-like pattern hypothesis employed when collecting subsets for local model training may exacerbate the adverse effects of irregular cluster shapes. Despite achieving substantial efficiency improvements, these methods often yield highly unstable accuracies. As a superset of the support vectors (SVs) (Y. Ping et al., 2015), the boundary generally makes an equivalent contribution to the construction of demarcation hyperplanes (Chen et al., 2023). On the basis of neighborhood relationships, Aslani and Seipel (2021) introduce locality-sensitive hashing (LSH) to gather instances near decision boundaries and eliminate nonessential ones. However, it retains many inner points and may therefore be more suitable for constructing a classifier for multi-class problems than for describing clusters with arbitrary shapes. Furthermore, Y. Ping et al. (2015) and Y. Ping et al. (2019) utilize the boundary to directly reformulate the dual problem. Despite achieving stable performance, boundary selection becomes computationally expensive when $N$ is large.
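To make the neighborhood-based boundary idea concrete, the following is a minimal sketch of a generic k-nearest-neighbor edge-pattern heuristic in the spirit of the boundary-selection methods cited above. It is not the IEPS double local analysis; the scoring rule, threshold, and names are assumptions for illustration, and scoring every point this way is quadratic in $N$ in the worst case, which mirrors the cost such selection seeks to reduce.

```python
# Hedged sketch of a generic k-NN edge-pattern heuristic (not the paper's IEPS).
# Intuition: an interior point sees neighbors in all directions, so the mean of
# the unit direction vectors toward its neighbors is close to zero; an edge
# point sees neighbors mostly on one side, so the mean direction has large norm.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def edge_pattern_scores(X, k=10):
    X = np.asarray(X, dtype=float)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # idx[:, 0] is the point itself
    scores = np.empty(X.shape[0])
    for i, neighbors in enumerate(idx[:, 1:]):
        d = X[neighbors] - X[i]               # vectors toward the k neighbors
        d /= (np.linalg.norm(d, axis=1, keepdims=True) + 1e-12)
        scores[i] = np.linalg.norm(d.mean(axis=0))
    return scores

def select_edge_patterns(X, k=10, threshold=0.5):
    # threshold in (0, 1) is an illustrative assumption; higher values keep
    # fewer, more clearly boundary-like points.
    return np.where(edge_pattern_scores(X, k) > threshold)[0]
```

A subsequent data description model would then be trained only on the selected indices, which is precisely where the trade-off discussed above appears: keeping too many points wastes computation, while keeping too few can harm separability and model quality.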
