Introduction
Inspired by the support vector classifier, support vector data description (SVDD) (Tax & Duin, 1999) characterizes a data set by obtaining a spherically shaped boundary around it. Through a model built to describe the target data set, it benefits a wide range of applications, such as image description (Aslani & Seipel, 2021), novelty discovery (Hu et al., 2023), adversarial training (C. Chen et al., 2023), and machine unlearning (M. Chen et al., 2023). However, to collect the support vectors (SVs) for data description, the conventional solution trains the model by solving a quadratic programming optimization problem, which incurs a computational complexity of O(N³), where N is the number of data points. Such expensive computations can significantly degrade SVDD's applicability.
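To make this cost concrete, the following is a minimal sketch (not the paper's implementation) of the SVDD dual problem solved with an off-the-shelf QP solver; the helper name svdd_dual and the parameters C and gamma are illustrative. The dual maximizes Σᵢ αᵢK(xᵢ, xᵢ) − Σᵢⱼ αᵢαⱼK(xᵢ, xⱼ) subject to 0 ≤ αᵢ ≤ C and Σᵢ αᵢ = 1, and the points with nonzero αᵢ are the SVs.

```python
import numpy as np
import cvxpy as cp

def svdd_dual(X, C=0.1, gamma=1.0):
    """Illustrative SVDD training: solve the dual QP over an RBF kernel matrix.

    Building K costs O(N^2) memory, and a generic QP solve scales up to O(N^3),
    which is the bottleneck discussed above.
    """
    # RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    n = X.shape[0]
    a = cp.Variable(n)
    # Dual objective: sum_i a_i K_ii - a^T K a (concave, since K is PSD).
    objective = cp.Maximize(np.diag(K) @ a - cp.quad_form(a, cp.psd_wrap(K)))
    constraints = [a >= 0, a <= C, cp.sum(a) == 1]
    cp.Problem(objective, constraints).solve()

    alpha = a.value
    sv_mask = alpha > 1e-6  # support vectors carry nonzero coefficients
    return alpha, sv_mask

# Usage: extract the SVs of a small synthetic set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
alpha, sv_mask = svdd_dual(X)
print(f"{sv_mask.sum()} support vectors out of {len(X)} points")
```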
Let X = {x₁, x₂, …, x_N} be a data set with N data points, where each xᵢ lies in the data space. The expensive model training is generally caused by solving a quadratic programming problem through iterative analysis of a kernel matrix. Furthermore, the number of iterations is usually large and uncertain, and a large coefficient vector (of length N) further exacerbates the practical time cost. An efficient solver for the quadratic programming problem, such as the dual coordinate descent solver (Y. Ping et al., 2017), is the predominant direction for improvement. However, the computational complexity, which falls between O(N²) and O(N³) depending on the specific case, is still expensive (Arslan et al., 2022).

Another intuitive way of improvement is to select the most representative subset of X. However, few works in the literature focus on the subset's representativeness or purity, both of which are strongly related to SVDD. Existing methods frequently select a subset on the basis of random sampling, data geometry analysis, or neighborhood relationships. For instance, Kim et al. (2015) define a sample rate to regulate the randomly selected data points used during model training, while Jung et al. (2010) and Gornitz et al. (2018) leverage data geometry information by incorporating k-means to partition X into K subsets for local model training and subsequent global merging. However, the data points employed for model training, whether obtained through random sampling or cluster-based geometry analysis, may not accurately capture the true distribution of X. Random sampling changes the densities of all the retained data groups, which significantly impacts data separability. The circle-like pattern hypothesis employed when collecting subsets for local model training may exacerbate the adverse effects of irregular cluster shapes. Despite achieving substantial efficiency improvements, these methods often yield highly unstable accuracies.

As a superset of the support vectors (SVs) (Y. Ping et al., 2015), the boundary generally makes an equivalent contribution to the construction of demarcation hyperplanes (Chen et al., 2023). On the basis of neighborhood relationships, Aslani and Seipel (2021) introduce locality-sensitive hashing (LSH) to gather instances near decision boundaries and eliminate nonessential ones. However, this approach retains many inner points, which may be more suitable for constructing a classifier for multi-class problems than for describing clusters with arbitrary shapes. Furthermore, Y. Ping et al. (2015) and Y. Ping et al. (2019) utilize the boundary to directly reformulate the dual problem. Despite achieving stable performance, the boundary selection becomes computationally expensive for large values of N.
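As a concrete illustration of the cluster-then-merge strategy discussed above (a sketch of the general scheme, not a specific published algorithm), the snippet below partitions X with k-means, trains a local model per partition using the svdd_dual helper from the previous sketch, and retrains a global model on the merged local SVs; the cluster count and parameters are placeholders.

```python
from sklearn.cluster import KMeans

def partitioned_svdd(X, n_clusters=5, C=0.1, gamma=1.0):
    """Divide-and-conquer sketch: local SVDD per cluster, global pass on merged SVs."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    merged = []
    for k in range(n_clusters):
        part = X[labels == k]
        _, sv_mask = svdd_dual(part, C=C, gamma=gamma)  # local model
        merged.append(part[sv_mask])                    # keep only the local SVs
    X_sv = np.vstack(merged)
    # Global model on the (much smaller) merged SV set: far cheaper than training
    # on all of X, but the spherical-cluster assumption of k-means can misplace
    # the partitions when the true clusters have irregular shapes.
    alpha, sv_mask = svdd_dual(X_sv, C=C, gamma=gamma)
    return X_sv[sv_mask]
```

This reproduces both the efficiency gain and the failure mode described above: the training cost drops roughly from O(N³) to K·O((N/K)³), but the accuracy hinges on how well the circle-like partitions match the true cluster shapes.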