Introduction
The goal of clustering is to group a collection of objects so that similarity within a cluster is maximized and similarity between clusters is minimized (Ezugwu et al., 2022; Xu & Wunsch, 2005). It is a popular machine learning technique used in various fields, such as image processing, text mining, social science, and big data analysis, to mention just a few (Ezugwu et al., 2022; Krishnaswamy et al., 2023; Saha & Mukherjee, 2021; Chen et al., 2022; Liang & Chan, 2021; Hoi et al., 2022). Clustering can reveal the underlying structure of data, identify relationships between objects, and even detect outliers. Since it is an unsupervised learning task, clustering does not rely on prior knowledge of the data. Nevertheless, recent advances in machine learning have given rise to semisupervised clustering as a promising research area. Semisupervised clustering algorithms can leverage side information, such as labeled data or constraints, to enhance clustering quality and efficiency (Basu et al., 2008).
According to Jonschkowski et al. (2015), side information refers to additional data that are not part of the input or output space but can be helpful in the learning process. Side information is also exploited in other machine learning models, including support vector machines, multiview learning, and deep learning (Jonschkowski et al., 2015; Geoffrey et al., 2011). Generally speaking, side information can be expressed as constraints or as labeled data, also known as seeds. In this paper, the following pairwise constraints are used to guide the clustering process for a given data set X = {x1, x2, …, xn}:
Must-Link: A Must-Link constraint between two data points xi and xj indicates that they must be placed in the same cluster.
Cannot-Link: A Cannot-Link constraint between two data points xi and xj indicates that they must be placed in separate clusters.
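As a minimal sketch of how these two constraint types can be represented in practice, the following Python function (the function name and list-of-pairs encoding are illustrative, not part of any specific algorithm in this paper) checks whether a candidate cluster assignment respects a given set of Must-Link and Cannot-Link constraints:

```python
def satisfies_constraints(labels, must_link, cannot_link):
    """Check a cluster assignment against pairwise constraints.

    labels[i] is the cluster index of data point x_i; must_link and
    cannot_link are lists of index pairs (i, j).
    """
    # Must-Link: x_i and x_j must be placed in the same cluster.
    for i, j in must_link:
        if labels[i] != labels[j]:
            return False
    # Cannot-Link: x_i and x_j must be placed in separate clusters.
    for i, j in cannot_link:
        if labels[i] == labels[j]:
            return False
    return True


# Example: points 0 and 1 must share a cluster; points 1 and 2 must not.
labels = [0, 0, 1, 1]
print(satisfies_constraints(labels, [(0, 1)], [(1, 2)]))  # True
print(satisfies_constraints(labels, [(1, 2)], []))        # False
```

Such a check is useful, for instance, for validating the output of a semisupervised clustering algorithm against the supplied side information.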
Figure 1 presents a graphical illustration of the various types of side information that can be incorporated for data classification. Over the last 20 years, several semisupervised clustering techniques have been developed in the literature. Typically, these methods are derived from unsupervised algorithms and aim to incorporate side information to improve clustering performance. Significant techniques in this category include semisupervised K-means (Pelleg & Baras, 2007; Basu et al., 2002, 2004; Bilenko et al., 2004; Davidson & Ravi, 2005), semisupervised fuzzy clustering (Bensaid et al., 1996; Maraziotis, 2012; Abin, 2016; Grira et al., 2008), semisupervised spectral clustering (Mavroeidis, 2010; Wang et al., 2014; Mavroeidis & Bingham, 2010), semisupervised density-based clustering (Böhm & Plant, 2008; Lelis & Sander, 2009; Ruiz et al., 2010; Vu et al., 2019), semisupervised hierarchical clustering (Davidson & Ravi, 2009), and semisupervised graph-based clustering (Kulis et al., 2009; Anand & Reddy, 2011; Vu, 2018), to mention a few.
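To make the constrained-K-means family concrete, the sketch below shows one common way such algorithms enforce constraints during assignment: each point is assigned to the nearest centroid whose cluster does not violate any Must-Link or Cannot-Link constraint (in the style of COP-KMeans; the function names, the simple list-based data representation, and the fixed iteration count are assumptions for illustration, not the exact method of any cited paper):

```python
import random


def violates(i, cluster, labels, must_link, cannot_link):
    """Would assigning point i to `cluster` break a constraint,
    given the partial assignment in `labels` (None = unassigned)?"""
    for a, b in must_link:
        # A Must-Link partner already sits in a different cluster.
        if a == i and labels[b] is not None and labels[b] != cluster:
            return True
        if b == i and labels[a] is not None and labels[a] != cluster:
            return True
    for a, b in cannot_link:
        # A Cannot-Link partner already sits in this cluster.
        if a == i and labels[b] == cluster:
            return True
        if b == i and labels[a] == cluster:
            return True
    return False


def constrained_kmeans(X, k, must_link, cannot_link, iters=20, seed=0):
    """K-means with hard pairwise constraints; X is a list of tuples.

    Returns a label list, or None if no feasible assignment is found.
    """
    rng = random.Random(seed)
    centers = rng.sample(X, k)
    labels = [None] * len(X)
    for _ in range(iters):
        labels = [None] * len(X)
        for i, x in enumerate(X):
            # Try clusters in order of increasing squared distance,
            # skipping any that would violate a constraint.
            order = sorted(
                range(k),
                key=lambda c: sum((u - v) ** 2 for u, v in zip(x, centers[c])),
            )
            for c in order:
                if not violates(i, c, labels, must_link, cannot_link):
                    labels[i] = c
                    break
            if labels[i] is None:
                return None  # constraints cannot be satisfied
        # Standard K-means centroid update.
        for c in range(k):
            members = [X[i] for i in range(len(X)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels


# Two natural groups; constraints tie points 0 and 1 together and
# keep points 0 and 2 apart.
X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
labels = constrained_kmeans(X, 2, must_link=[(0, 1)], cannot_link=[(0, 2)])
```

The key design choice is that constraints are enforced greedily at assignment time, which is simple but can fail (returning None) on constraint sets that a global search would satisfy.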