Introduction
Clustering is an unsupervised method that divides unlabeled data points into groups based on distance metrics (or similarity measures). It is frequently used in a variety of fields, including knowledge discovery through machine learning (ML), information retrieval (IR), and data mining. By grouping related data points (or semantically meaningful sets of points or documents) into coherent clusters and dispersing dissimilar points across other clusters, clustering supports intuitive navigation and browsing. Over the past 40 years, a variety of clustering algorithms have been developed and applied in numerous scientific applications, including mixed data (Kuwil et al., 2019), database system design (Abdalla et al., 2023; Fernández & Gómez, 2021), recommendation systems (Akilandeswari et al., 2022), data and text classification (Gilpin & Davidson, 2017; Hussain & Haris, 2019; Salem et al., 2018; Steinbach et al., 2000), high-dimensional data spaces (Chander et al., 2022), indexing (Zhu & Ma, 2018), and word embedding-based text clustering (Gong et al., 2018). As a general rule, clustering divides data points (or documents) into k clusters so that related points are placed in the same cluster and dissimilar points in different clusters. In fact, the complexity inherent in document clustering, especially over the past 30 years, has drawn researchers' attention, and the search for semantically meaningful groups of documents remains an active area of study.
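The distance metrics (or similarity measures) mentioned above can be made concrete with a brief sketch. The two functions below (illustrative names, plain Python) implement Euclidean distance, common for numeric data, and cosine similarity, common for document vectors:

```python
from math import sqrt

def euclidean(p, q):
    """Distance metric: smaller values mean the points are more similar."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def cosine_similarity(p, q):
    """Similarity measure: larger values (up to 1.0) mean more similar.

    Often preferred for documents, since it ignores vector magnitude
    (i.e., document length) and compares only direction.
    """
    dot = sum(a * b for a, b in zip(p, q))
    norm = sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0
```

A clustering algorithm groups points whose pairwise distance is small (or whose similarity is large); which measure is appropriate depends on the data representation.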
Two clustering categories are widely popular, namely partitional and hierarchical, and both have achieved significant results across multiple domains. Overall, partitional clustering is more dynamic and efficient than hierarchical clustering: data points can migrate smoothly from one cluster to another, and knowledge about cluster size and shape can be incorporated by combining distance measures with suitable prototypes (Kuwil et al., 2019). However, partitional clustering has several demerits: (1) most algorithms rely on optimization techniques to handle initialization and the choice of the number of clusters; (2) their iterative mechanism makes them susceptible to local minima and sensitive to cluster initialization, so they may fail to find the best or optimal solutions; (3) they are highly sensitive to noise and outliers, and determining the number of clusters k has long been recognized as a challenging task; (4) high dimensionality negatively affects the efficacy of the partitional category. Further, since performance depends heavily on the initial centroids and the number of clusters, most of these algorithms exhibit performance fluctuations (Gong et al., 2018; Kuwil et al., 2019).

The k-means algorithm is one of the most widely used clustering algorithms (Abdalla et al., 2023; Arthur & Vassilvitskii, 2007). In the first phase, the parameter k must be set as the number of clusters. K-means then selects k randomly chosen points as centroids, which serve as the initial representatives of the clusters. The centroid positions are optimized iteratively until they stabilize, and the stabilized centroids produce the final clustering solution.
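The k-means procedure just described (choose k random initial centroids, assign each point to its nearest centroid, recompute the centroids, and repeat until they stabilize) can be sketched in plain Python. This is a minimal illustration of the algorithm, not an optimized implementation; the function and parameter names are ours:

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal k-means sketch: random initialization, iterate until stable."""
    rng = random.Random(seed)
    # Phase 1: pick k randomly chosen points as the initial centroids.
    centroids = rng.sample(points, k)
    for _ in range(max_iter):
        # Assignment step: each point joins the cluster of its nearest centroid
        # (squared Euclidean distance; the minimizer is the same as for Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its assigned points.
        new_centroids = []
        for i, cluster in enumerate(clusters):
            if cluster:
                new_centroids.append(tuple(sum(dim) / len(cluster)
                                           for dim in zip(*cluster)))
            else:
                new_centroids.append(centroids[i])  # keep an empty cluster's centroid
        if new_centroids == centroids:  # centroids stabilized: final solution
            break
        centroids = new_centroids
    return centroids, clusters

# Usage: two well-separated groups of 2-D points.
centroids, clusters = kmeans(
    [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)], k=2)
```

The sketch also makes the listed demerits visible: the result depends on the random initial centroids (the `seed` here), k must be supplied in advance, and a single distant outlier would pull a centroid toward it.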