Neighboring-Aware Hierarchical Clustering: A New Algorithm and Extensive Evaluation

Ali A. Amer, Muna Al-Razgan, Hassan I. Abdalla, Mahfoudh Al-Asaly, Taha Alfakih, Muneer Al-Hammadi
Copyright: © 2024 | Pages: 24
DOI: 10.4018/IJSWIS.346377

Abstract

In this work, a simple yet robust neighboring-aware hierarchical clustering approach (NHC) is developed. NHC employs a dynamic technique that takes the neighborhood of each point into account during clustering, making it highly competitive. NHC offers a straightforward design and reliable clustering. It comprises two key techniques: neighboring-aware clustering, and filtering and merging. While the proposed neighboring-aware technique helps find the most coherent clusters, filtering and merging help reach the desired number of clusters during the clustering process. NHC's performance, covering all evaluation metrics as well as run time, has been thoroughly tested against nine clustering rivals using four similarity measures on several real-world numerical and textual datasets. The evaluation is done in two phases. First, we compare NHC to three common clustering methods and show its efficacy through empirical analysis. Second, a comparison with six relevant, contemporary competitors highlights NHC's highly competitive performance.

Introduction

Clustering is an unsupervised method that divides unlabeled data points into groups based on distance metrics (or similarity measures). It is frequently utilized in a variety of fields, including knowledge discovery through machine learning (ML), information retrieval (IR), and data mining. By grouping related data points (or semantically meaningful groups of points or documents) into coherent clusters and dispersing dissimilar points across separate clusters, clustering enables intuitive navigation and browsing. Over the past 40 years, a variety of clustering algorithms has been applied in many scientific applications, including mixed data (Kuwil et al., 2019), database system design (Abdalla et al., 2023; Fernández & Gómez, 2021), recommendation systems (Akilandeswari et al., 2022), data and text classification (Gilpin & Davidson, 2017; Hussain & Haris, 2019; Salem et al., 2018; Steinbach et al., 2000), high-dimensional data spaces (Chander et al., 2022), indexing (Zhu & Ma, 2018), and word embedding-based text clustering (Gong et al., 2018). As a general rule, clustering divides data points (or documents) into k clusters so that related points are placed in the same cluster and dissimilar points in different clusters. Indeed, the complexity inherent in document clustering has attracted researchers' attention, especially over the past 30 years, and the search for semantically meaningful groups of documents remains in full swing.
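Every such grouping decision ultimately rests on pairwise comparisons between points. As a minimal illustration (this is not code from the article; the function names and sample vectors are purely hypothetical), the Python sketch below contrasts a distance metric with a similarity measure:

```python
# Illustrative sketch only: contrasts a distance metric (Euclidean)
# with a similarity measure (cosine), the two kinds of pairwise
# comparison used to decide whether points belong together.
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Smaller distance -> more likely to share a cluster.
    return float(np.linalg.norm(a - b))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Larger similarity (close to 1) -> more likely to share a cluster.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.array([1.0, 0.0])
y = np.array([0.9, 0.1])
print(euclidean_distance(x, y))  # ~0.141: small distance
print(cosine_similarity(x, y))   # ~0.994: high similarity
```

Which comparison is appropriate depends on the data: cosine similarity is common for text (where vector direction matters more than magnitude), while Euclidean distance is typical for numerical data.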

Two clustering categories are widely popular, namely, partitional and hierarchical, both of which have achieved significant results across multiple application domains. Overall, partitional clustering is more dynamic and efficient than hierarchical clustering. Indeed, in partitional clustering, data points can be migrated smoothly from one cluster to another. Moreover, partitional methods can incorporate knowledge about cluster size and shape by leveraging distance measurements with suitable prototypes (Kuwil et al., 2019). However, partitional clustering has several demerits: (1) most algorithms resort to optimization techniques to cope with initialization and with choosing the number of clusters; (2) their iterative mechanism makes them susceptible to local minima and sensitive to cluster initialization, so they may fail to find the best or optimal solutions; (3) they are highly sensitive to both noise and outliers, and determining the number of clusters k has long been recognized as a challenging task; and (4) high dimensionality negatively affects the efficacy of partitional methods. Further, since performance depends heavily on the initial centroids and the number of clusters, most of these algorithms experience performance fluctuations (Gong et al., 2018; Kuwil et al., 2019).

The k-means algorithm is one of the most widely used clustering algorithms (Abdalla et al., 2023; Arthur & Vassilvitskii, 2007). In the first phase, the parameter k must be set to the desired number of clusters. K-means then selects k randomly chosen points as the initial centroids, one per cluster. The algorithm iteratively optimizes the centroids' placements, reassigning points and recomputing centroids until the centroids stabilize. The stabilized centroids then yield the final clustering solution, as sketched below.
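The following minimal Python sketch mirrors the k-means steps just described: random centroid initialization, assignment by Euclidean distance, and iterative centroid updates until stabilization. It is an illustrative rendering of the standard algorithm, not the NHC method or code from this article; the parameters max_iter and tol are hypothetical defaults.

```python
# Minimal k-means sketch (illustrative, not the paper's code).
import numpy as np

def kmeans(points: np.ndarray, k: int, max_iter: int = 100, tol: float = 1e-6):
    rng = np.random.default_rng(seed=0)
    # Phase 1: pick k random data points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points;
        # keep the old centroid if a cluster ends up empty.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Stop once the centroids have stabilized.
        if np.linalg.norm(new_centroids - centroids) < tol:
            centroids = new_centroids
            break
        centroids = new_centroids
    return labels, centroids

# Example usage on three synthetic 2-D blobs.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=c, scale=0.2, size=(50, 2))
                  for c in ((0, 0), (3, 3), (0, 3))])
labels, centroids = kmeans(data, k=3)
print(centroids)  # should land near (0, 0), (3, 3), and (0, 3)
```

In practice, library implementations add smarter initialization such as k-means++ (Arthur & Vassilvitskii, 2007) precisely to mitigate the sensitivity to initial centroids noted above.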
