Estimating the Number of Clusters in High-Dimensional Large Datasets

Xutong Zhu, Lingli Li
Copyright: © 2023 | Pages: 14
DOI: 10.4018/IJDWM.316142

Abstract

Clustering is a basic primitive of exploratory tasks. To obtain valuable results, a key parameter of the clustering algorithm, the number of clusters, must be set appropriately. Existing methods for determining the number of clusters perform well on small, low-dimensional datasets, but effectively determining the optimal number of clusters on large, high-dimensional datasets remains a challenging problem. In this paper, the authors design a method for estimating the optimal number of clusters on large-scale high-dimensional datasets that overcomes the shortcomings of existing estimation methods and estimates the optimal number of clusters both accurately and quickly. Extensive experiments show that it (1) outperforms existing estimation methods in accuracy and efficiency, (2) generalizes across different datasets, and (3) is suitable for high-dimensional large datasets.

Introduction

Clustering is a main task of exploratory data mining and a common technique for statistical data analysis. Beyond data mining, clustering is widely used in pattern recognition, image processing (Bhatia & Deogun, 1998), computer vision (Frigui & Krishnapuram, 1999; Shi & Malik, 2000), and other fields; it also appears in fraud detection, market segmentation, and many other applications. For example, in fraud detection, outliers relative to the clusters may indicate fraud. In e-commerce, clustering can help enterprises understand their customers and provide them with more appropriate services by grouping customers with similar browsing behaviors and analyzing their common characteristics (Punj & Stewart, 1983). Clustering methods are used to describe data, measure the similarity between data points, and partition the data points into clusters.

The k-means algorithm is widely used for clustering due to its excellent runtime performance in practical applications (Chiang & Mirkin, 2010; Dunn, 1974). k-Means (Celebi et al., 2013; Jain, 2010; Wang et al., 2020) aims to partition a dataset of n entities into k clusters, where each entity is assigned to the cluster with the nearest centroid. When running the k-means algorithm, the number of clusters must be specified in advance. However, the optimal number of clusters in a dataset is often unknown, and a suitable k-value is difficult to estimate beforehand. Setting an inappropriate number of clusters can lead to structural, grouping, or compression errors. Most existing methods for estimating the optimal number of clusters focus on small, low-dimensional datasets.
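As a concrete sketch of the iteration described above (a minimal pure-Python version of Lloyd's classic k-means, not the article's implementation; the function name and defaults are illustrative):

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then recompute each centroid, until assignments stabilise."""
    rng = random.Random(seed)
    # Initialise centroids as k distinct points sampled from the data.
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for each point.
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # Update step: each centroid moves to the mean of its cluster.
        new_centroids = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new_centroids.append(tuple(sum(c) / len(members)
                                           for c in zip(*members)))
            else:  # keep an emptied cluster's centroid in place
                new_centroids.append(centroids[j])
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, labels
```

The key practical point for this article is the required argument `k`: the sketch cannot run without it, which is exactly the parameter the estimation methods discussed here try to supply.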

The elbow method (Thorndike, 1953) is a commonly used and intuitive way to estimate the number of clusters in a dataset. It runs the k-means algorithm for each k in a given search space, plots the sum of squared errors (SSE) (Arbelaitz et al., 2013) against k, and determines the optimal number of clusters from the inflection point ("elbow") of the resulting curve. The SSE is the sum of the squared distances of all entities to their cluster centers; the elbow method uses it as the measure of clustering effectiveness because it can be derived directly from the k-means algorithm. As the number of clusters k increases, each cluster becomes more compact, so the SSE gradually decreases. Once k reaches the optimal number of clusters, the reduction in SSE slows sharply, and the curve flattens as k increases further. However, the elbow method also has obvious downsides. First, it must run the k-means algorithm for every value of k in the search space to obtain the SSE, which can be very computationally expensive for large or high-dimensional datasets, because a great deal of time is spent computing the SSE before any estimate can be made. Second, the optimal number of clusters is not well determined if the elbow graph has no obvious bend, or has multiple bends.
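The elbow scan above can be sketched end to end; this is a toy, self-contained illustration (a basic Lloyd's k-means plus an SSE scan over the search space), not the article's code, and the data, function name, and search range are invented for the example:

```python
import math
import random

def kmeans_sse(points, k, iters=50, seed=0):
    """Basic Lloyd's k-means; returns the sum of squared distances
    of every point to its assigned centroid (the SSE)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:  # leave an emptied cluster's centroid in place
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return sum(math.dist(p, centroids[l]) ** 2
               for p, l in zip(points, labels))

# Toy data: three tight, well-separated groups of four points each.
points = [(x + dx, y + dy)
          for (x, y) in [(0, 0), (10, 0), (0, 10)]
          for (dx, dy) in [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]]

# Scan the search space: SSE drops steeply until k reaches the true
# number of clusters (3 here), then the curve flattens -- the "elbow".
sse = {k: kmeans_sse(points, k) for k in range(1, 7)}
```

Note that the scan runs a full k-means per candidate k, which is exactly the cost the text identifies as prohibitive on large high-dimensional datasets.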
