Introduction
One of the most fundamental techniques of machine learning is clustering, which discovers groups (called clusters) in a given set of unlabeled data points such that points within each cluster are similar to each other and points from different clusters are dissimilar. Similarity is defined by a distance norm, such as the Euclidean, Manhattan, or Minkowski distance, depending on the type of the point attributes.
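As a small illustration (not from the article), the Minkowski distance of order p generalizes the other two norms mentioned above: p = 1 yields the Manhattan distance and p = 2 the Euclidean distance.

```python
def minkowski(a, b, p):
    """Minkowski distance of order p between two equal-length points."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

p1, p2 = (0.0, 0.0), (3.0, 4.0)
print(minkowski(p1, p2, 2))  # Euclidean distance: 5.0
print(minkowski(p1, p2, 1))  # Manhattan distance: 7.0
```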
Clustering algorithms are used in many fields, including image analysis (Thamilselvan & Sathiaseelan, 2018), genomics (Reddy, Ganesh, & Rao, 2017), wireless sensor networks (Ramluckun & Bassoo, 2018), climatology (Falquina & Gallardo, 2017), and the Internet of Things (Yu, 2019), and the applications of clustering continue to grow. Several clustering techniques exist; they differ in the type of input data they handle, the clustering criterion defining the similarity between data points (Halkidi, Batistakis, & Vazirgiannis, 2001), and the form of the clusters they deal with. Clustering techniques comprise a variety of algorithms (Badase, Deshbhratar, & Bhagat, 2015) and can be classified into the following classes:
- Partition (or representative) clustering techniques, including the K-means and CLARANS algorithms
- Hierarchical clustering techniques, including the BIRCH (Zhang, Ramakrishnan, & Livny, 1996) and CURE algorithms
- Density-based clustering techniques, including the DBSCAN (Ester et al., 1996) and OPTICS algorithms
- Grid-based clustering techniques, including the STING algorithm (Wang, Yang, & Muntz, 1997)
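As a concrete sketch of the partition family, the classic K-means (Lloyd's) iteration alternates between assigning each point to its nearest center and recomputing each center as the mean of its assigned points. The implementation below is a minimal illustration, not code from any of the cited works.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's-algorithm sketch of K-means (illustrative only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as initial centers
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its assigned points.
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated blobs: K-means recovers them as two clusters of three points each.
blobs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, clusters = kmeans(blobs, k=2)
```

Note that this sketch inherits K-means' sensitivity to initialization; production implementations typically use smarter seeding and multiple restarts.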
A comprehensive review of the state-of-the-art of clustering techniques can be found in (Nerurkar, Shirke, Chandane, & Bhirud, 2018).
Procedures that evaluate the quality of clustering results are known as cluster validity indexes (CVIs) and are broadly divided into three categories (Mohammed & Jr, 2014): external, internal, and relative validity indexes.
External validity indexes require that the ground truth be known a priori. The most widely used indexes in this category are Purity, F-measure, Conditional Entropy, the Jaccard coefficient, and the Rand statistic, among others. In practice, ground truth is often not available; internal validity indexes are therefore used to evaluate the correctness of a clustering without any external information. Examples of these indexes are the Dunn index (Dunn, 1974), the Silhouette index (Rousseeuw, 1987), and the Davies–Bouldin index (Davies & Bouldin, 1979). On the other hand, relative validity indexes test and validate clustering results by comparing them to other results produced by the same algorithm with different parameter values.
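As a small illustration (assumed, not taken from the article), the Silhouette index can be computed directly from its definition s(i) = (b(i) − a(i)) / max(a(i), b(i)), where a(i) is the mean distance from point i to the other points of its own cluster and b(i) is the smallest mean distance from i to the points of any other cluster; the overall index is the mean of s(i) over all points.

```python
import math

def silhouette(points, labels):
    """Mean Silhouette coefficient s(i) = (b_i - a_i) / max(a_i, b_i).
    Illustrative sketch; assumes each cluster has at least two points."""
    scores = []
    for i, p in enumerate(points):
        # a_i: mean distance to the other points of i's own cluster.
        own = [math.dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own)
        # b_i: smallest mean distance to the points of any other cluster.
        b = min(sum(math.dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
                / labels.count(lab)
                for lab in set(labels) if lab != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two tight, well-separated clusters yield a score close to 1.
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
score = silhouette(pts, [0, 0, 1, 1])
```

Values near 1 indicate compact, well-separated clusters; values near 0 indicate overlapping clusters, and negative values suggest misassigned points.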