Models for Internal Clustering Validation Indexes Based on Hadoop-MapReduce

Soumeya Zerabi, Souham Meshoul, Samia Chikhi Boucherkha
Copyright: © 2020 |Pages: 26
DOI: 10.4018/IJDST.2020070103

Abstract

Cluster validation aims both to evaluate the results of clustering algorithms and to predict the number of clusters. It is usually achieved using several indexes. Traditional internal clustering validation indexes (CVIs) are mainly based on computing pairwise distances, which results in a quadratic complexity of the related algorithms. Existing CVIs cannot handle large data sets properly and need to be revisited to take into account the ever-increasing volume of data sets. Therefore, the design of parallel and distributed solutions to implement these indexes is required. To cope with this issue, the authors propose two parallel and distributed models for internal CVIs, namely for the Silhouette and Dunn indexes, using the MapReduce framework under Hadoop. The proposed models, termed MR_Silhouette and MR_Dunn, have been tested to address both the issue of evaluating clustering results and that of identifying the optimal number of clusters. The results of the experimental study are very promising and show that the proposed parallel and distributed models achieve the expected tasks successfully.

Introduction

One of the most fundamental techniques of machine learning is clustering, which is used for discovering groups (called clusters) in a given set of unlabeled data points such that points within each cluster are similar to each other and points from different clusters are dissimilar. Similarity is defined by a distance metric such as the Euclidean, Manhattan, or Minkowski distance, depending on the type of point attributes.
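As an illustration of these distance metrics, the following minimal Python sketch (function name and sample points are illustrative, not from the article) computes the Minkowski distance of order r, which reduces to the Manhattan distance for r = 1 and the Euclidean distance for r = 2:

```python
def minkowski(p, q, r=2):
    """Minkowski distance of order r between two points.

    r = 1 gives the Manhattan distance, r = 2 the Euclidean distance.
    """
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1 / r)

p, q = (0.0, 0.0), (3.0, 4.0)
print(minkowski(p, q, r=2))  # Euclidean: 5.0
print(minkowski(p, q, r=1))  # Manhattan: 7.0
```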

Clustering algorithms are used in many fields including image analysis (Thamilselvan & Sathiaseelan, 2018), genomics (Reddy, Ganesh, & Rao, 2017), wireless sensor networks (Ramluckun & Bassoo, 2018), climatology (Falquina & Gallardo, 2017), and the Internet of Things (Yu, 2019), and the applications of clustering continue to grow. There exist several clustering techniques that differ in the type of input data they handle, the clustering criterion defining the similarity between data points (Halkidi, Batistakis, & Vazirgiannis, 2001), and the form of the clusters they deal with. Clustering techniques comprise a variety of algorithms (Badase, Deshbhratar, & Bhagat, 2015) and can be classified into the following classes:

  • Partition (or representative) clustering techniques include K-means and CLARANS algorithms

  • Hierarchical clustering techniques include BIRCH (Zhang, Ramakrishnan, & Livny, 1996) and CURE algorithms

  • Density based clustering techniques include DBSCAN (Ester et al. 1996) and OPTICS algorithms

  • Grid based clustering techniques include STING algorithm (Wang, Yang, & Muntz, 1997)

A comprehensive review of the state-of-the-art of clustering techniques can be found in (Nerurkar, Shirke, Chandane, & Bhirud, 2018).

Procedures that evaluate the quality and goodness of the results of clustering algorithms are known under the term “cluster validity indexes (CVIs)” and are broadly divided into three categories (Mohammed & Jr, 2014), namely external, internal, and relative validity indexes.

External validity indexes require that the ground truth is known a priori. The most widely used indexes in this category are Purity, F-measure, Conditional Entropy, the Jaccard coefficient, the Rand statistic, and others. In practice, ground truth is often not available. Internal validity indexes, by contrast, evaluate the correctness of a clustering without any additional information and are therefore used when no external information is available. Examples of these indexes are the Dunn index (Dunn, 1974), the Silhouette index (Rousseeuw, 1987), the Davies–Bouldin index (Davies & Bouldin, 1979), and others. On the other hand, relative validity indexes serve as a way to test and validate clustering results by comparing them to other clustering results produced by the same algorithm with different parameter values.
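To make the two internal indexes the article focuses on concrete, here is a minimal, sequential (non-MapReduce) Python sketch of the Dunn and Silhouette indexes as defined by Dunn (1974) and Rousseeuw (1987). The function names and toy clusters are illustrative only; note how the nested pairwise-distance loops exhibit the quadratic complexity mentioned in the abstract:

```python
import math
from itertools import combinations

def dist(p, q):
    # Euclidean distance between two points
    return math.dist(p, q)

def dunn(clusters):
    """Dunn index: min inter-cluster distance / max cluster diameter.

    Higher is better; relies on O(n^2) pairwise distances.
    """
    inter = min(dist(p, q)
                for c1, c2 in combinations(clusters, 2)
                for p in c1 for q in c2)
    diam = max(dist(p, q)
               for c in clusters
               for p, q in combinations(c, 2))
    return inter / diam

def silhouette(clusters):
    """Mean silhouette score over all points: s = (b - a) / max(a, b),
    where a is the mean intra-cluster distance of a point and b is its
    smallest mean distance to any other cluster. Values near 1 indicate
    compact, well-separated clusters."""
    scores = []
    for i, c in enumerate(clusters):
        for p in c:
            a = sum(dist(p, q) for q in c if q is not p) / (len(c) - 1)
            b = min(sum(dist(p, q) for q in other) / len(other)
                    for j, other in enumerate(clusters) if j != i)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two compact, well-separated toy clusters
clusters = [[(0, 0), (0, 1)], [(5, 5), (5, 6)]]
print(dunn(clusters))        # large: separation far exceeds diameters
print(silhouette(clusters))  # close to 1
```

The MapReduce models proposed in the article distribute exactly these pairwise-distance computations across a Hadoop cluster; this sketch only shows the underlying definitions.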
