Improving the Performance of kNN in the MapReduce Framework Using Locality Sensitive Hashing

Sikha Bagui, Arup Kumar Mondal, Subhash Bagui
Copyright: © 2019 | Pages: 16
DOI: 10.4018/IJDST.2019100101

Abstract

In this work, the authors present a parallel k nearest neighbor (kNN) algorithm that uses locality sensitive hashing to preprocess the data before it is classified using kNN in Hadoop's MapReduce framework, and compare it with the sequential (conventional) implementation. Using locality sensitive hashing's similarity measure with kNN, the iterative procedure to classify a data object is performed within a hash bucket rather than over the whole dataset, greatly reducing the computation time needed for classification. Several experiments showed that the parallel implementation outperformed the sequential implementation on very large datasets. The study also experimented with several map-side and reduce-side optimization features for the parallel implementation and presents optimum map-side and reduce-side parameters. On the map side, the block size and input split size were varied; on the reduce side, the number of planes was varied; and the effects of each were studied.
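The bucket-restricted search can be made concrete with a short sketch. The following Java program (a minimal illustration under assumed dimensions and plane counts, not the authors' implementation) hashes vectors with random hyperplanes, indexes a toy training set into the resulting buckets, and restricts the nearest neighbor search for a query to its own bucket:

```java
import java.util.*;

public class LshKnnSketch {
    static final int DIM = 8, PLANES = 4;          // illustrative sizes
    static final double[][] planes = new double[PLANES][DIM];

    // Sign of the dot product with each random hyperplane gives one key bit.
    static int bucket(double[] v) {
        int key = 0;
        for (int p = 0; p < PLANES; p++) {
            double dot = 0;
            for (int d = 0; d < DIM; d++) dot += planes[p][d] * v[d];
            if (dot >= 0) key |= (1 << p);
        }
        return key;
    }

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (double[] plane : planes)
            for (int d = 0; d < DIM; d++) plane[d] = rnd.nextGaussian();

        // Index a toy training set into hash buckets.
        Map<Integer, List<double[]>> buckets = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            double[] v = new double[DIM];
            for (int d = 0; d < DIM; d++) v[d] = rnd.nextGaussian();
            buckets.computeIfAbsent(bucket(v), b -> new ArrayList<>()).add(v);
        }

        // Classification searches only the query's bucket, not all 1000 points.
        double[] query = new double[DIM];
        for (int d = 0; d < DIM; d++) query[d] = rnd.nextGaussian();
        List<double[]> candidates =
            buckets.getOrDefault(bucket(query), new ArrayList<>());
        candidates.sort(Comparator.comparingDouble(v -> dist(v, query)));
        int k = Math.min(5, candidates.size());
        System.out.println("searched " + candidates.size() + " of 1000 points; k = " + k);
    }
}
```

With this scheme, p hyperplanes partition the data into up to 2^p buckets, which is why varying the number of planes (a reduce-side parameter in the study) trades bucket size, and hence search time, against the chance of missing true neighbors that fall in adjacent buckets.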
Article Preview

The k Nearest Neighbor (kNN) method was introduced in the 1950s but, being labor intensive, did not gain momentum until the 1960s, when increased computing power became available; today the kNN algorithm for high dimensional data has gained much attention from researchers. Bhatia (2010) presents a survey of nearest neighbor techniques and enhancements that have been used in the recent past. Among these are Rank Nearest Neighbor (Bagui et al., 1993), Modified k Nearest Neighbor (Guo et al., 2003), and Weighted k Nearest Neighbor (Bailey and Jain, 1998), to name a few. kNN has also been extended to the MapReduce environment (Kamdar and Rajani, 2015; Anchalia and Roy, 2014), but improving the efficiency of kNN remains a challenge.

Anchalia and Roy (2014) implemented the kNN algorithm in the MapReduce environment. In their implementation, the Map function calculated the distances of the data vectors and the Reduce function performed the tasks of finding the k nearest neighbors and taking the majority vote; a hedged sketch of this division of work appears below. Kamdar and Rajani (2015) followed a similar approach with their Map and Reduce functions.
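The sketch below is written against Hadoop's Mapper/Reducer API; the input record format, the configuration keys knn.query and knn.k, and the single-query setup are illustrative assumptions, not the published code:

```java
import java.io.IOException;
import java.util.*;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class KnnMapReduceSketch {

    // Mapper: one distance computation per training record.
    public static class DistanceMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        private double[] query;

        @Override
        protected void setup(Context ctx) {
            // Assumption: the query point arrives via the job configuration.
            String[] parts = ctx.getConfiguration().get("knn.query").split(",");
            query = new double[parts.length];
            for (int i = 0; i < parts.length; i++)
                query[i] = Double.parseDouble(parts[i]);
        }

        // Assumed input line format: f1,f2,...,fn,label
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            String[] f = value.toString().split(",");
            double sum = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = Double.parseDouble(f[i]) - query[i];
                sum += diff * diff;
            }
            // Emit (constant key, "distance,label") so one reducer sees all records.
            ctx.write(new Text("q"),
                      new Text(Math.sqrt(sum) + "," + f[f.length - 1]));
        }
    }

    // Reducer: keep the k smallest distances, then take a majority vote.
    public static class VoteReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            int k = ctx.getConfiguration().getInt("knn.k", 5);
            // Max-heap on distance: the farthest of the current k sits on top.
            PriorityQueue<String> nearest = new PriorityQueue<>(
                Comparator.comparingDouble(
                    (String s) -> Double.parseDouble(s.split(",")[0])).reversed());
            for (Text v : values) {
                nearest.add(v.toString());
                if (nearest.size() > k) nearest.poll();   // drop the farthest
            }
            // Majority vote over the labels of the k nearest neighbors.
            Map<String, Integer> votes = new HashMap<>();
            for (String s : nearest)
                votes.merge(s.split(",")[1], 1, Integer::sum);
            String winner = Collections.max(votes.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            ctx.write(key, new Text(winner));
        }
    }
}
```

Note that every map task still touches every training record; this is exactly the cost that bucketing the data with locality sensitive hashing, as in the present article, is meant to avoid.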

Hashing has also been used for nearest neighbor similarity searches by several researchers (Wang et al., 2014; Dasgupta et al., 2011; Broder, 1997). Broder (1997) used syntactic similarity, a sampling approach. Dasgupta et al. (2011) proposed two new locality sensitive hashing approaches, ACHash and DHHash. ACHash uses a Hadamard transform and sparse Gaussian vectors to reduce dimensionality, inspired by the Johnson-Lindenstrauss transform; DHHash applies a Hadamard transform, a multiplication by unit Gaussians, and then another Hadamard transform. Wang et al. (2014) used hash table lookup and fast distance approximation.
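As a rough illustration of the second idea (an assumption about the general technique, not code from Wang et al., 2014): once vectors are compressed to binary hash codes, distances can be approximated in a few machine instructions with an XOR and a population count:

```java
public class HammingApprox {
    // Hamming distance between two 64-bit hash codes: XOR marks the
    // differing bits and bitCount (popcount) tallies them.
    static int hamming(long a, long b) {
        return Long.bitCount(a ^ b);
    }

    public static void main(String[] args) {
        long codeA = 0b1011_0110L;
        long codeB = 0b1001_0011L;
        // Codes that differ in few bits were likely close in the original space.
        System.out.println("Hamming distance: " + hamming(codeA, codeB));
    }
}
```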
