The k Nearest Neighbor (kNN) method was introduced in the 1950s but, being labor intensive, did not gain momentum until the 1960s, when increased computing power became available; today the kNN algorithm for high-dimensional data has attracted much attention from researchers. Bhatia (2010) presents a survey of nearest neighbor techniques and enhancements used in the recent past, among them Rank Nearest Neighbor (Bagui et al., 1993), Modified k Nearest Neighbor (Guo et al., 2003), and Weighted k Nearest Neighbor (Bailey and Jain, 1998), to name a few. kNN has also been extended into the MapReduce environment (Kamdar and Rajani, 2015; Anchalia and Roy, 2014), but improving its efficiency remains a challenge.
Anchalia and Roy (2014) implemented the kNN algorithm in a MapReduce environment: their Map function computed the distances between data vectors, and their Reduce function found the k nearest neighbors and took the majority vote. Kamdar and Rajani (2015) followed a similar approach with their Map and Reduce functions.
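The division of labor described above can be sketched in plain Python, with the map phase emitting (distance, label) pairs and the reduce phase selecting the k nearest and voting. This is a minimal single-machine illustration of the idea, not the authors' Hadoop implementation; the function names and toy data are our own.

```python
from collections import Counter
import math

def knn_map(query, record):
    """Map phase: emit a (distance, label) pair for one training record.
    `record` is (features, label); the distance here is Euclidean."""
    features, label = record
    dist = math.sqrt(sum((q - f) ** 2 for q, f in zip(query, features)))
    return dist, label

def knn_reduce(mapped, k):
    """Reduce phase: keep the k smallest distances, then majority-vote."""
    nearest = sorted(mapped, key=lambda pair: pair[0])[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: 2-D points labelled 'a' or 'b' (illustrative only)
training = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'),
            ((5.0, 5.0), 'b'), ((5.1, 4.9), 'b')]
query = (0.2, 0.1)
mapped = [knn_map(query, rec) for rec in training]
print(knn_reduce(mapped, k=3))  # 'a': two of the three nearest are 'a'
```

In a real Hadoop job the mapped pairs would be shuffled to the reducer by the framework rather than collected in a local list, but the per-phase responsibilities are the same.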
Hashing has also been used for nearest neighbor similarity search by several researchers (Wang et al., 2014; Dasgupta et al., 2011; Broder, 1997). Broder (1997) used syntactic similarity, a sampling approach. Dasgupta et al. (2011) proposed two new Locality Sensitive Hashing (LSH) schemes, ACHash and DHHash. ACHash, inspired by the Johnson-Lindenstrauss transform, uses a Hadamard transform and sparse Gaussian vectors to reduce dimensionality; DHHash applies a Hadamard transform, multiplies by a unit Gaussian, and then applies another Hadamard transform. Wang et al. (2014) used hash table lookup and fast distance approximation.
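The common intuition behind these LSH schemes is that nearby vectors should collide in the same hash buckets with high probability. A minimal sketch of the classic random-hyperplane variant of LSH (not the ACHash or DHHash constructions specifically, which add Hadamard-transform steps) might look as follows; the function names and vectors are illustrative assumptions.

```python
import random

def make_hash(dim, n_bits, seed=0):
    """Random-hyperplane LSH: each of n_bits random Gaussian hyperplanes
    contributes one bit, recording which side the input vector falls on."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(vec):
        return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                     for plane in planes)
    return h

h = make_hash(dim=3, n_bits=8, seed=42)
a = (1.0, 0.9, 1.1)
b = (1.1, 1.0, 0.9)      # close to a
c = (-1.0, -1.0, -1.0)   # nearly opposite to a
# Nearby vectors tend to agree in more hash bits than distant ones.
same_ab = sum(x == y for x, y in zip(h(a), h(b)))
same_ac = sum(x == y for x, y in zip(h(a), h(c)))
print(same_ab, same_ac)
```

Grouping vectors by such bit signatures turns a nearest neighbor query into a hash table lookup over a small candidate set, which is exactly the efficiency gain these papers pursue.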