Introduction
Image feature extraction plays a critical role in image processing, with target identification and tracking as its main applications. Proper local features must be selected in order to identify the target accurately (Lowe, 1999). The Scale Invariant Feature Transform (SIFT) (Lowe, 2004) extracts local features that remain invariant under scale zooming, rotation, and lighting changes. However, SIFT's high complexity and long running time limit its application in real-time environments, so many researchers have worked on improving the algorithm.
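To illustrate the idea behind SIFT's detection stage (a minimal sketch, not the cited authors' implementation), the snippet below builds a small difference-of-Gaussians (DoG) stack with NumPy; the kernel radius and the sigma values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma) + 1)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def dog_stack(img, sigmas):
    """Differences of successively blurred images, as in SIFT's detector."""
    blurred = [blur(img, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]

# Toy example: a bright square blob on a dark background.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
dogs = dog_stack(img, sigmas=[1.0, 1.6, 2.56])  # geometric sigma progression
```

Extrema of such a DoG stack across space and scale are the candidate keypoints; the full algorithm then filters and describes them.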
Regarding SIFT self-improvement: Sinha et al. (Sinha, Frahm, Pollefeys, & Genc, 2007) proposed an OpenGL-based feature point extraction algorithm that achieved a tenfold speedup. Zhang et al. (Zhang, Ma, Zhang, & Xu, 2014) employed the fuzzy K-means algorithm to improve SIFT and an improved RANSAC algorithm to eliminate false matching points after matching with PCA-SIFT and FKPCA-SIFT; their experimental results show that FKPCA-SIFT maintains high matching accuracy. Herbert Bay proposed SURF (Speeded-Up Robust Features) (Bay, Tuytelaars, & Van Gool, 2016), replacing the complex Gaussian filtering used in SIFT.
Regarding multi-core CPU speedup: one common approach to accelerating image feature extraction is the multi-core CPU. Q. Zhang designed two parallel SIFT algorithms and obtained a 6.4x speedup on an 8-core CPU (Zhang, Chen, Zhang, & Xu, 2008). Feng et al. (2008) implemented a parallel SIFT algorithm on a 16-core machine and obtained an 11x speedup. Although both efforts gained a certain speedup, a gap to the desired performance remained. Zhang (2009) implemented a layered parallel SURF algorithm (P-SURF) on a multi-core CPU. This algorithm had a smaller parallel granularity, but it incurred a larger synchronization overhead because data had to be synchronized at the end of each phase. To reduce the synchronization overhead and the load imbalance, Shigeto & Sakai (2011) improved the parallel algorithm by removing the synchronization step and parallelizing the computation of the integral image. They gained a 6x speedup on a 16-core machine, which was still not satisfactory.
Regarding GPU hardware speedup: a GPU can assist the CPU with computations of a high degree of parallelism, and part of the related work has implemented SIFT and SURF on GPUs. Lindeberg (2012) presented a fast CUDA-based implementation of the scale-invariant feature transform, but the results left a critical question unanswered: whether the data transfer time was included in the reported SIFT execution time. Heymann et al. (2007) completed another GPU version of SIFT and achieved a processing speed of 20 frames per second on a Quadro FX 3400. These studies showed that GPUs can accelerate image retrieval algorithms, but considering the strong computational power of GPUs, a large gap to the ideal result remains.
The research above improved SIFT feature extraction through algorithmic self-optimization or through CPU/GPU-based architectures. However, a large gap between the test results and the idealized values remained. With the development of the internet, the numbers of images, videos, and other multimedia items are increasing rapidly. It has been reported (Tseng, Lin, & Hsu, 2010) that 60 hours of video were uploaded every minute to video websites such as YouTube, and that Flickr and Facebook hosted 6 billion and 200 billion images respectively. Given this exponential growth in data size, the data should be processed in a distributed environment in order to be managed effectively (Cho, Cha, Gawecki, & Kuo, 2013).