1. Introduction
The past few years have seen many advanced techniques evolve in Content-Based Image Retrieval (CBIR) systems. The rapid growth of large-scale image repositories in many domains has brought about the need for efficient CBIR mechanisms, and with it the need for fast and effective retrieval.
There are two general methods for image matching, retrieval, and recognition: intensity-based (color and texture) and geometry-based (shape). Intensity-based methods work with the intensity of the pixels and use the image itself as a feature descriptor. Geometry-based methods, however, are feature-based: they extract points from the image (usually edge or corner points) and reduce the problem to point-set matching (Alvarado et al., 2002; Arandjelovic & Zisserman, 2010; Bernier & Landry, 2003; Chang & Kimia, 2011; Cronin, 2003; Cyr & Kimia, 2004; Geiger et al., 2003).
Shape retrieval methods are also classified into local, emphasizing local shape features, or global, representing the shape as a whole. Global methods are usually easy to compute and robust against noise and shape distortions. Local methods are more complicated, requiring sophisticated implementations, and are slower, but are more suitable than global methods for recognizing occluded or partially visible objects. Another class of matching methods relies on symbolic entities extracted from shape contours (Keysers et al., 2007; Latecki et al., 2005; Ma & Latecki, 2011; Mokhtarian, 1995; Mokhtarian & Mackworth, 1992; Nelson & Selinger, 1998; Petrakis et al., 2002; Philbin et al., 2007; Ruberto, 2004; Sebastian et al., 2004; Trinh & Kimia, 2011; Wang et al., 2011; Yang et al., 2008; Zaeri et al., 2008). A review of shape representation methods can be found in Campbell and Flynn (2001) and Zhang and Lu (2004).
Several methods of image matching, indexing, and retrieval based on contour matching and recognition exist in the literature.
As our method is a feature-based approach using the outline shape, we describe below the best-known feature-based methods.
Berg et al. (2005) find a correspondence between a model image P of an object and a target image Q as follows: extract sparse oriented edge maps from each image; compute features based on geometric blur descriptors at locations of high edge energy; allow each of the m feature points from P to potentially match any of the k most similar points in Q, based on feature similarity and proximity; construct cost matrices and approximately solve the resulting binary quadratic optimization to obtain a correspondence; and, finally, extend the correspondence on the m points to a smooth map using a regularized thin-plate spline. A key characteristic of the approach of Belongie et al. (2002) is the estimation of shape similarity and correspondences based on the shape context. It is a three-stage process: solve the correspondence problem between the two shapes, use the correspondences to estimate an aligning transform that maps one shape onto the other, and then compute the distance between the two shapes as a sum of matching errors between corresponding points.
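To make the shape-context idea concrete, the following is a minimal sketch of the descriptor's first stage: each contour point is described by a log-polar histogram of the relative positions of all other points, and two descriptors are compared with a chi-square cost. Bin counts, radial edges, and normalization choices here are illustrative assumptions, not the exact settings of Belongie et al. (2002).

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Compute a simplified log-polar shape-context histogram per point.

    points: (n, 2) array of contour sample points.
    Returns an (n, n_r * n_theta) array of histograms.
    Radial bin edges are scaled by the mean pairwise distance for
    rough scale invariance (an illustrative choice).
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]        # vectors from each point to all others
    dist = np.hypot(diff[..., 0], diff[..., 1])
    theta = np.arctan2(diff[..., 1], diff[..., 0])  # angles in (-pi, pi]
    mean_d = dist[dist > 0].mean()
    # Log-spaced radial edges relative to the mean distance (assumed range).
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    histograms = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i                    # exclude the point itself
        r_bin = np.searchsorted(r_edges, dist[i, mask]) - 1
        t_bin = ((theta[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        valid = (r_bin >= 0) & (r_bin < n_r)        # drop points outside the radial range
        np.add.at(histograms[i], r_bin[valid] * n_theta + t_bin[valid], 1)
    return histograms

def chi2_cost(h1, h2, eps=1e-10):
    """Chi-square matching cost between two histograms (normalized first)."""
    p = h1 / (h1.sum() + eps)
    q = h2 / (h2.sum() + eps)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```

In the full method, the pairwise chi-square costs feed an assignment solver to obtain the correspondence, after which the aligning transform and the summed matching error are computed.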