Effective and Efficient Browsing of Large Image Databases

Gerald Schaefer
DOI: 10.4018/978-1-59904-879-6.ch014

Abstract

As image databases grow, efficient and effective methods for managing such large collections are highly sought after. Content-based approaches have shown great potential in this area as they do not require textual annotation of images. However, while query-by-example is currently the most commonly adopted retrieval method for image databases, it is only of limited practical use. Techniques that allow human-centred navigation and visualization of complete image collections therefore provide an interesting alternative. In this chapter we present an effective and efficient approach for user-centred navigation of large image databases. Image thumbnails are projected onto a spherical surface so that visually similar images are located close to each other in the visualization space. To avoid overlapping and occlusion effects, images are placed on a regular grid structure, while large databases are handled through a clustering technique paired with a hierarchical tree structure, allowing for an intuitive real-time browsing experience.

Introduction

With the sizes of image databases ever growing, these collections need to be managed not only for professional but increasingly also for private use. Clearly, one of the decisive parts of this is the ability to effectively retrieve those images a user is looking for. As the traditional approach of annotating images with a textual description and/or keywords is only feasible for smaller image collections, an automatic, content-based image retrieval (CBIR) approach is required to query larger datasets (Smeulders, Worring, Santini, Gupta, & Jain, 2000). Unfortunately, the current state of the art in computer vision is still far from being able to correctly interpret captured scenes and “see” the objects they contain. Nevertheless, much research has focussed on content-based techniques for retrieving images, much of which employs low-level visual features such as color or texture to judge the similarity between images (Smeulders et al., 2000).

The most common form of image retrieval system is based on the query-by-example concept (Jain, 1996), where the user provides an image and the system retrieves those images deemed most similar to the query. As this approach requires an actual image as input, it is of limited practical use in many applications. An alternative is the query-by-navigation paradigm, where the user can interactively visualize and browse an entire image collection via a graphical user interface and, through a series of operations, zoom in on those images that are of interest. Various authors have recently introduced such intuitive navigation interfaces (Ruszala & Schaefer, 2004). The basic idea behind most of these is to place thumbnails of visually similar images, as established by image similarity metrics calculated on features derived from image content, close to each other on the visualization screen, a principle that has been shown to decrease the time it takes to localize images (Rodden, Basalaj, Sinclair, & Wood, 1999). One of the first approaches was the application of multidimensional scaling (MDS) (Kruskal & Wish, 1978) to project images, represented by high-dimensional feature vectors, onto a 2-dimensional visualization plane (Rubner, Guibas, & Tomasi, 1997). In the PicSOM system (Laaksonen, Koskela, Laakso, & Oja, 2000), tree-structured self-organizing maps are employed to provide both image browsing and retrieval capabilities. Krishnamachari and Abdel-Mottaleb (1999) employ a hierarchical tree to cluster images of similar concepts, while image database navigation on a hue sphere is proposed by Schaefer and Ruszala (2005). The application of virtual reality ideas and equipment to provide the user with an interactive browsing experience was introduced by Nakazato and Huang (2001).
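The MDS projection mentioned above can be sketched in a few lines. The following is an illustrative sketch only: it uses classical (Torgerson) MDS on randomly generated stand-in feature vectors, whereas the systems cited in this chapter may use other MDS variants and real color or texture descriptors extracted from the images.

```python
import numpy as np

def classical_mds(features, dims=2):
    """Classical (Torgerson) MDS: embed points in `dims` dimensions
    so that pairwise Euclidean distances are approximately preserved."""
    n = len(features)
    # Squared pairwise distances between all feature vectors
    d2 = np.square(features[:, None, :] - features[None, :, :]).sum(-1)
    # Double-center the squared-distance matrix
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j
    # Top eigenvectors of the centered matrix give the coordinates
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

rng = np.random.default_rng(0)
feats = rng.random((10, 64))       # stand-in for 10 images, 64-D features
coords = classical_mds(feats)      # 10 x 2 screen coordinates
print(coords.shape)                # (10, 2)
```

Each row of `coords` would then determine where the corresponding thumbnail is drawn, so that images with similar feature vectors end up close together on screen.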

While techniques such as MDS provide an intuitive and powerful tool for browsing image collections, they are only of limited use for medium-sized and large collections. For such databases they provide a relatively poor representation, as many images are occluded, either fully or partially, by other images with similar feature vectors. In addition, empty spaces are common in areas onto which no images are projected, creating an unbalanced representation on screen. Furthermore, some techniques (e.g., MDS) are computationally expensive and hence not suitable for real-time browsing environments.

In this chapter we present an image database navigation method that addresses these issues. Based on a spherical visualization space (Schaefer & Ruszala, 2005), a navigation interface for image collections is created. In contrast to previous approaches, this is done in a hierarchical manner that can also cope with large image datasets and has the advantage that all levels of the hierarchy can be precomputed, thus allowing real-time browsing of the image database. In addition, images are laid out on a regular grid structure, which avoids any unwanted overlapping effects between images. Furthermore, the visualization space is better utilized by branching out images into otherwise unoccupied parts of the screen. The proposed method hence provides an effective, intuitive, and efficient interface for image database navigation, as demonstrated on a medium-sized image collection.
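The grid-placement idea can be illustrated with a minimal sketch: continuous 2-D browsing coordinates are snapped to cells of a regular grid so that thumbnails cannot overlap. The collision-handling strategy below (moving a colliding image to the nearest free cell) is an assumption made for illustration; the placement algorithm used in the chapter may differ.

```python
def all_cells(n):
    """All (row, col) cells of an n x n grid."""
    return ((i, j) for i in range(n) for j in range(n))

def snap_to_grid(positions, grid_size):
    """Map (x, y) coordinates in [0, 1)^2 to unique grid cells.

    If a cell is already taken, the later image is moved to the
    nearest unoccupied cell (squared Euclidean distance)."""
    occupied = {}
    for idx, (x, y) in enumerate(positions):
        cell = (min(int(x * grid_size), grid_size - 1),
                min(int(y * grid_size), grid_size - 1))
        if cell not in occupied:
            occupied[cell] = idx
            continue
        best = min((c for c in all_cells(grid_size) if c not in occupied),
                   key=lambda c: (c[0] - cell[0]) ** 2 + (c[1] - cell[1]) ** 2)
        occupied[best] = idx
    return {idx: cell for cell, idx in occupied.items()}

# Two nearby images collide in cell (0, 0); the second is displaced.
placement = snap_to_grid([(0.1, 0.1), (0.12, 0.11), (0.9, 0.5)], 4)
print(placement)  # {0: (0, 0), 1: (0, 1), 2: (3, 2)}
```

Because every image occupies its own cell, occlusion is avoided entirely, at the cost of slightly perturbing the original similarity-based positions.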

Key Terms in this Chapter

Image Database Navigation: The browsing of a complete image collection based, for example, on CBIR concepts.

Query-by-Example Retrieval: Retrieval paradigm in which a query is provided by the user and the system retrieves instances similar to the query.

Hue: The attribute of a visual sensation according to which an area appears to be similar to red, green, yellow, blue, or a combination of two of them.

Image Similarity Metric: Quantitative measure whereby the features of two images are compared in order to provide a judgement related to the visual similarity of the images.
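As a minimal illustration of such a metric, the sketch below compares two color histograms using histogram intersection, one common choice in CBIR. The bin values here are made-up placeholders rather than features extracted from real images.

```python
def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two histograms;
    1.0 means the normalized histograms are identical."""
    s1, s2 = sum(h1), sum(h2)
    return sum(min(a / s1, b / s2) for a, b in zip(h1, h2))

a = [10, 20, 30, 40]   # toy 4-bin color histogram of image A
b = [40, 30, 20, 10]   # toy 4-bin color histogram of image B
print(round(histogram_intersection(a, a), 3))  # 1.0
print(round(histogram_intersection(a, b), 3))  # 0.6
```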

Content-Based Image Retrieval (CBIR): Retrieval of images based not on keywords or annotations but on features extracted directly from the image data.
