Segmentation of Ill-Defined Objects by Convoluting Context Window of Each Pixel with a Non-Parametric Function

Upendra Kumar, Tapobrata Lahiri
Copyright: © 2013 |Pages: 9
DOI: 10.4018/ijcvip.2013010103
Abstract

Taking a cue from the context-dependence of human cognition, this work introduces an image segmentation method in which each pixel is assigned to an object class based on the properties of the neighboring pixels within a context window surrounding it. In brief, the array of pixel intensities in a context window is convolved with weights obtained by training a specific Artificial Neural Network architecture, and the result of the convolution is used to decide the object class of the central pixel. The training set of pixels is selected judiciously to cover an exhaustive variety of context types, including pixels inside, outside, and on the boundary of objects. This work also presents a novel approach for the quantitative assessment of segmentation efficiency. Finally, the use of a context window appears to improve segmentation because the approach is equivalent to methods that combine local texture and color.
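To make the methodological steps concrete, the following is a minimal sketch (not the authors' implementation) of classifying a single pixel by convolving its context window with a weight vector. In the article the weights come from a trained Artificial Neural Network; here a uniform averaging vector and a hard threshold are used purely as hypothetical stand-ins.

```python
import numpy as np

def context_window(img, r, c, k=5):
    """Extract the k x k context window centered on pixel (r, c),
    padding image edges by reflection so border pixels also get a
    full window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    return padded[r:r + k, c:c + k]

def classify_pixel(img, r, c, weights, threshold=0.5):
    """Convolve (dot) the flattened context window with a weight
    vector and threshold the response to obtain a class label:
    1 = object, 0 = background."""
    win = context_window(img, r, c).ravel()
    response = float(win @ weights)
    return 1 if response > threshold else 0

# Toy example: a bright 5 x 5 object on a dark background, with a
# uniform averaging vector standing in for ANN-trained weights.
img = np.zeros((9, 9))
img[2:7, 2:7] = 1.0
w = np.full(25, 1.0 / 25.0)
print(classify_pixel(img, 4, 4, w))  # object interior -> 1
print(classify_pixel(img, 0, 0, w))  # background corner -> 0
```

In the paper's setting the weight vector is learned, so the decision surface adapts to the textures of object and background rather than to mean intensity alone.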
Introduction

It is common knowledge that to recognize an object of interest, we should first segment the object from its background so that noisy or redundant information is excluded from the decision-making process (Kumar et al., 2007; Bukharia et al., 2011; Sowmya & Sheelarani, 2009). The partitioning of a gray-scale image into disjoint regions based on the brightness and texture of those regions is well discussed by Malik et al. (2001), Yuan and Tan (2000), and Sasirekha and Chandra (2012). When the object of interest is a face, the protocol is generally referred to as face detection. In this context, we found the existing techniques inefficient because of their lack of tolerance to minute changes in the local texture and color surrounding face pixels. Considering this, we designed and applied a simple context-window description of texture for each pixel to assign its class to either object (i.e., face) or background.

For better identification or classification of a face within an image, the important factors to consider are, first, the background and, second, optical properties such as luminance, contrast, and color. Even in a typical robust image-capturing system where these factors are kept almost constant to support better segmentation, slight changes still occur, indicating the need for a segmentation technique that is tolerant to them (Biederman, 1987). It is also logical to expect that, against a varying background, segmented faces will give better results in biometric evaluation. For example, features extracted from the whole image (i.e., including the background) through filtering and morphological approaches gave incorrect classification results because of incorrect representation of the extracted feature or information content (Banham & Katsaggelos, 1997; Bukharia et al., 2011). The severity of this problem increases in a noisy environment, because classification is then performed on noisy data. A neural network has proven to be a good classifier for noisy data, provided that the object features are extracted correctly (Leondes, 1998; Egmont-Petersen, 2002). Much image segmentation work has been done using either pixel-based or context-window-based classification (Ziemke, 1996; Galleguillos & Belongie, 2010). The pixel-based approach is very time-consuming, while the efficiency of the context-window-based approach depends on the accuracy of the chosen context window. The latter approach has been used in document image segmentation (Bukhari et al., 2010). The spatial-context concept provides important information for better segmentation of the object of interest, where the object is represented by a set of pixels.
It has been observed that the apparent brightness of an object is based not only on its luminance but also on the context in which it is embedded (Vergeer & Vanlier, 2011; Mizokawa, 2011; Lu & Hager, 2007), indicating that including contextual information when recognizing an object may improve recognition efficiency. Many sources of context information have been discussed and proposed by Divvala et al. (2009). A learning-based approach (a k-Nearest Neighbor classifier) has been applied to segment out the postal block that contains the recipient's address (Legal-Ayala et al., 2008). That work is interesting in that the context window of a pixel was considered in defining its class: statistical feature parameters were extracted from the context window of the pixel to obtain its class (Legal-Ayala et al., 2008). However, because statistical parameters are used as features, the approach may not work well where the noise level of the image is negligible due to a good image-capturing system. Moreover, in that work the feature-class correspondence was established through manual segmentation of images by experts, leaving a chance of large subjective error being added along the full length of the object boundary. Also, the learning of the object class was done by applying kNN.
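The kNN-on-window-statistics scheme attributed above to Legal-Ayala et al. (2008) can be sketched as follows. This is a hypothetical reconstruction, not their implementation: the mean and standard deviation of the context window are chosen here as plausible stand-in statistical features, and the tiny training set is hand-picked for illustration.

```python
import numpy as np

def window_stats(img, r, c, k=5):
    """Mean and standard deviation of the k x k context window around
    pixel (r, c); hypothetical stand-ins for the statistical features
    used by Legal-Ayala et al. (2008)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = padded[r:r + k, c:c + k]
    return np.array([win.mean(), win.std()])

def knn_class(feature, train_feats, train_labels, k=3):
    """Binary kNN: majority vote among the k nearest training
    features (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - feature, axis=1)
    votes = train_labels[np.argsort(dists)[:k]]
    return int(np.round(votes.mean()))

# Toy training set: features hand-picked to represent object-interior
# and background context windows (hypothetical values).
train_feats = np.array([[1.0, 0.0], [0.9, 0.2], [0.0, 0.0], [0.1, 0.1]])
train_labels = np.array([1, 1, 0, 0])

img = np.zeros((9, 9))
img[2:7, 2:7] = 1.0  # a bright 5 x 5 object on a dark background
print(knn_class(window_stats(img, 4, 4), train_feats, train_labels))  # interior -> 1
print(knn_class(window_stats(img, 0, 0), train_feats, train_labels))  # corner -> 0
```

The contrast with the present work is that kNN votes over a fixed, summary description of the window, whereas convolving the raw window intensities with trained weights preserves the spatial arrangement of the context.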
