A Study on Different Edge Detection Techniques in Digital Image Processing

Shouvik Chakraborty, Mousomi Roy, Sirshendu Hore
Copyright: © 2017 | Pages: 23
DOI: 10.4018/978-1-5225-1025-3.ch005

Abstract

Image segmentation is one of the fundamental problems in image processing. In digital image processing there are many image segmentation techniques, and among the most important for natural image segmentation are edge detection techniques. An edge is one of the basic features of an image, and edge detection can be used as a fundamental tool for image segmentation. Edge detection methods transform an original image into an edge image by exploiting the changes of grey tones in the image. The edges of an image contain a great deal of information that is significant for obtaining the image characteristics needed for object recognition and image analysis. In a grayscale image, an edge is a local feature that, within a neighborhood, separates two regions in each of which the gray level is more or less uniform, with different values on the two sides of the edge. The main objective of this chapter is to study the theory of edge detection for image segmentation using various computing approaches.

Introduction

A digital image is a numeric representation (normally binary) of a two-dimensional image. There are two types of digital images: depending on whether the image resolution is fixed, an image may be of vector or raster type. Without qualification, the term “digital image” usually refers to raster images, also called bitmap images. Digital images can be classified according to the number and nature of the values stored for each pixel. A binary image is a digital image that has only two possible values (i.e., 0 or 1) for each pixel. Typically, the two colors used for a binary image are black and white, though any two colors can be used. A greyscale image is one in which the value of each pixel is a single value that carries only intensity information. Greyscale, or so-called monochromatic, images are composed exclusively of shades of gray, varying from black (lowest intensity) to white (highest intensity). A color image contains color information for each pixel; for visually acceptable results, it is necessary to provide three values (color channels, typically Red, Green, and Blue in RGB format) for each pixel. The RGB color space is commonly used in computer displays, but other spaces such as HSV are often used in other contexts. A true-color image of a subject is an image that appears to the human eye just like the original, while a false-color image depicts a subject in colors that differ from reality. A range image represents depth in the value of each pixel. It can be produced by range-finder devices, such as a laser scanner, and forms a 3D volume by inserting a third dimension (i.e., depth) into the 2D array of pixels. The processing of digital images is called digital image processing. The most common digital image processing tasks include the following (a brief code sketch follows the list):

  • Resizing,

  • Zooming,

  • Image segmentation,

  • Edge detection and

  • Color enhancement.
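
To make the image types described above concrete, the following is a minimal sketch of how binary, grayscale, and RGB color images can be represented and converted. It assumes NumPy is available; the array shapes, value ranges, and the ITU-R BT.601 luminance weights are standard conventions, not taken from this chapter.

    import numpy as np

    # Binary image: each pixel takes only two values, 0 or 1 (a 4x4 example).
    binary = np.array([[0, 1, 1, 0],
                       [1, 1, 1, 1],
                       [1, 1, 1, 1],
                       [0, 1, 1, 0]], dtype=np.uint8)

    # Grayscale image: one intensity per pixel, from 0 (black) to 255 (white).
    gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

    # RGB color image: three channels (Red, Green, Blue) for each pixel.
    rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

    # Convert RGB to grayscale with the common ITU-R BT.601 luminance weights.
    gray_from_rgb = (0.299 * rgb[..., 0] +
                     0.587 * rgb[..., 1] +
                     0.114 * rgb[..., 2]).astype(np.uint8)

    print(binary.shape, gray.shape, rgb.shape, gray_from_rgb.shape)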

Image segmentation is one of the fundamental problems in digital image processing and an essential step in image analysis. In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). Segmentation separates an image into its component parts or objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is used to locate objects and boundaries (lines, curves, edges, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. Analytically, a digital image (composed of elements called pixels, or picture elements) is defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates. The value of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. Various image segmentation techniques are practiced in digital image processing.
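
As an illustration of the “label per pixel” view of segmentation described above, the following is a minimal sketch of simple intensity-threshold segmentation, one of many possible techniques; the function name and threshold values are illustrative assumptions, not the chapter's method.

    import numpy as np

    def threshold_segment(f, thresholds=(85, 170)):
        """Assign a label to every pixel of the grayscale image f(x, y)
        according to the intensity band it falls in."""
        labels = np.zeros(f.shape, dtype=np.uint8)   # label 0: dark pixels
        labels[f >= thresholds[0]] = 1               # label 1: mid-gray pixels
        labels[f >= thresholds[1]] = 2               # label 2: bright pixels
        return labels

    f = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
    print(threshold_segment(f))  # pixels with the same label share similar gray levels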

One of the most natural techniques used for image segmentation is edge detection. An edge is one of the basic features of an image, and edge detection can be used as a primary tool for image segmentation. The edges of an image contain a good deal of information that is very important for obtaining the image characteristics needed to analyze the image. In a grayscale image, an edge is a local feature that, within a neighborhood, separates two regions in each of which the gray level is more or less uniform, with different values on the two sides of the edge. The main objective of this chapter is to study methods of edge detection for image segmentation using different computing approaches. In edge detection techniques, 2-D filters are used to detect edges depending upon the level of the intensity difference between pixels and the level of discontinuity.
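
As a concrete example of the 2-D filtering idea described above, the following is a minimal sketch of gradient-based edge detection using the well-known Sobel kernels. The chapter surveys several such operators; this sketch assumes NumPy and SciPy are available, and the threshold value is an illustrative choice.

    import numpy as np
    from scipy.signal import convolve2d

    # Sobel kernels approximating the horizontal and vertical intensity gradients.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T

    def sobel_edges(gray, threshold=100.0):
        """Return a binary edge map of a grayscale image: a pixel is marked as
        an edge where the local gradient magnitude exceeds the threshold."""
        gx = convolve2d(gray, sobel_x, mode='same', boundary='symm')
        gy = convolve2d(gray, sobel_y, mode='same', boundary='symm')
        magnitude = np.hypot(gx, gy)             # gradient magnitude per pixel
        return (magnitude > threshold).astype(np.uint8)

    # A synthetic image with a bright square yields edges along the square's border.
    img = np.zeros((32, 32), dtype=float)
    img[8:24, 8:24] = 255.0
    edges = sobel_edges(img)
    print(edges.sum(), "edge pixels detected")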
