Moving Object Classification Under Illumination Changes Using Binary Descriptors

S. Vasavi, Ayesha Farha Shaik, Phani Chaitanya Krishna Sunkara
Copyright: © 2019 | Pages: 45
DOI: 10.4018/978-1-5225-5751-7.ch007

Abstract

Object recognition and classification have become important for surveillance videos recorded at prominent areas such as airports, banks, and military installations. Outdoor environments are more challenging for moving object classification because illumination changes and the large distance between the camera and the moving objects leave the objects' appearance details incomplete. There is therefore a need to monitor and classify moving objects in real time while accounting for these challenges. Training classifiers with feature-based approaches is easier and faster than with pixel-based approaches, and extracting a suitable set of features from the object of interest is the most important step for classification. Viewpoint and the sources of light illumination play a major role in the appearance of an object. In the presented approach, abrupt transitions are identified using the chi-square test, corners are detected using the Harris corner detector, silhouettes are captured using background subtraction, features are extracted using ORB, and a k-NN classifier is used for classification.
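As a rough illustration of the pipeline named in the abstract, the sketch below combines OpenCV's MOG2 background subtractor, ORB, and scikit-learn's k-NN classifier. The pooling of ORB descriptors into a fixed-length vector and names such as frame_to_feature and train_classifier are assumptions made for this example, not the chapter's implementation.

# Minimal sketch of the classification pipeline, assuming OpenCV and
# scikit-learn; the descriptor pooling step is an illustrative stand-in.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

back_sub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
orb = cv2.ORB_create(nfeatures=500)

def frame_to_feature(frame):
    """Return a fixed-length ORB-based feature vector for one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Background subtraction yields the silhouette of the moving object.
    mask = back_sub.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    # ORB keypoints and binary descriptors are computed only inside the silhouette.
    keypoints, descriptors = orb.detectAndCompute(gray, mask)
    if descriptors is None:
        return np.zeros(32, dtype=np.float32)
    # Average the 32-byte binary descriptors into one fixed-length vector
    # (a simple stand-in for whatever pooling the chapter actually uses).
    return descriptors.astype(np.float32).mean(axis=0)

def train_classifier(train_frames, train_labels, k=3):
    """Fit a k-NN classifier on labelled frames (e.g. human, vehicle)."""
    X = np.vstack([frame_to_feature(f) for f in train_frames])
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X, train_labels)
    return knn

Averaging the binary descriptors is only one convenient way to obtain a fixed-length input for k-NN; a bag-of-features encoding would serve the same purpose.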
Chapter Preview

Introduction

Nowadays, the recognition and classification of objects such as human beings, animals, buildings, and vehicles have become important in video surveillance systems. Classifying moving objects within a video sequence is challenging in outdoor environments because of incomplete appearance details, occlusions, dynamic backgrounds, and changing illumination conditions. Recorded videos cannot be analyzed manually, so a robust system is required that can monitor and classify moving objects in real time while coping with these challenges.

Motivation

Visual surveillance and monitoring of moving objects are required to identify suspicious activities at public places such as shopping malls, airports, railway stations, bus junctions, and banks, as well as in military applications. Manual monitoring by human operators over long durations is infeasible due to monotony and fatigue. As a result, recorded videos are usually inspected only after a suspicious event has been reported; this helps with recovery but does not prevent unwanted events. “Intelligent” video surveillance systems can identify various events and notify the concerned personnel when an unwanted event is detected. Such a system requires algorithms that are fast, robust, and reliable during phases such as detection, tracking, and classification. This can be achieved by implementing a fast and efficient technique to classify the objects present in the video in real time.

Problem Statement

Basic video analysis operations such as object detection, classification, and tracking require scanning the entire video. Because this is time consuming, a method is needed to detect and classify objects in the frames extracted from a real-time video. Our earlier work on moving object classification extracted texture, color, and structural features as well as Zernike moments. It was observed that classification accuracy depends on how far the object is from the camera and on its appearance in the video frame, and that object detection varies with illumination changes. This chapter therefore addresses moving object classification under illumination variations and abrupt changes by extracting illumination-robust features from key frames.
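The chi-square test for abrupt transitions mentioned above can be sketched as a key-frame selector that compares histograms of consecutive frames, assuming OpenCV; the histogram size, the detect_key_frames name, and the threshold value are illustrative choices, not values taken from the chapter.

# Hedged sketch: flag frames whose grayscale histogram differs sharply
# (chi-square distance) from the previous frame as abrupt transitions.
import cv2

def detect_key_frames(video_path, threshold=0.5):
    """Return indices of frames following an abrupt transition."""
    cap = cv2.VideoCapture(video_path)
    key_frames, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Chi-square distance between consecutive frame histograms;
            # a large jump signals an abrupt transition (candidate key frame).
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
            if dist > threshold:
                key_frames.append(idx)
        prev_hist = hist
        idx += 1
    cap.release()
    return key_frames

The threshold would in practice be tuned per dataset, since gradual illumination changes should not be confused with abrupt transitions.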
