Robust Real-Time Facial Expressions Tracking and Recognition

J. Zraqou, W. Alkhadour, A. Al-Nu'aimi
Copyright: © 2014 | Pages: 11
DOI: 10.4018/ijtem.2014010108

Abstract

Enabling computer systems to track and recognize facial expressions, and then infer emotions from real-time video, is a challenging research topic. In this work, a real-time approach to emotion recognition through facial expressions in live video is introduced. Several automatic methods for face localization, facial feature tracking, and facial expression recognition are employed. Robust tracking is achieved by using a face mask to resolve mismatches that can be generated during the tracking process. Action units (AUs) are then built to recognize the facial expression in each frame. The main objective of this work is to provide the ability to predict human behavior, such as criminal intent, anger, or nervousness.

1. Introduction

Face detection is the first step in facial point detection, which is in turn an important step in several tasks such as face recognition, gaze detection, and facial expression analysis (Valstar, Martinez, Binefa, & Pantic, 2010). In the work proposed by Zraqou, Alkhadour, and Al-Nu'aimi (2013), the automated approach to recognizing and tracking facial expressions through a video suffers from mismatches generated by the optical flow tracking method. The method employed for extracting face points also suffers from drawbacks such as insufficient light on the face, object occlusion, and its inherent error margin. This can lead to incorrectly located points: for example, a point belonging to an eyebrow may appear at the level of the eye, as shown in Figure 1, or mouth points may appear at the level of the nose points.

Figure 1.

Automatic extraction of facial feature points. The marked points need to be resolved.

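The consistency rules behind the face mask are not spelled out here, but a minimal sketch of such a geometric sanity check could look like the following Python fragment (the point names and the rules themselves are illustrative assumptions, not the authors' actual mask):

```python
def check_point_plausibility(points):
    """Flag facial feature points that violate basic face geometry.

    `points` is a hypothetical dict mapping feature names to (x, y)
    coordinates, with the y-axis pointing down (image convention).
    Returns the names of points that need to be re-estimated.
    """
    suspect = []
    # An eyebrow point should lie above (smaller y than) its eye point.
    if points["left_brow"][1] >= points["left_eye"][1]:
        suspect.append("left_brow")
    if points["right_brow"][1] >= points["right_eye"][1]:
        suspect.append("right_brow")
    # Mouth corners should lie below the nose tip.
    for name in ("mouth_left", "mouth_right"):
        if points[name][1] <= points["nose_tip"][1]:
            suspect.append(name)
    return suspect

# Example: a left-brow point that slipped down to eye level is flagged.
pts = {"left_brow": (120, 95), "left_eye": (122, 90),
       "right_brow": (180, 70), "right_eye": (182, 90),
       "nose_tip": (150, 130), "mouth_left": (130, 160),
       "mouth_right": (170, 160)}
print(check_point_plausibility(pts))  # ['left_brow']
```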

In the general framework of recognizing and tracking facial expressions through a video sequence, the face detected in the first frame of a given video is tracked through the remaining sequence of images while the facial expressions in each frame are analyzed. The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is the most widely used expression coding system in the behavioral sciences.
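As a rough illustration of this framework, the following sketch detects a face once and then tracks feature points through the remaining frames. OpenCV's Haar cascade and Lucas-Kanade tracker are stand-ins chosen here for concreteness; the paper's own detector and tracker may differ, and the per-frame AU analysis is left as a stub:

```python
import cv2
import numpy as np

# Detect a face in the first frame that contains one, then track
# feature points through the rest of the video.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("input.avi")  # hypothetical input video

prev_gray, points = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if points is None:
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            continue  # keep scanning until the first face is detected
        x, y, w, h = faces[0]
        # Seed trackable points inside the detected face region.
        corners = cv2.goodFeaturesToTrack(
            gray[y:y+h, x:x+w], maxCorners=40,
            qualityLevel=0.01, minDistance=5)
        points = corners + np.array([x, y], np.float32)
    else:
        points, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, points, None)
        points = points[status.flatten() == 1].reshape(-1, 1, 2)
        # analyze_expression(points)  # per-frame AU analysis goes here
    prev_gray = gray
cap.release()
```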

Finding fast detection algorithms that deliver satisfactory results, and then optimizing the whole process to run efficiently, are the key challenges. Several facial feature detection and tracking algorithms are therefore investigated with respect to their time consumption.
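A time-consumption comparison of this kind can be as simple as averaging wall-clock time per frame. The harness below is a hypothetical example, with the Haar-cascade detector used purely as a placeholder for any candidate algorithm:

```python
import time
import cv2

def mean_detection_time(detector, frames, runs=3):
    """Rough per-frame wall-clock timing of a detector over test frames.

    `detector` is any callable taking a grayscale frame; averaging over
    several runs smooths out scheduling noise.
    """
    start = time.perf_counter()
    for _ in range(runs):
        for f in frames:
            detector(f)
    return (time.perf_counter() - start) / (runs * len(frames))

# Hypothetical usage with OpenCV's stock Haar cascade:
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# frames = [...]  # grayscale test frames loaded elsewhere
# print(mean_detection_time(lambda g: cascade.detectMultiScale(g), frames))
```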

An approach to real-time feature tracking customized for expression imitation was presented in Cao and Guo (2002). The system tracks facial expressions and head pose with one camera and generates a face animation that imitates these expressions. Gabor wavelet coefficients were used to track the facial feature points, and a few tracking errors were corrected manually.
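For concreteness, a feature point can be described by its responses to a small bank of Gabor kernels, roughly in the spirit of the tracker above; all parameter values below are illustrative assumptions:

```python
import cv2
import numpy as np

def gabor_descriptor(gray, x, y, ksize=21):
    """Describe the point (x, y) by its Gabor responses at 4 orientations."""
    responses = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        filtered = cv2.filter2D(gray.astype(np.float32), -1, kernel)
        responses.append(filtered[y, x])  # response at the feature point
    return np.array(responses)

# A point is then matched in the next frame by searching a small window
# for the location whose descriptor is closest to the previous one.
```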

The work proposed in Viola and Jones (2004) requires image registration across a sequence of images to recognize facial AUs. Subtle changes in facial behavior were analyzed to recognize expressions. All image sequences used must show the faces in the same position and at the same scale. The problem of self-occlusion was tackled by recording motion history at multiple time intervals instead of recording it once for the entire image sequence. The AUs were recognized based on multilevel motion history images (MMHIs).
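The core update behind a motion history image, on which the multilevel variant builds, can be sketched in a few lines; the thresholded frame differencing that produces the motion mask is assumed to happen elsewhere:

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """One update step of a motion history image (MHI).

    Pixels that moved in the current frame are stamped with the current
    timestamp; older entries fade out once they fall outside `duration`.
    A multilevel variant (MMHI) keeps several MHIs at different durations.
    """
    mhi[motion_mask] = timestamp
    mhi[(~motion_mask) & (mhi < timestamp - duration)] = 0
    return mhi

# Illustrative use: the mask would come from |frame_t - frame_{t-1}| > tau.
h, w = 120, 160
mhi = np.zeros((h, w), np.float32)
for t in range(1, 10):
    motion_mask = np.zeros((h, w), bool)
    motion_mask[40:60, 50:70] = True  # stand-in for real frame differencing
    mhi = update_mhi(mhi, motion_mask, float(t), duration=5.0)
```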

Rowley, Baluja, and Kanade (1998) included three modules to extract feature information: dense-flow extraction using a wavelet motion model, facial-feature tracking, and edge and line extraction. The feature information was classified into FACS action units using a hidden Markov model (HMM).
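One common way to realize such an HMM classifier, shown here only as a sketch using the hmmlearn package rather than the authors' implementation, is to train one Gaussian HMM per action unit and label a new sequence by maximum log-likelihood:

```python
import numpy as np
from hmmlearn import hmm  # assumes the hmmlearn package is installed

def train_au_models(sequences_by_au, n_states=3):
    """Fit one Gaussian HMM per AU on its feature-trajectory sequences."""
    models = {}
    for au, seqs in sequences_by_au.items():
        X = np.vstack(seqs)               # stack all sequences row-wise
        lengths = [len(s) for s in seqs]  # per-sequence lengths for fitting
        m = hmm.GaussianHMM(n_components=n_states, n_iter=50)
        m.fit(X, lengths)
        models[au] = m
    return models

def classify(models, seq):
    """Label a new feature sequence by the highest-scoring AU model."""
    return max(models, key=lambda au: models[au].score(seq))
```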

Tracking feature points with an optical flow method was explored by Hsu, Abdel-Mottaleb, and Jain (2002). Combinations of action units in image sequences of 100 young adults were employed for testing. Only the brow and mouth regions were selected for the analysis, and the selected facial features were tracked by estimating optical flow. The Facial Action Coding System (FACS) was used to achieve high validity.
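A sketch of how tracked trajectories can be reduced to brow and mouth motion cues follows; the point indices and the array layout are assumptions for illustration, not the original study's labeling:

```python
import numpy as np

# Hypothetical index sets into the tracked point array.
BROW_IDX = [0, 1, 2, 3]
MOUTH_IDX = [10, 11, 12, 13]

def region_displacements(tracks):
    """Reduce point tracks to per-frame brow and mouth motion cues.

    `tracks` has shape (n_frames, n_points, 2), with the first frame
    assumed to show a neutral face that serves as the reference.
    """
    ref = tracks[0]
    disp = tracks - ref                         # displacement per point
    brow = disp[:, BROW_IDX, 1].mean(axis=1)    # mean vertical brow motion
    mouth = disp[:, MOUTH_IDX, 1].mean(axis=1)  # mean vertical mouth motion
    # With the y-axis pointing down, sustained negative brow values
    # suggest a brow raise; positive mouth values suggest a drop.
    return brow, mouth
```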

A real-time automated system for recognizing human facial expressions was presented by Anderson and McOwan (2006). Facial motion was used to characterize monochrome frontal views of facial expressions, covering six emotions: happiness, sadness, disgust, surprise, fear, and anger. The spatial ratio template tracker algorithm was used to locate faces, and a real-time implementation of a gradient model computed the optical flow of the face. The facial velocity information was then averaged over identified regions of the face to achieve facial expression recognition.
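The region-averaging idea can be sketched as follows, with OpenCV's Farneback dense flow standing in for the paper's real-time gradient model and the face regions given as illustrative bounding boxes:

```python
import cv2
import numpy as np

def region_velocities(prev_gray, gray, regions):
    """Average dense optical flow inside face regions.

    `regions` is a list of (x, y, w, h) boxes, e.g. brow, eye, and mouth
    areas located beforehand; the result is a compact motion signature
    that can be fed to a classifier over the known expressions.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    sig = []
    for (x, y, w, h) in regions:
        patch = flow[y:y+h, x:x+w]
        sig.extend(patch.mean(axis=(0, 1)))  # mean (dx, dy) per region
    return np.array(sig)
```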
