A Texture Features-Based Robust Facial Expression Recognition

Jayati Krishna Goswami, Sunita Jalal, Chetan Singh Negi, Anand Singh Jalal
Copyright: © 2022 | Pages: 15
DOI: 10.4018/IJCVIP.2022010103

Abstract

Facial expression plays an important role in communicating emotions. In this paper, a robust method for recognizing facial expressions is proposed using a combination of appearance features. Traditionally, appearance-based methods divide the face image into a regular grid of blocks before computing features for facial expression recognition. In this paper, however, appearance features are computed over specific regions obtained by extracting facial components such as the eyes, nose, mouth, and forehead. The proposed approach comprises five stages: face detection and region-of-interest extraction, feature extraction, pattern analysis using a local descriptor, fusion of the appearance features, and finally classification using a Multiclass Support Vector Machine (MSVM). Results of the proposed method are compared with earlier holistic representations for recognizing facial expressions, and the proposed method is found to outperform state-of-the-art methods.

1. Introduction

Over the last few decades, researchers have carried out extensive work on facial expression recognition in the fields of computer vision and pattern recognition. Generally, there are three types of communication: verbal, non-verbal, and paraverbal. In a typical conversation, verbal, non-verbal, and paraverbal cues contribute about 7%, 55%, and 38%, respectively. Facial expression plays an essential role in non-verbal communication, and it is equally significant in human-to-human and human-to-machine interaction. Facial expressions are produced by the movement of muscles beneath the facial skin, and these movements change with internal or external emotional states. Eye contact, for example, signals participation in a conversation and helps build a bond with others.

In the early 1970s, the psychologist Paul Ekman and his associates (Ekman & Friesen, 1978) studied facial expressions and proposed six primary expressions: happiness, sadness, anger, disgust, surprise, and fear. A happy face is characterized by curved, narrowed eyes. In the sad expression, the inner corners of the eyebrows are drawn in, the skin beneath the eyebrows is triangulated, and the face scowls. Anger is associated with obnoxious and bothersome states, and the angry expression shows compressed eyebrows and thin, tensed eyelids. In a disgusted face, the upper eyelid is raised, the eyebrows are pulled down, and the nose is wrinkled. In a surprised face, the skin below the eyebrows is stretched, the eyes widen, and the mouth gapes, which makes it easy to recognize. The fear expression can be identified by a wrinkled forehead between the eyebrows, tensed lips, and an open mouth (Revina & Emmanuel, 2018).

Fig. 1 illustrates the generic Facial Expression Recognition (FER) framework. In general, a FER system comprises two stages: feature extraction and classification of the facial expression. Feature extraction can be performed using two kinds of methods, geometric-based and appearance-based (Dols & Russell, 2017; Tian et al., 2011; Chanti et al., 2017). In geometric-based methods, facial landmarks such as the eyes, forehead, nose, and mouth are located to describe the facial geometry. Although facial analysis applications based on geometric features (Viola & Jones, 2004; Bartlett et al., 2002; Bartlett et al., 2005) yield favorable results, they are difficult to compute reliably in many situations. A rough sketch of the face-detection and region-extraction step is given after Fig. 1.

Figure 1. Generic framework of FER
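
As a rough illustration of the first stage (face detection and region-of-interest extraction), the following Python sketch uses OpenCV's Haar-cascade detector, an implementation of the Viola-Jones framework cited above. The cascade file and the fixed band proportions used to crop the forehead, eyes, nose, and mouth are illustrative assumptions, not the exact regions used in the proposed method.

```python
import cv2

# Viola-Jones style Haar-cascade face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_rois(gray_image):
    """Detect a face and crop coarse forehead/eye/nose/mouth regions.

    The band proportions below are assumptions for illustration only.
    """
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                        # use the first detected face
    face = gray_image[y:y + h, x:x + w]
    return {
        "forehead": face[0:int(0.25 * h), :],                # top band
        "eyes":     face[int(0.20 * h):int(0.50 * h), :],    # upper-middle band
        "nose":     face[int(0.40 * h):int(0.70 * h),
                         int(0.25 * w):int(0.75 * w)],       # central block
        "mouth":    face[int(0.65 * h):h, :],                # lower band
    }

# Usage (assuming a grayscale face image on disk):
# img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
# rois = extract_rois(img)
```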

In contrast, in appearance-based methods, filters are applied either holistically over the whole image or specifically on its regions to discriminate variations in facial expressions (Liang et al., 2016). The Local Binary Pattern (LBP) (reference) is frequently adopted for extracting texture-based features, as it captures local region characteristics of the images on which facial expression recognition is performed. However, existing LBP-based facial expression recognition methods cannot cope with local illumination changes, which make recognition difficult due to uncertainty in some facial regions. A minimal sketch of such a texture descriptor is given below.
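
The following is a minimal sketch of an LBP texture descriptor, assuming the standard uniform LBP operator from scikit-image rather than the specific descriptor used in the paper; the radius, number of sampling points, and histogram binning are illustrative choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region, points=8, radius=1):
    """Summarise a grayscale region by the histogram of its uniform LBP codes."""
    codes = local_binary_pattern(region, P=points, R=radius, method="uniform")
    n_bins = points + 2                  # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Applied region by region, these histograms describe local texture; applied to
# the whole face, the same call gives the holistic representation mentioned above.
```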

Block-level processing of face images plays a key role in capturing fine details and boundaries by centring the blocks on facial landmarks. However, it also increases the dimensionality of the extracted features, and it becomes inconvenient to specify every characteristic of the image texture. Guo et al. (2017) introduced a facial expression recognition technique called "K-ELBP", in which the Karhunen-Loève transform (KLT) is merged with the Extended Local Binary Pattern (ELBP). Feature values are extracted by applying ELBP to the expression images, and the feature vector is obtained by reducing the dimensionality of the feature matrix with a covariance-matrix transform. Finally, a Support Vector Machine (SVM) is used to classify the facial expression. A sketch of this kind of fusion, reduction, and classification pipeline is given below.
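
The sketch below assembles the pieces in the spirit of the pipeline just described, reusing the extract_rois and lbp_histogram helpers from the earlier sketches: region-wise texture histograms are concatenated (feature fusion), the dimensionality is reduced with PCA as a stand-in for the KLT/covariance-matrix transform, and a multiclass SVM performs the final classification. The dataset handling and all hyper-parameters are assumptions for illustration, not the settings of K-ELBP or of the proposed method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(rois):
    """Concatenate the per-region texture histograms into one feature vector."""
    return np.concatenate([lbp_histogram(r) for r in rois.values()])

# X: one fused feature vector per training image, y: expression labels (0..5).
# X = np.stack([fuse_features(extract_rois(img)) for img in train_images])
# y = np.array(train_labels)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),       # keep 95% of the variance (illustrative choice)
    SVC(kernel="rbf", C=10.0),    # scikit-learn's SVC is multiclass (one-vs-one)
)
# clf.fit(X, y)
# predictions = clf.predict(X_test)
```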
