Facial Emotion Recognition Using Osmotic Computing

P. Aurchana, R. Indhumathi, G. Revathy, A. Ramalingam
Copyright: © 2024 | Pages: 14
DOI: 10.4018/979-8-3693-1694-8.ch001

Abstract

Emotion recognition refers to the process of identifying the emotions expressed by an individual, typically through facial expressions, speech, body language, and sometimes physiological signals such as heart rate or skin conductance. In this chapter, facial expressions are used for recognition. Emotions such as happiness, sadness, anger, fear, surprise, and disgust are typically recognized. This chapter aims at developing a real-time approach to the classification of facial emotions such as happy, normal, yawn, and sleep. For this, images are captured using sensors and stored in a cloud storage bucket, where the processing is done. Faces are detected using Haar cascade classifiers. Histogram of oriented gradients (HOG) features are extracted from the detected face images, and the extracted features are classified by machine learning models, namely support vector machine (SVM) and k-nearest neighbour (KNN) classifiers, as happy, normal, yawn, or sleep. The suggested system outperforms other current systems when tested with real-time datasets.

1. Introduction

The term “emotion” derives from the Latin verb “movere,” meaning to provoke, arouse, disturb, or elicit a response. According to psychology, emotion is a complex state of feeling that alters both the body and the mind and affects behaviour and cognition. Emotions comprise three components: cognition, feeling, and behaviour. The cognitive component primarily shapes our assessment of a given situation, leading us to experience emotions in different ways or not at all; in everyday life, we continually contemplate and appraise our circumstances. When someone is stimulated, their emotions change quickly and visibly. Vocal, facial, postural, and gestural reactions are all part of the behavioural component (Social Work Education BD, n.d.). In the 1970s, the psychologist Paul Ekman identified six emotions that he believed all human societies shared: happiness, sorrow, disgust, fear, surprise, and rage. He later added pride, dishonour, shame, and enthusiasm to his list of primary emotions (Very Well Mind, n.d.). These emotions have served as research subjects for numerous scholars, and mixed emotions are also studied. Artificial intelligence systems can be given the ability to recognize emotions, and the recognized emotions can be applied in other computer vision applications to build interactive apps (Indhumathi & Geetha, 2019). Figure 1 depicts the range of emotions collected for this proposed work.

Figure 1.

Emotions for the proposed work

979-8-3693-1694-8.ch001.f01

In the proposed work, facial images conveying emotions are captured and stored in cloud bucket storage. During training, features are extracted using HOG, and the derived features are fed into KNN and SVM classifiers, which classify the emotions into four classes: happy, irritate, sleep, and yawn.
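The HOG-plus-classifier stage described above can be sketched as follows. This is a minimal, illustrative sketch only, not the chapter's actual implementation: the HOG descriptor here is a simplified hand-rolled version (central-difference gradients, unsigned orientations, per-cell L2 normalisation), and the training images are synthetic arrays with class-specific edge patterns standing in for the real captured face images.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def hog_features(img, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients descriptor."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]        # central differences, y
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180    # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):        # per-cell orientation histograms
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2 norm
    return np.concatenate(feats)

# Synthetic stand-in data: each class gets a vertical strip in a different
# position, so the HOG edge responses are class-discriminative.
rng = np.random.default_rng(0)
labels = ["happy", "irritate", "sleep", "yawn"]
X, y = [], []
for k, name in enumerate(labels):
    for _ in range(40):
        img = rng.normal(0.0, 0.05, (32, 32))
        img[:, k * 8:(k + 1) * 8] += 1.0
        X.append(hog_features(img))
        y.append(name)
X, y = np.asarray(X), np.asarray(y)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
print("SVM accuracy:", svm.score(Xte, yte))
print("KNN accuracy:", knn.score(Xte, yte))
```

In a real deployment, the input to `hog_features` would be the grayscale face regions returned by the Haar cascade detector, resized to a fixed window, rather than synthetic arrays.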

Figure 2.

Overall block diagram of the proposed work

979-8-3693-1694-8.ch001.f02

2. Literature Survey

Sun and Li (2017) proposed a convolutional neural network (CNN), a form of deep learning, for facial expression recognition. Three datasets, CK, JAFFE, and NVIE, were chosen to train and validate the model, using 10-fold cross-validation. T et al. (2015) presented an e-learning framework based on multi-user face detection performed with a support vector machine. Bahreini et al. (2014) described a multi-user e-learning framework that enhances learning with webcams and microphones, providing rapid real-time feedback based on the student's facial and verbal expressions. Krithika and Lakshmi Priya (2016) presented a framework designed to detect and monitor student emotions in an online learning environment; this system also offers a real-time feedback component to improve e-learning resources, and by recognizing eye and head movements it can gauge the students' level of concentration. For a remote learning platform, Salma Boumiza and her coworkers (Boumiza & Bekiarski, 2017) developed an automated tutor that makes use of both facial recognition and emotion identification technologies. Happy et al. (2013) introduced a non-invasive, self-contained model designed for the intelligent assessment of emotions and alertness states, with feedback adapted to the user's age. This system employs unobtrusive visual cues to assess the user's emotional and alertness states, providing appropriate feedback based on the recognized cognitive state. It takes into account facial expressions, visual parameters, body postures, and gestures to make these assessments.

For emotion detection in facial images, an efficient deep convolutional neural network (DCNN) using transfer learning (TL) with a pipeline tuning strategy has been proposed; the method achieves very high detection accuracy (Akhand et al., 2021). In that study, experiments conducted in a general environment without a pre-trained DCNN model resulted in misclassification of some confusing face images (mostly profile views).
