Channel Semantic Enhancement-Based Emotional Recognition Method Using SCLE-2D-CNN

Dan Fu, Weisi Yang, Li Pan
Copyright: © 2024 | Pages: 22
DOI: 10.4018/IJSWIS.337286

Abstract

Existing EEG emotion classification methods suffer from insufficient representation of emotional information and, owing to feature redundancy, lack a targeted channel enhancement module. To this end, a novel EEG emotion recognition method (SCLE-2D-CNN) combining scaled convolutional layers (SCLs), a channel enhancement module, and a two-dimensional convolutional neural network (2D-CNN) is proposed. First, the time-frequency features of multi-channel EEG emotional signals are extracted by stacking SCLs layer by layer. Second, the channel enhancement module reassigns different importance to the EEG physical channels. Finally, the 2D-CNN extracts deep local spatiotemporal features and completes the emotion classification. The experimental results show that the proposed method achieves an accuracy of 98.09% and an F1-score of 97.00% on the SEED dataset, and binary classification accuracies of 98.06% and 96.83% on the DEAP dataset, outperforming the other comparison methods. The proposed method therefore has promising applications in the recognition of human mental states.
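For illustration only, the following is a minimal sketch of how a pipeline of this kind could be assembled, assuming PyTorch. The SCL design, the layer sizes, the static per-channel weights used here as the channel enhancement step, and the input dimensions are all assumptions made for this sketch, not the configuration reported in the paper.

# Hypothetical sketch of an SCLE-2D-CNN-style pipeline: stacked scaled
# convolutional layers (SCLs) for time-frequency features, a channel
# enhancement step that reweights the EEG channels, and a small 2D-CNN head
# for classification. All sizes are illustrative assumptions.
import torch
import torch.nn as nn


class ScaledConvLayer(nn.Module):
    """Assumed SCL: a 1-D depthwise convolution over time, per EEG channel."""

    def __init__(self, channels: int, kernel_size: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels)
        self.act = nn.ReLU()

    def forward(self, x):            # x: (batch, channels, time)
        return self.act(self.conv(x))


class SCLE2DCNN(nn.Module):
    def __init__(self, n_channels: int = 32, n_classes: int = 2):
        super().__init__()
        # Stacked SCLs with growing ("scaled") kernel sizes -- an assumption.
        self.scls = nn.Sequential(ScaledConvLayer(n_channels, 3),
                                  ScaledConvLayer(n_channels, 5),
                                  ScaledConvLayer(n_channels, 7))
        # Channel enhancement: one learnable importance weight per channel.
        self.channel_weights = nn.Parameter(torch.ones(n_channels))
        # 2D-CNN over the (channel x time) map for local spatiotemporal features.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        x = self.scls(x)
        x = x * self.channel_weights.view(1, -1, 1)   # reweight channels
        x = x.unsqueeze(1)                            # (batch, 1, channels, time)
        x = self.cnn(x).flatten(1)
        return self.classifier(x)


logits = SCLE2DCNN()(torch.randn(4, 32, 256))   # 4 trials, 32 channels, 256 samples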

Introduction

Emotions reflect the current psychological and physiological states of humans, and they play a vital role in cognition, decision-making, and communication. Stable and optimistic emotions are important indicators of mental and physical health. For example, positive emotions such as joy, excitement, surprise, satisfaction, love, and friendship help foster a healthy psychology, enhancing happiness, satisfaction, and self-confidence (Li et al., 2022b). Conversely, negative emotions, including anger, sadness, anxiety, fear, shame, and disgust, can easily lead to psychological and physiological problems. With the rapid development of artificial intelligence (Tan et al., 2022; Jiao et al., 2022), big data (Thirumalaisamy et al., 2022), cloud computing (Vijayakumar et al., 2022; Dwivedi, 2022), fog computing (Thoumi & Haraty, 2022), the Internet of Things (Chamra & Harmanani, 2020), smart homes (Madhu et al., 2022; Guebli & Belkhir, 2021), intelligent communication (Samir et al., 2020), and other fields, the scope of application for human emotion recognition is becoming increasingly broad. Therefore, recognizing human emotions conveniently, effectively, and accurately is important for advancing emerging areas such as artificial intelligence, Web 3.0, and the metaverse (Kouti et al., 2022).

Human facial expressions, speech signals, and posture or gait are commonly used for emotion recognition. However, these signals are easily influenced by subjective human behavior and may not reflect the true emotional state (Zhao et al., 2023). Emotion recognition based on electroencephalography (EEG) signals avoids deliberate camouflage of emotions and can analyze emotions more accurately by measuring electrophysiological activity (Islam et al., 2021; Li et al., 2019). According to the number of channels in the EEG signal, methods can be divided into single-channel approaches (Dan, 2021) and multi-channel approaches (Jie et al., 2022). The main advantage of single-channel methods is high efficiency, whereas multi-channel methods provide multi-dimensional, more comprehensive information and achieve higher recognition rates. From the perspective of feature extraction, methods can be divided into manual feature extraction and deep learning-based automatic feature extraction (Ghosh et al., 2021; Appati et al., 2021). Manual feature extraction draws on rich prior knowledge and can fully exploit the non-stationary, non-linear nature of EEG signals, while deep learning-based approaches can extract various types of information from EEG signals and usually achieve higher recognition rates than manual extraction (Tan et al., 2021a; Zali-Vargahan et al., 2023).
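As a concrete illustration of the manual route, average band power per channel is one widely used hand-crafted EEG feature. The sketch below assumes NumPy/SciPy, generic band limits, and a 128 Hz sampling rate; it is a generic example, not the feature set used in this paper.

# Illustrative "manual" EEG feature extraction: average band power per channel
# in standard frequency bands, computed from a Welch power spectral density
# estimate. Band limits and the 128 Hz sampling rate are generic assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}


def band_power_features(eeg: np.ndarray, fs: float = 128.0) -> np.ndarray:
    """eeg: (n_channels, n_samples) -> features: (n_channels, n_bands)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(eeg.shape[1], 2 * int(fs)), axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # mean power within the band
    return np.stack(feats, axis=-1)


features = band_power_features(np.random.randn(32, 1280))   # 10 s of 32-channel EEG
print(features.shape)                                       # (32, 4)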

However, whether single-channel or multi-channel approaches are adopted, many existing methods still have potential problems. Multi-channel EEG data is large and contains considerable redundant information that is only weakly relevant to the emotion recognition task, resulting in insufficient representation of emotional information. In addition, because every channel in the raw EEG signal is treated as equally important, it is difficult to selectively enhance the signals of different channels to improve recognition performance.
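For illustration, the hypothetical sketch below shows one way different importance could be assigned to EEG channels at run time, in the spirit of a channel enhancement module: a generic squeeze-and-excitation-style gate that, unlike the static weights in the earlier sketch, computes per-channel weights from the data itself. It is not the module defined in this paper.

# Hypothetical channel reweighting: a generic squeeze-and-excitation-style gate
# that assigns a data-dependent importance weight to each EEG channel.
import torch
import torch.nn as nn


class ChannelGate(nn.Module):
    def __init__(self, n_channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, time)
        summary = x.mean(dim=-1)          # squeeze: one statistic per channel
        weights = self.fc(summary)        # excite: per-channel importance in (0, 1)
        return x * weights.unsqueeze(-1)  # reweight the original channels


gated = ChannelGate(32)(torch.randn(4, 32, 256))   # channels now unequally weighted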

To resolve the above-mentioned issues, this paper proposes an EEG emotion recognition method (SCLE-2D-CNN) that combines SCLs, a channel enhancement module, and a 2D-CNN. The proposed method achieves accurate emotion recognition through multi-channel time-frequency feature extraction and enhancement of the EEG signal channels. The main innovation points are:
