A Multimodal Sentiment Analysis Model for Graphic Texts Based on Deep Feature Interaction Networks

Wanjun Chang, Dongfang Zhang
Copyright: © 2024 | Pages: 19
DOI: 10.4018/IJACI.355192

Abstract

With the widespread adoption of social networks, image-text comments have become a prevalent mode of emotional expression alongside traditional text descriptions. However, two major challenges remain: how to effectively extract rich representations from both text and images, and how to extract cross-modal shared emotion features. This study proposes a multimodal sentiment analysis method based on a deep feature interaction network (DFINet). It leverages word-to-word graphs and a deep attention interaction network (DAIN) to learn text representations effectively from multiple subspaces, and it introduces a cross-modal attention interaction network to extract cross-modal shared emotion features efficiently. This approach alleviates the difficulties of acquiring image-text features and of representing cross-modal shared emotion features. Experimental results on the Yelp dataset demonstrate the effectiveness of the DFINet method.

Introduction

With the evolution of the internet and the maturation of smart devices, individuals increasingly express their opinions on social media platforms. Sentiment analysis technology enables the automatic extraction of general emotional trends and in-depth analysis of the emotional nuances inherent in subjective text. This not only helps governments gain a deeper understanding of online sentiment trends during critical events, but also helps businesses comprehensively grasp user interests and trends, and makes it easier for consumers to evaluate the merits and demerits of products. Consequently, sentiment analysis is of significant research importance.

Sentiment analysis is the process of employing efficient algorithms to discern emotional expressions in diverse data modalities, such as text, images, or sound, with the goal of predicting emotions. In the past, sentiment classification predominantly centered on single modalities, with text-based sentiment analysis being a prominent area of research interest (De Carvalho & Costa, 2021). For instance, Tran & Phan (2018) utilized machine translation, logistic regression, and fuzzy rules to construct a Vietnamese sentiment dictionary and employed this dictionary, together with semantic rules, for sentiment classification of text; however, this approach overlooks the contextual information of the text. Xie et al. (2017) introduced a maximum entropy classification model based on probabilistic latent semantic analysis to learn important sentiment classification features and thereby enhance classification performance. However, traditional machine learning algorithms frequently struggle to extract high-quality features from text, and feature engineering typically demands significant human intervention, making it an inefficient procedure.

Many mainstream methods now incorporate neural networks to convert words into embedded representations, reducing the reliance on feature engineering and enabling more comprehensive and in-depth extraction of semantic information from text.
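As a minimal illustration of how such embedded representations are obtained (not the authors' implementation), the sketch below uses a learnable PyTorch embedding layer to map word indices to dense vectors; the vocabulary size, embedding dimension, and token indices are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; placeholders, not values from the paper.
VOCAB_SIZE = 10_000   # number of distinct words in the vocabulary
EMBED_DIM = 300       # dimensionality of each word vector

# A learnable lookup table: each word index maps to a dense vector,
# replacing hand-crafted feature engineering.
embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)

# A toy sentence encoded as word indices (arbitrary values).
token_ids = torch.tensor([[12, 457, 9, 3021]])

word_vectors = embedding(token_ids)
print(word_vectors.shape)  # torch.Size([1, 4, 300])
```

Because the lookup table is trained jointly with the downstream model, the resulting vectors capture semantic regularities without hand-crafted features.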

With the prevalence of smart mobile devices, the way social media content is shared has gradually evolved from text alone to combinations of images, text, and short videos. Images emphasize key textual information, enabling finer content presentation and richer subjective emotional expression in image-text pairs. However, traditional single-modality methods cannot effectively model these heterogeneous features, which has led to the emergence of multimodal approaches. To address conflicts in the fusion of multimodal features, Huang et al. (2023) introduced a context-based adaptive multimodal fusion network. Low et al. (2024) utilized a long short-term memory-gated recurrent unit (LSTM-GRU) network to classify different types of sexual harassment. Li et al. (2023) proposed a hierarchical cross-modal interaction module that simultaneously models inter-modal relationships and intra-modal dynamics, delving into the complementary semantics among missing modalities.

Multimodal modeling approaches have the potential to integrate the available features to enhance sentiment analysis performance. However, several challenges persist. First, the same word can have different meanings in different contexts, so representing it with a fixed embedding vector is unreasonable. Second, significant differences exist between features from different modalities, making effective fusion that preserves their shared characteristics another challenging problem.

To enhance sentiment analysis performance, we introduce a deep feature interaction network (DFINet) for multimodal sentiment analysis. First, it constructs word-level graph structures and employs graph convolutional networks (GCN) to aggregate word embedding features. Then, it uses a deep attention interaction network (DAIN) to learn the contextual dependencies of the text, producing the text representations. Additionally, it leverages a cross-modal attention network to learn the interaction information between text and image features; a schematic sketch of this pipeline is given below. Experimental results validate the effectiveness of the DFINet method. In summary, the main contributions of this paper include:
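The components above are described only at a high level, so the following is a schematic, non-authoritative sketch under assumptions: a single-layer GCN aggregates word embeddings over a row-normalized word-to-word adjacency matrix, a standard multi-head self-attention layer stands in for the deep attention interaction network (DAIN), and cross-modal attention uses the text representation as queries over image region features. All module names, dimensions, and the mean-pooling classifier are illustrative choices, not the paper's exact architecture, and both branches are assumed to share a common feature dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step over a word-to-word adjacency matrix."""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (batch, num_words, dim); adj: (batch, num_words, num_words), row-normalized.
        return F.relu(self.linear(torch.bmm(adj, x)))


class DFINetSketch(nn.Module):
    """Schematic stand-in for DFINet: GCN plus self-attention for text,
    then cross-modal attention to fuse text and image features."""

    def __init__(self, dim=256, heads=4, num_classes=3):
        super().__init__()
        self.gcn = SimpleGCNLayer(dim)
        # Stand-in for the deep attention interaction network (DAIN).
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-modal attention: text queries attend over image region features.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, word_emb, adj, image_feats):
        text = self.gcn(word_emb, adj)                  # aggregate over the word graph
        text, _ = self.text_attn(text, text, text)      # contextual dependencies
        fused, _ = self.cross_attn(text, image_feats, image_feats)  # shared emotion features
        return self.classifier(fused.mean(dim=1))       # sentence-level sentiment logits


# Toy forward pass with random tensors; all shapes are placeholders.
B, N_WORDS, N_REGIONS, DIM = 2, 12, 49, 256
model = DFINetSketch(dim=DIM)
logits = model(
    torch.randn(B, N_WORDS, DIM),
    torch.softmax(torch.randn(B, N_WORDS, N_WORDS), dim=-1),  # dummy adjacency
    torch.randn(B, N_REGIONS, DIM),
)
print(logits.shape)  # torch.Size([2, 3])
```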
