Introduction
With the evolution of the internet and the growing maturity of smart devices, individuals increasingly express their opinions on social media platforms. Sentiment analysis technology enables the automatic extraction of overall emotional trends and in-depth analysis of subtle emotional nuances from subjective text. This not only helps governments gain a deeper understanding of online sentiment during critical events but also assists businesses in comprehensively grasping user interests and trends, and makes it easier for consumers to evaluate the merits and drawbacks of products. Consequently, sentiment analysis is of significant research importance.
Sentiment analysis is the process of employing efficient algorithms to discern emotional expressions in diverse data modalities, such as text, images, or sound, with the goal of predicting emotion. In the past, sentiment classification predominantly centered on single modalities, with text-based sentiment analysis being a prominent research focus (De Carvalho & Costa, 2021). For instance, Tran & Phan (2018) used machine translation, logistic regression, and fuzzy rules to construct a Vietnamese sentiment dictionary, which they combined with semantic rules for precise sentiment classification of text; however, this approach overlooks the contextual information of the text. Xie et al. (2017) introduced a maximum entropy classification model based on probabilistic latent semantic analysis to learn important sentiment classification features and thereby enhance classification performance. Nevertheless, traditional machine learning algorithms often struggle to extract high-quality features from text, and feature engineering typically demands substantial human effort, making it an inefficient procedure.
Many mainstream methods now incorporate neural networks to convert words into embedded representations, reducing the reliance on feature engineering and enabling more comprehensive and in-depth extraction of the semantic information in text.
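To make the idea concrete, the sketch below shows the basic mechanism behind such embedded representations: each word in a (hypothetical, toy) vocabulary indexes a row of a trainable embedding matrix, replacing hand-crafted features with dense vectors. The vocabulary, dimensionality, and random initialization here are illustrative assumptions, not the setup used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary for illustration; in practice the vocabulary and the
# embedding matrix are learned from a training corpus.
vocab = {"good": 0, "movie": 1, "bad": 2}
embed_dim = 4

# Embedding matrix: one (normally trainable) row per vocabulary word.
E = rng.normal(size=(len(vocab), embed_dim))

def embed(tokens):
    """Map a token sequence to its matrix of embedding vectors."""
    ids = [vocab[t] for t in tokens]
    return E[ids]  # shape: (len(tokens), embed_dim)

X = embed(["good", "movie"])
print(X.shape)  # (2, 4)
```

During training, gradients flow into the rows of `E`, so words that behave similarly in context end up with similar vectors, which is what removes the need for manual feature engineering.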
With the prevalence of smart mobile devices, the way social media content is shared has gradually evolved from plain text to combinations of text, images, and short videos. Images emphasize key textual information, enabling richer content presentation and subjective emotional expression in image-text pairs. However, traditional single-modality methods cannot effectively model multiple feature types, which has motivated multimodal approaches. To address conflicts in the fusion of multimodal features, Huang et al. (2023) introduced a context-based adaptive multimodal fusion network. Low et al. (2024) utilized a long short-term memory-gated recurrent unit (LSTM-GRU) network to classify different types of sexual harassment. Li et al. (2023) proposed a hierarchical cross-modal interaction module that simultaneously models inter-modal relationships and intra-modal dynamics, mining complementary semantics among missing modalities.
Multimodal modeling approaches can integrate the available features to enhance sentiment analysis performance, but several challenges persist. First, the same word carries different meanings in different contexts, so representing it with a fixed embedding vector is inadequate. Second, significant differences exist between features from different modalities, making it challenging to fuse multimodal features effectively while preserving their shared characteristics.
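One common way to bridge the modality gap described above is cross-modal attention, where text tokens (queries) attend over image region features (keys/values) to produce image-aware text representations. The sketch below is a generic scaled dot-product formulation under assumed feature shapes; it illustrates the mechanism only and is not the specific network proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_modal_attention(text_feats, image_feats):
    """Scaled dot-product attention from text tokens to image regions.

    text_feats:  (T, d) matrix of T text token features.
    image_feats: (R, d) matrix of R image region features.
    Returns a (T, d) matrix of image-aware text features.
    """
    d = text_feats.shape[-1]
    # Similarity of every text token to every image region.
    scores = text_feats @ image_feats.T / np.sqrt(d)  # (T, R)
    # Numerically stable softmax over the image regions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted combination of region features per text token.
    return weights @ image_feats  # (T, d)

text = rng.normal(size=(5, 8))   # e.g., 5 text tokens, dimension 8
image = rng.normal(size=(3, 8))  # e.g., 3 image regions, dimension 8
fused = cross_modal_attention(text, image)
print(fused.shape)  # (5, 8)
```

In practice the queries, keys, and values would each pass through learned projection matrices first; this fixed-projection version keeps the attention weighting itself visible.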
To enhance sentiment analysis performance, we introduce a deep feature interaction network (DFINet) for multimodal sentiment analysis. First, it constructs word-level graph structures and employs graph convolutional networks (GCN) to aggregate word embedding features. It then uses a deep attention interaction network (DAIN) to learn the contextual dependencies of the text, yielding text representations. In addition, it leverages a cross-modal attention network to learn the interaction information between text and image features. Experimental results validate the effectiveness of the proposed DFINet. In summary, the main contributions of this paper include: