Introduction
With the rapid development and popularization of new media, social networks, and other platforms (Park, 2023), more and more users are inclined to express their opinions and emotions through multimodal data, with the most common form being the combination of images and text (Bie et al., 2023; Capecchi et al., 2022). Effective sentiment analysis of this massive and diverse social media data helps to better understand public emotions and opinion tendencies, thereby providing a scientific basis for government and enterprise decision-making (Banik et al., 2023; Wijayanti & Arisal, 2021).
Traditional single-modal sentiment analysis methods use only one type of information as the analysis object, which cannot meet the needs of multimodal data (Xu et al., 2019). In this situation, multimodal sentiment analysis has emerged, which extracts and fuses features from the multiple modalities of information published by users, thereby analyzing and predicting their emotions more accurately (Huang, Yang et al., 2019; Xiao et al., 2021). However, existing multimodal sentiment analysis models for image-text fusion still have some shortcomings:
1. Features of different modalities are usually simply concatenated, making it difficult to effectively fuse deep multimodal emotional features.
2. The images published by social media users are not necessarily associated with every word in the accompanying text. Existing methods do not measure the importance of each word based on the specific features of the image but instead fuse the image and text features directly, which degrades the final sentiment classification results.
3. Because aspect-based sentiment analysis is a fine-grained task, many existing multimodal sentiment analysis methods lack the ability to solve such problems (Dai et al., 2021).
To address the aforementioned issues, the authors propose an aspect-level multimodal sentiment analysis model for images and text based on the Transformer and multimodal multilayer attention interaction (TF-MMATI). The main contributions are as follows:
1. RoBERTa is used for pretraining and BiLSTM is employed to fully extract deep semantic information, so as to better capture the features of the text modality.
2. To address the ineffective fusion of image and text information, the proposed model aggregates text and image features through a Transformer encoder to reduce the feature discrepancy between the text and image modalities.
3. Traditional multimodal image-text sentiment analysis models typically fuse image and text information with little consideration of the aspect level. The proposed model therefore uses attention interaction mechanisms to weight the features of the text and image modalities at the aspect level, respectively, in order to resolve the inconsistency between image data and text data. At the same time, the researchers address the correlation between the text and image modalities through a deep attention interaction mechanism.
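The aspect-level attention weighting described above can be illustrated with a minimal sketch. The function below is a hypothetical, simplified stand-in (not the authors' implementation): it treats the aspect embedding as the attention query and the word (or image-region) embeddings as keys and values, computing scaled dot-product attention weights and an aspect-aware weighted sum. Vector dimensions and names are illustrative assumptions.

```python
import math

def _softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aspect_attention(aspect, features):
    """Weight each feature vector by its scaled dot-product similarity
    to the aspect vector, then return (weights, weighted-sum context).
    `features` may be word embeddings or image-region embeddings."""
    d = len(aspect)
    scores = [sum(a * f for a, f in zip(aspect, feat)) / math.sqrt(d)
              for feat in features]
    weights = _softmax(scores)
    context = [sum(w * feat[i] for w, feat in zip(weights, features))
               for i in range(d)]
    return weights, context

# Usage: a feature aligned with the aspect receives a larger weight.
weights, context = aspect_attention([1.0, 0.0],
                                    [[1.0, 0.0], [0.0, 1.0]])
```

In the full model, one such attention pass would run over text features and another over image features, letting the aspect term suppress irrelevant words or image regions before fusion.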
The research closely related to aspect-level sentiment analysis of image-text data mainly includes text aspect-level sentiment analysis, image sentiment analysis, and image-text sentiment analysis (Alahmary & Al-Dossari, 2023; Li et al., 2022; Mittal & Agrawal, 2022).