Image and Text Aspect Level Multimodal Sentiment Classification Model Using Transformer and Multilayer Attention Interaction

Xiuye Yin, Liyong Chen
DOI: 10.4018/IJDWM.333854

Abstract

Many existing image-text sentiment analysis methods consider only the interaction between the image and text modalities while ignoring the inconsistency and correlation of image and text data. To address this issue, an image and text aspect-level multimodal sentiment analysis model using a transformer and multilayer attention interaction is proposed. First, ResNet50 is used to extract image features, and RoBERTa-BiLSTM is used to extract text and aspect-level features. Then, through an aspect direct interaction mechanism and a deep attention interaction mechanism, aspect information and image-text information are fused at multiple levels to filter out text and image content unrelated to the given aspect. The sentiment representations of the text data, the image data, and the aspect are concatenated, fused, and passed through a fully connected layer. Finally, the designed sentiment classifier performs aspect-level sentiment analysis on the image-text data, effectively improving the performance of image-text sentiment discrimination.
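To make the feature-extraction stage described in the abstract concrete, the following PyTorch sketch pairs a ResNet50 image encoder with a RoBERTa-BiLSTM text encoder. This is a minimal illustration assuming a HuggingFace `transformers`/`torchvision` implementation; the module names, hidden sizes, and wiring are assumptions for exposition, not the authors' published code.

```python
# Minimal sketch of the TF-MMATI feature extractors (assumed PyTorch/HuggingFace
# implementation; layer sizes and wiring are illustrative, not the authors' code).
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import RobertaModel

class ImageTextEncoders(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # ResNet50 backbone for image features; drop the classification head
        # so the encoder emits a 2048-d pooled visual representation.
        backbone = resnet50(weights="IMAGENET1K_V2")
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.image_proj = nn.Linear(2048, 2 * hidden)
        # RoBERTa contextual embeddings, refined by a BiLSTM to capture
        # deeper sequential semantics, as the model description suggests.
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.bilstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.image_encoder(pixel_values).flatten(1)   # (B, 2048)
        img = self.image_proj(img)                          # (B, 2*hidden)
        tok = self.roberta(input_ids, attention_mask=attention_mask).last_hidden_state
        txt, _ = self.bilstm(tok)                           # (B, T, 2*hidden)
        return img, txt
```

The same RoBERTa-BiLSTM branch can be run over the aspect phrase alone to obtain the aspect-level representation that later guides the attention interaction.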
Introduction

With the rapid development and popularization of new media, social networks, and other platforms (Park, 2023), more and more users are inclined to express their opinions and emotions through multimodal data, most commonly a combination of images and text (Bie et al., 2023; Capecchi et al., 2022). Effective sentiment analysis of these massive and diverse social media data helps to better understand public emotions and opinion tendencies, thereby providing a scientific basis for government and enterprise decision-making (Banik et al., 2023; Wijayanti & Arisal, 2021).

Traditional single-modal sentiment analysis methods use only one type of information as the analysis object, which cannot meet the needs of multimodal data (Xu et al., 2019). Multimodal sentiment analysis has therefore emerged: it extracts and fuses features from the multiple modalities of information published by users, thereby analyzing and predicting their emotions more accurately (Huang, Yang et al., 2019; Xiao et al., 2021). However, existing multimodal sentiment analysis models for image-text fusion still have some shortcomings:

1. Features of different modalities are usually simply concatenated, making it difficult to effectively fuse deep multimodal emotional features.

2. The image information published by social media users is not necessarily associated with every word in the text. Existing methods do not weigh the importance of words in the text according to the specific features of the image but directly fuse the image and text features, which directly affects the final sentiment classification results.

3. Because aspect-based sentiment analysis is fine-grained sentiment analysis, many existing multimodal sentiment analysis methods lack the ability to solve such problems (Dai et al., 2021).

To address the aforementioned issues, the authors propose an image and text aspect-level multimodal sentiment analysis model based on a transformer and multimodal multilayer attention interaction (TF-MMATI). The main contributions are as follows:

1. RoBERTa is used for pretraining, and a BiLSTM is employed to fully extract deep semantic information, so as to better extract text-modality features.

2. To address the ineffective fusion of image and text information, the proposed model aggregates text and image features through a Transformer encoder to reduce the feature gap between the text and image modalities.

3. Because traditional multimodal image-text sentiment analysis models typically fuse only image and text information, with little consideration of the aspect level, the proposed model applies attention interaction mechanisms to weight the text-modality and image-modality features at the aspect level, solving the inconsistency between image data and text data. At the same time, a deep attention interaction mechanism addresses the correlation between the text and image modalities (a sketch of such an aspect-guided fusion layer follows this list).
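As referenced above, here is a minimal sketch of an aspect-guided cross-attention fusion layer of the kind the contributions describe: the aspect representation queries the text and image features so that content unrelated to the given aspect is down-weighted, and a shared Transformer encoder layer then fuses the concatenated representations before classification. The use of `nn.MultiheadAttention`, the dimensions, and the three-class output are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AspectAttentionFusion(nn.Module):
    """Illustrative aspect-guided fusion: the aspect vector attends over text
    and image features, and the attended summaries are concatenated, fused by
    a Transformer encoder layer, and classified. A sketch of the described
    mechanism, not the published implementation."""
    def __init__(self, dim=512, heads=8, classes=3):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Shared encoder layer to reduce the text/image feature gap.
        self.fuse = nn.TransformerEncoderLayer(d_model=3 * dim, nhead=heads,
                                               batch_first=True)
        self.classifier = nn.Linear(3 * dim, classes)

    def forward(self, aspect, text_feats, image_feats):
        # aspect: (B, 1, dim); text_feats: (B, T, dim); image_feats: (B, R, dim)
        txt, _ = self.text_attn(aspect, text_feats, text_feats)     # (B, 1, dim)
        img, _ = self.image_attn(aspect, image_feats, image_feats)  # (B, 1, dim)
        joint = torch.cat([txt, img, aspect], dim=-1)               # (B, 1, 3*dim)
        joint = self.fuse(joint)
        return self.classifier(joint.squeeze(1))                    # (B, classes)
```

In this sketch the attention weights over text tokens and image regions are what implement the "removal" of aspect-irrelevant content: tokens and regions that do not match the aspect query receive low weight and contribute little to the fused representation.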

Related Work

Research closely related to aspect-level sentiment analysis of image-text data mainly includes text aspect-level sentiment analysis, image sentiment analysis, and image-text sentiment analysis (Alahmary & Al-Dossari, 2023; Li et al., 2022; Mittal & Agrawal, 2022).
