Fusing Syntax and Semantics-Based Graph Convolutional Network for Aspect-Based Sentiment Analysis

Jinhui Feng, Shaohua Cai, Kuntao Li, Yifan Chen, Qianhua Cai, Hongya Zhao
Copyright: © 2023 |Pages: 15
DOI: 10.4018/IJDWM.319803

Abstract

Aspect-based sentiment analysis (ABSA) aims to classify the sentiment polarity of a given aspect in a sentence or document, which is a fine-grained task of natural language processing. Recent ABSA methods mainly focus on exploiting syntactic information, semantic information, or both. Research on cognition theory reveals that the syntax and the semantics have effects on each other. In this work, a graph convolutional network-based model that fuses syntactic and semantic information in line with this cognitive finding is proposed. To start with, a GCN is applied to extract syntactic information from the syntax dependency tree. Then, a semantic graph is constructed via a multi-head self-attention mechanism and encoded by a GCN. Furthermore, a parameter-sharing GCN is developed to capture the common information between the semantics and the syntax. Experiments conducted on three benchmark datasets (Laptop14, Restaurant14, and Twitter) validate that the proposed model achieves compelling performance compared with state-of-the-art models.

Introduction

Aspect-based sentiment analysis (ABSA), a crucial task in fine-grained sentiment analysis, aims at automatically inferring the sentiment toward an aspect within its context. Generally, the sentiment of the given aspect is classified as positive, neutral, or negative. Consider the following sentence: “I liked the atmosphere very much, but the food was not worth the price.” The sentiments of the “atmosphere” and “food” aspects are positive and negative, respectively.

So far, several state-of-the-art methods have been developed based on dual-channel graph convolutional networks (GCNs). These methods process syntactic and semantic information and obtain satisfying results on ABSA tasks. Specifically, for both syntax and semantics processing, the fundamental idea is to reduce the distance between the aspect and its contextual words. In such a manner, the sentiment information of the aspect can be captured for sentiment classification. In the syntax-based approach, the dependency between the aspect and its context is built and parsed to extract syntactic information. To handle long-term dependencies, models like ASGCN (Zhang et al., 2019) and CDT (Sun et al., 2019) exploit the GCN to establish adjacency matrices and derive syntactic relations. However, a large amount of user-generated content, such as text on Twitter, involves an informal grammatical style that resists parsing. For this reason, exploiting semantic information for ABSA has gained attention.
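The syntactic channel described above can be sketched as follows. A minimal numpy illustration, assuming a toy 5-token sentence whose dependency edges are invented for the example: the parse edges form an adjacency matrix (with self-loops), which is row-normalized and used in a single GCN layer of the form H' = ReLU(Â H W).

```python
import numpy as np

# Hypothetical dependency edges (head, dependent) for a 5-token sentence;
# the indices are assumptions for illustration, not from a real parser.
num_tokens = 5
edges = [(1, 0), (1, 3), (3, 2), (3, 4)]

# Build an undirected adjacency matrix with self-loops, as ASGCN-style
# models do over the dependency tree.
A = np.eye(num_tokens)
for head, dep in edges:
    A[head, dep] = A[dep, head] = 1.0

# Row-normalize so each token averages over its syntactic neighbours.
A_hat = A / A.sum(axis=1, keepdims=True)

# One GCN layer: H' = ReLU(A_hat @ H @ W), with random toy embeddings.
rng = np.random.default_rng(0)
H = rng.standard_normal((num_tokens, 8))  # token representations
W = rng.standard_normal((8, 8))           # layer weights
H_syntax = np.maximum(A_hat @ H @ W, 0.0)
```

Stacking such layers lets sentiment information travel along dependency edges, shortening the path from opinion words to the aspect even when they are far apart in the word sequence.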

Most widely applied methods employ attention mechanisms to model the interactions between an aspect and its context. With the application of GCNs, the attention matrix of the sentence is established and fed into a GCN for semantic feature extraction (Guo et al., 2019). The widespread use of syntactic and semantic GCNs has in turn driven advances in dual-channel GCN methods. Generally, dual-channel GCNs are carried out in two ways. The first is to separately extract syntax and semantics before concatenating the syntactic and semantic representations (Pang et al., 2021). The second is to fuse these two categories of features during information encoding (Yan et al., 2021).
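The semantic channel can be sketched in the same style: a minimal numpy example, with toy dimensions chosen for illustration, in which multi-head self-attention produces a dense, row-stochastic matrix that then serves as the adjacency for a GCN layer.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n, d, heads = 5, 8, 2                    # toy sizes (assumptions)
H = rng.standard_normal((n, d))          # token representations

# Average scaled dot-product attention over heads to obtain one
# semantic "adjacency" matrix for the sentence.
attn = np.zeros((n, n))
for _ in range(heads):
    Wq = rng.standard_normal((d, d))
    Wk = rng.standard_normal((d, d))
    scores = (H @ Wq) @ (H @ Wk).T / np.sqrt(d)
    attn += softmax(scores, axis=-1)
attn /= heads

# Unlike the sparse dependency graph, this adjacency is dense: every
# token attends to every other. One GCN layer over it:
W = rng.standard_normal((d, d))
H_semantic = np.maximum(attn @ H @ W, 0.0)
```

Because the attention matrix is learned rather than parsed, this channel remains usable on informally written text where a dependency parser is unreliable.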

As with many facets of natural language processing (NLP), a major challenge lies in teaching a computer to handle data that is distinctly human (Brooke, 2009). As such, the first step in devising an ABSA method is to establish the information flow, which directs the sentiment delivery from opinion words to the aspect. According to Pylkkänen (2020), syntactic effects occur earlier than semantic effects during natural language comprehension. Concretely, as measured by magnetoencephalography, the posterior middle/superior temporal gyrus (pM/STG), which processes syntactic information, activates before the left anterior temporal lobe (LATL) and the ventromedial prefrontal cortex (vmPFC), which handle semantics (see Figure 1). Despite this order of precedence, syntactic effects and semantic effects are difficult to distinguish from each other (Pylkkänen, 2019), as indicated by the intersection of pM/STG and LATL in Figure 1.

Figure 1.

MEG Results on Processing Stages of Language Comprehension


In terms of recent ABSA approaches, two limitations can be observed. First, semantics and syntax are generally processed in two separate channels, without regard for the order in which they take effect. Second, in most cases, syntactic changes vary the meaning of the expression; the interaction between syntactic and semantic effects, referred to here as the common information, remains neglected in ABSA tasks.

A fusing syntax and semantics-based graph convolutional network (FSSGCN) is developed for ABSA to mitigate the deficiencies of current ABSA methods. In the proposed model, the syntax structure of the sentence is resolved first. Then, the semantic information is captured and fused with the syntactic information to enhance the sentiment delivery. Further, a common information module is built using a GCN; it collects information from both syntax and semantics to facilitate sentiment classification. This work contains three main contributions as follows:
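The common-information idea can be sketched as follows. This is a minimal numpy illustration of one plausible reading of a parameter-sharing GCN, not the authors' exact formulation: the same weight matrix encodes both the syntactic and the semantic graph, so whatever the two views agree on is captured by shared parameters, and the two outputs are then merged.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 8                               # toy sizes (assumptions)
H = rng.standard_normal((n, d))           # token representations

# Toy stand-ins for the two graphs: a sparse, row-normalized syntactic
# adjacency (from the dependency tree) and a dense semantic one (from
# self-attention); both are invented for illustration.
A_syn = np.eye(n)
A_syn[0, 1] = A_syn[1, 0] = 1.0
A_syn = A_syn / A_syn.sum(axis=1, keepdims=True)
A_sem = np.full((n, n), 1.0 / n)

# Parameter sharing: one weight matrix W_shared serves both channels,
# so the common information flows through the same parameters.
W_shared = rng.standard_normal((d, d))
H_syn = np.maximum(A_syn @ H @ W_shared, 0.0)
H_sem = np.maximum(A_sem @ H @ W_shared, 0.0)

# One simple way to merge the two views into a common representation.
H_common = (H_syn + H_sem) / 2.0
```

The design choice here is that sharing W_shared (rather than learning one weight matrix per channel) forces the model to find features useful under both graph structures, which is one way to operationalize the overlap between syntactic and semantic processing noted in the cognitive findings above.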
