A GCN- and Deep Biaffine Attention-Based Classification Model for Course Review Sentiment

Jiajia Jiao, Bo Chen
DOI: 10.4018/IJITSA.323568

Abstract

In recent years, the increasing use of online surveys for course evaluation in schools has produced a flood of evaluation texts. These texts, with their emotional polarity, give schools the most direct feedback, so sentiment analysis of course evaluations has great practical value. However, the loose grammar and rich content of Chinese course evaluations pose a challenge for sentiment analysis. To address this problem, this paper proposes a sentiment classification model, BiLSTM-GCN-Att (BGAN). First, BiLSTM extracts the features of the text and outputs hidden state vectors. Then, a deep biaffine attention mechanism analyzes the dependencies in the text and generates a dependency matrix. Next, the hidden state vectors are fed into a GCN. Finally, a softmax function serves as the output layer of the model to perform sentiment classification. Experimental results show that BGAN achieves improvements of up to 11.02% in precision and 14.47% in F1-score over classical models.

In recent years, great progress has been made in aspect-level sentiment analysis. Existing approaches can be divided into methods based on sentiment dictionaries (Du et al., 2022), methods based on traditional machine learning (Gang et al., 2017), and methods based on deep learning (Wang et al., 2022).

A lexicon-based method matches the words in a text against the entries of a sentiment lexicon, computes a weighted score, and assigns the text to a sentiment category based on that score. For English, well-known lexicons include AFINN and SentiWordNet; for Chinese, there are cnsenti, the sentiment lexicon published by Dalian University of Technology, HowNet, and NTUSD published by National Taiwan University. This traditional approach depends on the coverage of the lexicon, so it adapts poorly to the rapidly evolving language of students (Sivakumar et al., 2017). Recent work therefore combines sentiment dictionaries with methods such as deep learning to improve performance (Madani et al., 2020; Yang et al., 2020).
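The lexicon-based matching and weighted scoring described above can be sketched in a few lines. The tiny lexicon, weights, and negation rule below are illustrative placeholders, not entries from AFINN, HowNet, or any real lexicon:

```python
# Toy sentiment lexicon: word -> sentiment weight (invented for illustration).
LEXICON = {"excellent": 2.0, "good": 1.0, "boring": -1.0, "terrible": -2.0}
NEGATORS = {"not", "never"}

def lexicon_score(tokens):
    """Sum the weights of matched sentiment words, flipping sign after a negator."""
    score, flip = 0.0, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            flip = -1.0        # next sentiment word is negated
            continue
        if tok in LEXICON:
            score += flip * LEXICON[tok]
            flip = 1.0         # negation consumed
    return score

def classify(tokens):
    s = lexicon_score(tokens)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

label = classify("the course was not good".split())  # -> "negative"
```

As the surrounding text notes, any word outside the lexicon contributes nothing, which is exactly why coverage limits this approach.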

Traditional machine learning-based sentiment analysis methods use statistical learning algorithms for sentiment determination, mainly K-Nearest Neighbors (KNN), naive Bayes, support vector machines (SVM) (Lin et al., 2019), and related methods.
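Of the methods listed above, naive Bayes is the simplest to sketch from scratch. The following minimal bag-of-words naive Bayes classifier, with a toy course-review training set invented for illustration, shows the general shape of these statistical approaches:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns log-priors, log-likelihoods, vocab."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Laplace (add-one) smoothing so unseen words get nonzero probability.
        likelihoods[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                          for w in vocab}
    return priors, likelihoods, vocab

def predict_nb(tokens, priors, likelihoods, vocab):
    scores = {}
    for c in priors:
        s = priors[c]
        for t in tokens:
            if t in vocab:
                s += likelihoods[c][t]
        scores[c] = s
    return max(scores, key=scores.get)

# Toy course-review training set (invented for illustration).
docs = [(["great", "course"], "pos"),
        (["clear", "lectures", "great"], "pos"),
        (["boring", "course"], "neg"),
        (["confusing", "and", "boring"], "neg")]
model = train_nb(docs)
label = predict_nb(["great", "lectures"], *model)  # -> "pos"
```

Unlike the lexicon approach, the word weights here are learned from labeled data, but the model still treats each word independently of its context.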

Deep learning-based sentiment analysis methods are now increasingly used. They employ deep network models such as CNNs (Liang et al., 2022) and RNNs for feature extraction, often combined with Long Short-Term Memory (LSTM) networks and attention mechanisms to better capture both the global and local information of a text (Zhang et al., 2022; Kim et al., 2014). Convolutional neural networks applied to pre-trained word vectors perform sentence-level text classification with improved accuracy over traditional machine learning methods (Wu et al., 2022). Song (2019) proposed an attention network to model the relationship between context and target entities. Zhang et al. (2019) proposed using graph convolutional networks to learn feature representations from syntactic dependencies and to fuse them with other types of features for aspect-level sentiment analysis. Liao et al. (2021) used RoBERTa, a deep bidirectional Transformer model, for sentiment analysis.

Since text sentiment analysis is conducted in different contextual domains, and text features can deviate significantly between them, it is usually carried out within a specific domain. Wu et al. (2022) used a capsule network for microblog sentiment analysis to improve the efficiency of public opinion monitoring. Fu et al. (2022) studied emotional ambiguity in ancient poetry and found that short texts are a challenge for sentiment analysis. Zhang et al. (2022) constructed a sentiment analysis dataset for service venues such as electric power business halls and used sentiment analysis to help those venues improve service quality and user experience in a more targeted manner.

Currently, sentiment analysis of Chinese teaching evaluations is underexplored, and many models designed for English corpora cannot be applied directly to Chinese. In addition, Chinese teaching-evaluation texts are characterized by short length, rich content, loose grammar, and unclear references, and the language habits of the student group give the text data features distinct from those of other domains. This paper proposes an aspect-oriented sentiment classification model, BGAN. It uses BiLSTM to capture contextual information in both the forward and backward directions and outputs hidden vectors; applies a deep biaffine attention mechanism to analyze text dependencies, scoring candidate dependency arcs to build a dependency tree; feeds the hidden vectors into a GCN with a non-aspect-word masking layer to obtain aspect features; and finally passes the aspect features to a softmax function for sentiment classification.
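The dataflow of this pipeline can be sketched numerically. The NumPy snippet below is only a shape-level illustration under strong simplifying assumptions: random hidden states stand in for trained BiLSTM outputs, all weights are randomly initialized, the biaffine scores are softmax-normalized into a soft dependency matrix rather than decoded into a tree, and the aspect span is hypothetical. It shows the wiring (biaffine arc scoring, one GCN layer over the dependency matrix, non-aspect masking, softmax output), not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8  # number of tokens, hidden size

# Stand-in for BiLSTM outputs: one hidden state vector per token.
H = rng.standard_normal((n, d))

# Deep biaffine attention: score every (dependent, head) arc pair.
U = rng.standard_normal((d, d))   # bilinear term
u = rng.standard_normal(d)        # head-bias term
scores = H @ U @ H.T + H @ u      # (n, n) arc-score matrix
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)   # soft dependency matrix, rows sum to 1

# One GCN layer over the dependency matrix: aggregate neighbor features.
A_hat = A + np.eye(n)                  # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
W = rng.standard_normal((d, d))
H_gcn = np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # normalize, transform, ReLU

# Mask non-aspect tokens, mean-pool the aspect span, classify with softmax.
aspect_mask = np.array([0, 1, 1, 0, 0], dtype=float)  # hypothetical aspect span
pooled = (H_gcn * aspect_mask[:, None]).sum(axis=0) / aspect_mask.sum()
W_out = rng.standard_normal((d, 3))    # 3 sentiment classes
logits = pooled @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # sentiment probability distribution
```

In the actual model each of these weight matrices is learned end to end, and the dependency structure guides which neighboring tokens contribute to each aspect feature.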
