Hierarchical Hybrid Neural Networks With Multi-Head Attention for Document Classification

Weihao Huang, Jiaojiao Chen, Qianhua Cai, Xuejie Liu, Yudong Zhang, Xiaohui Hu
Copyright: © 2022 | Pages: 16
DOI: 10.4018/IJDWM.303673

Abstract

Document classification is a research topic that aims to predict the overall sentiment polarity of a text and has advanced with the emergence of deep neural networks. Various deep learning algorithms have been employed in current studies to improve classification performance. To this end, this paper proposes a hierarchical hybrid neural network with multi-head attention (HHNN-MHA) model for the task of document classification. The proposed model contains two layers that deal with word-to-sentence and sentence-to-document level processing, respectively. In the first layer, a CNN is integrated with a Bi-GRU and a multi-head attention mechanism is employed in order to exploit both local and global features. In the second layer, a Bi-GRU and an attention mechanism are then applied to document-level processing and classification. Experiments on four datasets demonstrate the effectiveness of the proposed method. Compared with state-of-the-art methods, our model achieves competitive results in document classification.
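
The abstract describes the architecture only at this level of detail; the following is a minimal PyTorch sketch of how such a two-level model might be wired. The layer sizes, the element-wise fusion of the CNN and Bi-GRU outputs, and the pooling choices are illustrative assumptions, not the authors' published configuration.

# Minimal sketch of a two-level hierarchical model in the spirit of HHNN-MHA.
# Dimensions and the CNN/Bi-GRU fusion are assumptions for illustration.
import torch
import torch.nn as nn

class WordEncoder(nn.Module):
    """Word-to-sentence level: CNN + Bi-GRU features with multi-head attention."""
    def __init__(self, vocab_size, emb_dim=200, hid=100, heads=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, 2 * hid, kernel_size=3, padding=1)      # local n-gram features
        self.gru = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)  # global context
        self.attn = nn.MultiheadAttention(2 * hid, heads, batch_first=True)

    def forward(self, words):                                   # words: (sentences, words)
        x = self.emb(words)                                     # (S, W, emb_dim)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)    # (S, W, 2*hid)
        ctx, _ = self.gru(x)                                    # (S, W, 2*hid)
        h = local + ctx                                         # assumed fusion: element-wise sum
        h, _ = self.attn(h, h, h)                               # multi-head self-attention over words
        return h.mean(dim=1)                                    # one vector per sentence

class SentenceEncoder(nn.Module):
    """Sentence-to-document level: Bi-GRU with additive attention pooling."""
    def __init__(self, hid=100):
        super().__init__()
        self.gru = nn.GRU(2 * hid, hid, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hid, 1)

    def forward(self, sents):                                   # sents: (docs, sentences, 2*hid)
        h, _ = self.gru(sents)                                   # (D, S, 2*hid)
        w = torch.softmax(self.score(h), dim=1)                  # attention weights over sentences
        return (w * h).sum(dim=1)                                # document vector

class HHNNMHA(nn.Module):
    def __init__(self, vocab_size, num_classes, hid=100):
        super().__init__()
        self.words = WordEncoder(vocab_size, hid=hid)
        self.sents = SentenceEncoder(hid=hid)
        self.cls = nn.Linear(2 * hid, num_classes)

    def forward(self, docs):                                    # docs: (D, S, W) word ids
        d, s, w = docs.shape
        sent_vecs = self.words(docs.view(d * s, w)).view(d, s, -1)
        return self.cls(self.sents(sent_vecs))

In this sketch the word-level encoder produces one vector per sentence, and the sentence-level encoder pools those vectors into a single document representation before classification.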

Introduction

There has been growing research interest in improving textual information processing over the past decade (Ali et al., 2017). Accompanying the evolution of computer technology, the volume of text data available online has grown rapidly in recent years. As an important branch of natural language processing (NLP), document classification aims to determine the sentiment polarity of a document and plays a pivotal role in a variety of tasks, including spam email filtering (Liu & Wang, 2010), topic extraction (Sarioglu, 2014), sentiment analysis (Hu et al., 2015), and social public opinion mining (Guan et al., 2009). In most cases, the documents under discussion are rated with scores or stars representing the corresponding sentiment, where a higher score generally indicates a more positive sentiment. With an accurate comprehension of the sentiment results and a deep understanding of the given document, the performance of document classification can be improved accordingly.

Advances in deep learning algorithms have given rise to new opportunities to significantly improve the efficacy of NLP tasks. State-of-the-art document classification approaches are typically dominated by two distinct neural network architectures: the convolutional neural network (CNN) and the recurrent neural network (RNN). Recent publications report the superiority of the RNN in dealing with sequential inputs of varying lengths: RNN models not only capture long-term dependencies (Habimana et al., 2020) but also the semantics within contextual information (Du et al., 2019). More specifically, the two most well-known RNNs, namely long short-term memory (LSTM) and the gated recurrent unit (GRU), are employed as key modules for tackling such issues in a wide range of NLP methods (Li et al., 2019). The CNN, on the other hand, is more effective than the RNN at extracting sentiment-related features from word sequences (Du et al., 2019). The main reason is that the CNN can make full use of the textual data to extract feature vectors with a minimal number of parameters; in this manner, the local importance of salient parts is captured (Zhao et al., 2021). A brief sketch of this division of labour is given below.
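
The sketch below (with assumed dimensions, not taken from the paper) shows a 1-D convolution producing one feature vector per three-word window, while a bidirectional GRU propagates a hidden state across the entire sequence:

# Illustrative contrast (assumed dimensions): Conv1d captures local n-gram
# features; a Bi-GRU carries context along the whole word sequence.
import torch
import torch.nn as nn

emb = torch.randn(1, 50, 300)                                   # (batch, words, embedding_dim)

conv = nn.Conv1d(300, 128, kernel_size=3, padding=1)
local_feats = conv(emb.transpose(1, 2)).transpose(1, 2)         # (1, 50, 128): one feature per 3-word window

gru = nn.GRU(300, 128, bidirectional=True, batch_first=True)
context, _ = gru(emb)                                           # (1, 50, 256): each position sees the whole sequence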

In order to improve document classification accuracy, the loss of sentiment information across long distances in a text needs to be considered comprehensively. To address this issue, the attention mechanism is employed, which supplements and enhances the delivery of long-distance sentiment information in both CNN and RNN models. More specifically, the attention mechanism determines attentive weights for different words and uses them to exploit the hidden states and compute the class distributions (Liang et al., 2017). In this way, models that integrate the attention mechanism into an RNN or CNN are able to model textual data regardless of distance. Even when constrained by the architecture of the underlying deep learning algorithm, the attention mechanism remains distinctive in its ability to precisely capture the sentiment of a specific part of the document. In fact, an attention model proposed by Google, namely the Transformer, is built solely on attention mechanisms, without recurrent or convolutional structures (Vaswani et al., 2017). This work allows multiple attention heads to run in parallel, outperforms all previously reported ensembles, and lays the foundation of the multi-head attention network.
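
For reference, the scaled dot-product attention at the core of the Transformer (Vaswani et al., 2017), from which multi-head attention is built, can be sketched as follows; the tensor shapes here are assumed purely for illustration:

# Scaled dot-product attention (Vaswani et al., 2017): each value is weighted
# by the softmax-normalised similarity between its key and the query.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k); a single attention head
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))    # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)                     # attentive weights per word
    return weights @ v                                          # (batch, seq, d_k)

# Multi-head attention runs several such heads in parallel on learned
# projections of the inputs and concatenates the results.
mha = torch.nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
x = torch.randn(2, 40, 256)                                     # (batch, words, model_dim)
out, attn = mha(x, x, x)                                        # self-attention over the sequence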
