HBert: A Long Text Processing Method Based on BERT and Hierarchical Attention Mechanisms

Xueqiang Lv, Zhaonan Liu, Ying Zhao, Ge Xu, Xindong You
Copyright: © 2023 | Pages: 14
DOI: 10.4018/IJSWIS.322769

Abstract

With the emergence of large-scale pre-trained models based on the Transformer, performance on nearly all natural language processing tasks has been pushed to a new level. However, due to the high complexity of the Transformer's self-attention mechanism, these models handle long text poorly. To address this problem, a long text processing method named HBert, based on BERT and a hierarchical attention neural network, is proposed. First, the long text is split into multiple sentences, whose vectors are obtained through a word encoder composed of BERT and a word attention layer. The article vector is then obtained through a sentence encoder composed of a Transformer and sentence attention, and this article vector is used to complete the downstream tasks. The experimental results show that the proposed HBert method achieves good results in text classification and question answering (QA) tasks: the F1 value is 95.7% on longer text classification and 75.2% on QA, both better than the state-of-the-art model Longformer.
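As a rough illustration of the hierarchical structure described in the abstract, the sketch below implements a word encoder (BERT plus word-level attention pooling) feeding a sentence encoder (Transformer plus sentence-level attention pooling). This is a minimal PyTorch sketch under assumed choices (hidden size 768, additive attention pooling, two sentence-encoder layers), not the authors' released implementation.

```python
# Minimal sketch of a BERT-based hierarchical attention encoder.
# Hyperparameters and the pooling formulation are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel


class AttentionPool(nn.Module):
    """Additive attention that pools a sequence of vectors into one vector."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)
        self.query = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, x, mask=None):            # x: (batch, seq, hidden)
        scores = self.query(torch.tanh(self.proj(x))).squeeze(-1)
        if mask is not None:
            scores = scores.masked_fill(~mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return torch.einsum("bs,bsh->bh", weights, x)


class HierarchicalEncoder(nn.Module):
    def __init__(self, hidden_size: int = 768, num_layers: int = 2):
        super().__init__()
        self.word_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.word_attention = AttentionPool(hidden_size)
        layer = nn.TransformerEncoderLayer(d_model=hidden_size, nhead=8,
                                           batch_first=True)
        self.sentence_encoder = nn.TransformerEncoder(layer, num_layers)
        self.sentence_attention = AttentionPool(hidden_size)

    def forward(self, input_ids, attention_mask):
        # input_ids, attention_mask: (batch, n_sentences, n_tokens)
        b, s, t = input_ids.shape
        flat_ids = input_ids.view(b * s, t)
        flat_mask = attention_mask.view(b * s, t)
        token_states = self.word_encoder(
            flat_ids, attention_mask=flat_mask).last_hidden_state
        # Word attention pools token vectors into one vector per sentence.
        sent_vecs = self.word_attention(token_states, flat_mask.bool())
        sent_vecs = sent_vecs.view(b, s, -1)
        # Sentence encoder plus sentence attention yield the article vector.
        sent_states = self.sentence_encoder(sent_vecs)
        return self.sentence_attention(sent_states)   # (batch, hidden_size)
```

The returned article vector would then feed a task-specific head (e.g., a classifier), as the abstract describes.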
Article Preview

Introduction

The Transformer (Vaswani et al., 2017) model achieves excellent results in many natural language processing tasks, including classification and text generation, and has given rise to many Transformer-based large-scale pre-trained models such as BERT (Devlin et al., 2018). The Transformer computes the attention between every pair of tokens in a sentence through self-attention, obtaining the semantic information of each word and the semantic relations between words, and uses positional encoding to capture the position of each word in the sentence. These two mechanisms allow it to capture the whole context of a sequence, which underlies the Transformer's success. However, the self-attention mechanism gives the Transformer a time complexity of O(n²·d) (where n is the sequence length and d is the dimension of the hidden layer), which limits the length of text that can be processed. In theory the Transformer can accept text of arbitrary length, but because of this complexity it cannot handle excessively long text in practice; the maximum length that can be processed depends on the situation and is generally bounded by the available hardware. Transformer-based models such as BERT therefore also face a limited input length. Although BERT caps the maximum input sequence at 512 tokens, this does not mean it can handle sentences of 512 words: BERT's tokenizer splits an input word into multiple subwords and adds special tokens such as [CLS] and [SEP], so the text BERT can actually handle is well under 512 words. Since the number of subwords per word varies, BERT has no fixed maximum input length in words, only the certainty that it is less than 512. However, many texts, such as press releases and patent documents, are much longer than 512 words. Such texts cannot be processed directly by the BERT model, which limits its use in long text processing.
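To see why 512 tokens corresponds to fewer than 512 words, the snippet below uses the Hugging Face BERT tokenizer (an implementation choice assumed here; the paper does not prescribe one) to show the subword splitting and the added [CLS]/[SEP] tokens. The sample sentence is arbitrary.

```python
# Illustration: WordPiece splitting plus special tokens shrink the word
# budget below the 512-token limit. Exact splits depend on the vocabulary.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

text = "Hierarchical transformers summarize lengthy patent documentation"
tokens = tokenizer.tokenize(text)
print(len(text.split()), "words ->", len(tokens), "subword tokens")
# Rare words are split into several '##'-prefixed WordPiece units.

encoded = tokenizer(text)                 # adds [CLS] and [SEP] automatically
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# The sequence starts with '[CLS]' and ends with '[SEP]'.
```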

Generally, BERT handles long text with four types of methods. The first is truncation, in which a fixed-length span at the beginning or end of the text is kept and treated as the original text; only that small section is retained, the rest is discarded, and a great deal of textual information is lost. The second is segmentation, in which the long text is cut into multiple short texts of fixed length, each part is encoded by the model to obtain a vector, and the resulting vectors are combined to form the text vector (a sketch of this strategy follows this paragraph). The third is compression, in which the long text is split into multiple short texts and the meaningless passages are selected and removed using rules or separately trained filtering or scoring models; the effectiveness of compression is severely limited by the quality of the filtering method. The fourth changes the structure of the model itself: since the high complexity of the Transformer lies mainly in the self-attention mechanism, this complexity is reduced by restricting the scope and the way information is captured, thereby improving the model's ability to process long text.
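The following is a minimal sketch of the segmentation strategy just described, not the method proposed in this paper: the long text is cut into fixed-size chunks, each chunk is encoded with BERT, and the chunk vectors are combined. The 510-token window and mean pooling are illustrative assumptions.

```python
# Hedged sketch of the chunk-encode-combine (segmentation) baseline.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")


def encode_long_text(text: str, window: int = 510) -> torch.Tensor:
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunk_vectors = []
    for start in range(0, len(ids), window):
        chunk = ids[start:start + window]
        # Each chunk gets its own [CLS] and [SEP] so it fits BERT's format.
        chunk = [tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]
        input_ids = torch.tensor([chunk])
        with torch.no_grad():
            out = model(input_ids)
        chunk_vectors.append(out.last_hidden_state[:, 0])   # [CLS] vector
    # Combine chunk vectors; mean pooling is one simple choice, and
    # concatenation (as described above) is another.
    return torch.cat(chunk_vectors).mean(dim=0)
```

Because each chunk is encoded independently, no attention crosses chunk boundaries, which is one of the drawbacks the hierarchical approach in this paper aims to mitigate.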

To address the drawbacks of the existing methods mentioned above, the HBert method is proposed in this paper to improve the ability of the BERT model to process long text. In summary, the main contributions of this article are as follows:
