Chinese Named Entity Recognition Method Combining ALBERT and a Local Adversarial Training and Adding Attention Mechanism


Zhang Runmei, Li Lulu, Yin Lei, Liu Jingjing, Xu Weiyi, Cao Weiwei, Chen Zhong
Copyright © 2022 | Pages: 20
DOI: 10.4018/IJSWIS.313946

Abstract

For Chinese NER tasks, very little annotated data is available. To enlarge the dataset, improve the accuracy of Chinese NER, and improve the model's stability, the authors propose a method that adds local adversarial training to a transfer learning model and integrates an attention mechanism. The model uses ALBERT for transfer pre-training and adds perturbation factors to the output matrix of the embedding layer to constitute local adversarial training. A BiLSTM encodes the shared and private features of the task, and an attention mechanism is introduced to capture the characters most relevant to the entities. Finally, the best entity annotation is obtained by a CRF. Experiments are conducted on the People's Daily 2004 dataset and the Tsinghua University open-source text classification dataset. The results show that, compared with the SOTA model, the F1 values of the proposed method are improved by 7.32 and 7.98 on the two datasets, respectively, demonstrating that the method improves accuracy in the Chinese domain.
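To make this pipeline concrete, the following is a minimal PyTorch sketch, assuming the HuggingFace transformers package, an illustrative Chinese ALBERT checkpoint ("voidful/albert_chinese_base"), and example hidden size, tag count, and epsilon values that are not the authors' settings. The perturbation is realised as an FGM-style step, one common way to implement local adversarial training on the embedding-layer output; a CRF layer (e.g., from the pytorch-crf package) would decode the best tag sequence, while plain cross-entropy stands in for its loss here for brevity.

import torch
import torch.nn as nn
from transformers import AutoModel

class AlbertBiLstmAttnNer(nn.Module):
    def __init__(self, checkpoint="voidful/albert_chinese_base", hidden=256, num_tags=7):
        super().__init__()
        self.albert = AutoModel.from_pretrained(checkpoint)       # transfer pre-training
        dim = self.albert.config.hidden_size
        self.bilstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                      # token-level attention scores
        self.emission = nn.Linear(2 * hidden, num_tags)           # per-character tag scores for a CRF

    def forward(self, input_ids, attention_mask, perturbation=None):
        emb = self.albert(input_ids, attention_mask=attention_mask).last_hidden_state
        if perturbation is not None:                              # local adversarial training:
            emb = emb + perturbation                              # perturb only the embedding-layer output
        enc, _ = self.bilstm(emb)                                 # encode shared and private features
        weights = torch.softmax(self.attn(enc), dim=1)            # focus on entity-relevant characters
        return self.emission(enc * weights), emb                  # emb is returned so its gradient can be taken

def adversarial_step(model, input_ids, attention_mask, tags, epsilon=1.0):
    """One FGM-style training step: perturb the embedding output along the
    normalised gradient of the clean loss, then train on both passes."""
    emissions, emb = model(input_ids, attention_mask)
    loss = nn.functional.cross_entropy(emissions.flatten(0, 1), tags.flatten())
    grad = torch.autograd.grad(loss, emb, retain_graph=True)[0]
    r_adv = epsilon * grad / (grad.norm() + 1e-12)                # normalised perturbation factor
    adv_emissions, _ = model(input_ids, attention_mask, perturbation=r_adv.detach())
    adv_loss = nn.functional.cross_entropy(adv_emissions.flatten(0, 1), tags.flatten())
    return loss + adv_loss                                        # combined loss to backpropagate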

Introduction

Natural language processing (NLP) has become a hot research topic in artificial intelligence and deep learning. The current challenge is how to combine advanced natural language processing with machine learning models so that machines can understand what they have learned, represent it as knowledge, and establish the relevant connections (Liu et al., 2022; Mandle et al., 2022).

Named entity recognition (NER) is a fundamental task in NLP. It involves extracting words or phrases from unstructured text that refer to concrete or abstract real-world entities, such as names of people, places, and organizations, and organizing them into semistructured or structured information; other techniques are then used to analyze and understand the text (Isozaki & Kazawa, 2002; Li et al., 2019). In recent years, with advances in deep learning, such methods have been able to learn feature representations automatically from massive data, reducing the reliance on handcrafted rules and expert knowledge to a certain extent.
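As a toy illustration (not taken from the paper), character-level BIO labels turn an unstructured Chinese sentence into the structured entity spans an NER system outputs; the sentence, labels, and helper function below are hypothetical examples.

sentence = list("张三在北京大学工作")           # "Zhang San works at Peking University"
labels = ["B-PER", "I-PER", "O",               # 张 三 在
          "B-ORG", "I-ORG", "I-ORG", "I-ORG",  # 北 京 大 学
          "O", "O"]                            # 工 作

def extract_entities(chars, tags):
    """Collect (entity_text, entity_type) spans from character-level BIO tags."""
    entities, current, etype = [], [], None
    for ch, tag in zip(chars, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(("".join(current), etype))
            current, etype = [ch], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(ch)
        else:
            if current:
                entities.append(("".join(current), etype))
            current, etype = [], None
    if current:
        entities.append(("".join(current), etype))
    return entities

print(extract_entities(sentence, labels))      # [('张三', 'PER'), ('北京大学', 'ORG')]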

Figure 1. Chinese named entity identification flow chart

The flow of the traditional named entity recognition method is shown in Figure 1. Although deep learning methods have made significant progress on Chinese NER tasks, building NER models usually still requires a large amount of labeled data. Model performance scales with the amount of labeled data and is therefore poor in specific domains where the training corpus is scarce. Transfer learning aims to improve learning performance on the target task by exploiting a large amount of labeled data from the source domain together with pretrained models. With its low data and labeling requirements and its relaxed independent-and-identically-distributed constraints, it has become a powerful tool for resource-poor NER (Gong et al., 2018; Wang et al., 2017).

From the early word vectors obtained by co-occurrence matrices and singular value decomposition (SVD), neural network models based on different structures have been developed, and word vector models are now widely used in information extraction. Mikolov et al. (2013) proposed the Word2Vec model to generate word vectors, and Xu et al. (2019) proposed a Word2Vec-based word vector representation for trigger-word classification tasks. However, the obvious drawback of Word2Vec is that it is "one-word-one-sense": the meaning expressed by a word vector is fixed regardless of context. The ALBERT model is a lightweight network developed by Lan et al. (2020) on the basis of the Bidirectional Encoder Representations from Transformers (BERT; Agrawal et al., 2022) pretrained language model. This work therefore uses ALBERT to obtain context-dependent word vectors. ALBERT carries richer semantic information than the traditional Word2Vec model and can generate more appropriate feature representations for different NLP tasks, and it is smaller and more refined than the BERT model while maintaining performance (Bikel et al., 1998).
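A small sketch of this difference, assuming the HuggingFace transformers package, the same illustrative checkpoint as above, and a BERT-style tokenizer for it (all assumptions, not statements about the authors' setup): the vector ALBERT assigns to a character changes with its sentence, whereas a Word2Vec vector would be identical in both contexts.

import torch
from transformers import AutoModel, BertTokenizerFast

name = "voidful/albert_chinese_base"                 # illustrative checkpoint with a BERT-style vocabulary
tokenizer = BertTokenizerFast.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def char_vector(sentence, char):
    """Contextual vector of the first occurrence of `char` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (sequence_length, hidden_size)
    return hidden[enc.tokens().index(char)]

v_bank = char_vector("我在银行存钱", "行")            # "行" as part of "bank"
v_bike = char_vector("自行车很便宜", "行")            # "行" as part of "bicycle"
print(torch.cosine_similarity(v_bank, v_bike, dim=0))  # noticeably below 1.0 for a contextual model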
