Introduction
Natural language processing (NLP) has become a prominent research area in artificial intelligence and deep learning. The current challenge is how to combine advanced natural language processing and machine learning models so that machines can understand learned knowledge, express it in a structured form, and establish the relevant connections among its elements (Liu et al., 2022; Mandle et al., 2022).
Named entity recognition (NER) is a fundamental task in NLP. It extracts words or phrases from unstructured text that denote concrete or abstract real-world entities, such as names of people, places, and organizations, and organizes them into semistructured or structured information. Other techniques then build on this output to analyze and understand the text (Isozaki & Kazawa, 2002; Li et al., 2019). In recent years, with advances in deep learning, such methods have been able to learn feature representations automatically from massive data, reducing the reliance on handcrafted rules and expert knowledge to a certain extent.
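As a concrete illustration (not taken from this paper), NER output is commonly encoded with BIO tags, one tag per character in Chinese text, which is the kind of semistructured representation described above. The sentence, entity spans, and labels below are hypothetical examples:

```python
# Toy illustration: converting character-level entity spans into BIO tags,
# the semistructured output typical of Chinese NER pipelines.
def spans_to_bio(text, spans):
    """spans: list of (start, end, label) with end exclusive."""
    tags = ["O"] * len(text)  # "O" marks characters outside any entity
    for start, end, label in spans:
        tags[start] = "B-" + label          # B- marks the entity's first character
        for i in range(start + 1, end):
            tags[i] = "I-" + label          # I- marks the remaining characters
    return tags

sentence = "张伟在北京工作"  # "Zhang Wei works in Beijing" (illustrative)
spans = [(0, 2, "PER"), (3, 5, "LOC")]  # hypothetical gold annotations
print(spans_to_bio(sentence, spans))
# ['B-PER', 'I-PER', 'O', 'B-LOC', 'I-LOC', 'O', 'O']
```

A downstream model is then trained to predict one such tag per character, from which entity spans can be recovered.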
Figure 1.
Chinese named entity recognition flow chart
The flow of the traditional named entity recognition method is shown in Figure 1. Although deep learning methods have made significant progress on Chinese NER tasks, building NER models usually still requires a large amount of labeled data. Model performance scales with the amount of labeled data, so it is poor in specific domains where the training corpus is scarce. Transfer learning aims to improve learning performance on the target task by exploiting a large amount of labeled data in the source domain together with pretrained models. It has become a powerful tool for resource-poor NER because it reduces the dependence on data and labels and relaxes the constraint that training and test data be independent and identically distributed (Gong et al., 2018; Wang et al., 2017).
From the early word vectors obtained by cooccurrence-matrix and singular value decomposition (SVD) methods, neural network models based on different structures have been developed, and word vector models are now widely used in information extraction. Mikolov et al. (2013) proposed the Word2Vec model to generate word vectors. Xu et al. (2019) proposed a word vector representation based on the Word2Vec technique for trigger-word classification tasks. However, the obvious drawback of Word2Vec is that it is "one word, one sense": each word receives a single, context-independent vector. The ALBERT model is a lightweight network developed by Lan et al. (2020) based on the bidirectional encoder representations from transformers (BERT; Agrawal et al., 2022) pretrained language model. This study therefore uses ALBERT to obtain context-dependent word vectors. ALBERT carries richer semantic information than the traditional Word2Vec model and can generate more appropriate feature representations for different NLP tasks. Thus, performance improves while the model remains smaller and lighter than BERT (Bikel et al., 1998).
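The classic count-based approach mentioned at the start of this paragraph can be sketched as follows: build a word cooccurrence matrix over a corpus, then factor it with SVD and keep the top singular directions as dense, static ("one word, one sense") word vectors. The tiny corpus and window size below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Toy sketch of count-based word vectors: cooccurrence matrix + truncated SVD.
corpus = [["i", "like", "nlp"], ["i", "like", "deep", "learning"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

window = 1  # count neighbors within +/- 1 position
M = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                M[idx[w], idx[sent[j]]] += 1

# Truncated SVD: keep the top-k left singular vectors, scaled by the
# singular values, as one static embedding per word.
U, S, Vt = np.linalg.svd(M)
k = 2
embeddings = U[:, :k] * S[:k]
print(embeddings.shape)  # (vocabulary size, k)
```

Contextual models such as ALBERT instead produce a different vector for the same word in different sentences, which is what motivates their use here.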