A Bibliometric Analysis of Automated Writing Evaluation in Education Using VOSviewer and CitNetExplorer from 2008 to 2022

Xinjie Deng
Copyright © 2022 | Pages: 22
DOI: 10.4018/IJTEE.305807

Abstract

As technology develops by leaps and bounds, automated writing evaluation (AWE) has attracted increasing attention worldwide. This study provides an overview of the research literature on AWE in education through bibliometric analysis. Data from 815 studies published between 2008 and 2022 were analyzed using the VOSviewer and CitNetExplorer software. The results showed that the most cited countries were the USA, China, and Canada; the most cited organizations were Georgia State University, the University of Delaware, and Educational Testing Service; the most co-cited authors were Graham, Crossley, and Attali; and the most frequently used keywords were feedback, automated essay scoring, and natural language processing. The analysis also indicated that previous studies revolved around three themes: the design and development of AWE technology, the application of AWE in education, and the effect of AWE on linguistic features. This study provides a reference point for researchers entering the AWE field and for identifying future research directions.
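For readers new to the method, the following sketch illustrates the kind of keyword co-occurrence counting that underlies the maps VOSviewer produces. The records and keywords are invented for demonstration, not taken from the study's 815-article dataset.

```python
from itertools import combinations
from collections import Counter

# Hypothetical article records, each reduced to its set of keywords.
records = [
    {"feedback", "automated essay scoring", "writing"},
    {"automated essay scoring", "natural language processing"},
    {"feedback", "natural language processing", "writing"},
]

# Count how often each pair of keywords appears in the same record --
# the co-occurrence matrix that VOSviewer clusters and visualizes.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

for (kw1, kw2), count in cooccurrence.most_common():
    print(f"{kw1} <-> {kw2}: {count}")
```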

Introduction

With the advancement of computer science, a variety of technologies have been integrated into education. Online automated writing evaluation (AWE) is one of the most popular topics in artificial intelligence-enhanced language learning (Huang et al., 2021). In education, formative writing assessment plays a significant role in writing practice because it informs students of both their achievement levels and their specific weaknesses (Stevenson & Phakiti, 2014). Feedback, an essential component of formative writing assessment, is usually provided by an agent such as a teacher who gives corrective information (Hattie & Timperley, 2007). However, faced with large numbers of student essays, teachers may struggle to provide immediate feedback. In this context, AWE can serve as an assistive tool that lessens teachers' workload and contributes to improving learners' writing performance (Parra & Calero, 2019).

Automated writing evaluation (AWE) refers to programs or software that provide immediate computer-generated feedback and scoring on written texts (Shermis et al., 2013; Wilson, Ahrendt, et al., 2021). The core element of an AWE system is a scoring engine supported by technologies such as natural language processing and machine learning algorithms (Wilson & Roscoe, 2020): natural language processing handles linguistic, syntactic, semantic, and discourse features, while statistical algorithms generate holistic scores. Another central component is a feedback engine that provides detailed feedback to help learners revise their writing (Allen et al., 2016). Widely used AWE platforms include Criterion, Write&Improve, My Access!, and WriteToLearn (Hockly, 2019).
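To make this two-engine architecture concrete, here is a minimal sketch that pairs a crude feature extractor with a regression-based scoring engine. It is an illustrative simplification under assumed features and invented training data, not the implementation of Criterion or any other platform named above.

```python
from sklearn.linear_model import LinearRegression

def extract_features(essay: str) -> list[float]:
    """Crude stand-ins for the linguistic features an AWE engine extracts."""
    words = essay.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    n_sentences = max(essay.count(".") + essay.count("!") + essay.count("?"), 1)
    return [n_words, avg_word_len, n_words / n_sentences]

# Invented training data: (essay, human holistic score) pairs.
training = [
    ("Short text. Few ideas.", 2.0),
    ("A longer essay with several sentences. It develops an argument. "
     "It also offers supporting evidence.", 4.0),
]

X = [extract_features(essay) for essay, _ in training]
y = [score for _, score in training]

# The "scoring engine": a statistical model mapping features to a holistic score.
model = LinearRegression().fit(X, y)

new_essay = "This essay states a claim. Then it supports the claim with detail."
print(round(model.predict([extract_features(new_essay)])[0], 2))
```

Real engines use far richer syntactic, semantic, and discourse features, but the pipeline shape (extract features, then map them statistically to a score) is the same.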

AWE implementation offers several benefits. Immediate feedback helped students develop their language and gave them confidence in submitting their essays (O’Neill & Russell, 2019). Because scores rose when students revised their work based on the feedback, the iterative revision process gave students opportunities to notice their progress, which promoted their writing motivation (Wilson, Ahrendt, et al., 2021). Beyond psychological traits, AWE also had positive effects on writing-related outcomes. Students using AWE systems significantly improved their writing accuracy, mainly because they noticed suggestions, explanations, and color-coded lines (Barrot, 2021). Moreover, automated feedback was as effective as comments made by human teachers when the comments concerned structure, organization, conclusion, coherence, and supporting ideas (Liu et al., 2017).

Nevertheless, the disadvantages of AWE include formulaic writing, overcorrection, and negative emotional responses. Scores induced students to attach importance to formulaic writing that values quantity and complexity (Perelman, 2014), and computer-generated comments were criticized for misleading students about the nature of writing. Students tended to meet the standards of AWE systems by developing test-taking strategies or tricks, such as increasing the number of words (Wilson, Ahrendt, et al., 2021). Occasionally, overcorrection discouraged and frustrated students because the program suggested revisions even when there were no errors (Barrot, 2021). More importantly, when receiving automated feedback, students experienced anxiety, pressure, and a sense of being controlled, which influenced their identity representations (Zaini, 2018).
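To see why quantity-driven scoring invites the word-count trick described above, consider a deliberately naive, purely hypothetical scorer that rewards length: duplicating a sentence raises its score without adding any ideas.

```python
def toy_score(essay: str) -> float:
    """A naive scorer that rewards length, like the quantity-driven
    models critics warn about. Purely illustrative."""
    words = essay.split()
    unique_ratio = len(set(words)) / max(len(words), 1)
    return 0.05 * len(words) + unique_ratio  # length dominates for long texts

base = "Technology changes how students write."
padded = " ".join([base] * 20)  # the word-count trick: repeat, don't add ideas

print(toy_score(base))    # 1.25 -- modest score
print(toy_score(padded))  # 5.05 -- higher score despite no new content
```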

Many researchers have investigated the use of AWE using a range of research methods. Wilson, Ahrendt, et al. (2021) adopted activity theory to qualitatively analyze elementary teachers’ perceptions of AWE programs, students’ writing motivation, and the instructional challenges of AWE. Another study used a t-test to confirm the positive influence of AWE tools on undergraduate students’ writing performance (Parra & Calero, 2019). It was also found that AWE tools provided feedback on grammar, punctuation, style, and mechanics.
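As an illustration of the statistical approach mentioned above, a paired-samples t-test compares the same students' writing scores before and after using an AWE tool. The scores below are invented for demonstration and are not from Parra and Calero (2019).

```python
from scipy import stats

# Hypothetical pre/post writing scores for the same ten students;
# a paired t-test asks whether the mean gain differs from zero.
pre  = [62, 70, 65, 58, 74, 69, 61, 66, 72, 63]
post = [68, 75, 70, 64, 78, 72, 66, 71, 77, 66]

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```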
