Introduction
As computer science advances, a variety of technologies are being integrated into education. Online automated writing evaluation (AWE) is one of the most popular topics in artificial intelligence-enhanced language learning (Huang et al., 2021). In education, formative writing assessment plays a significant role in writing practice because it informs students of both their achievement levels and their specific weaknesses (Stevenson & Phakiti, 2014). Feedback, an essential component of formative writing assessment, is usually provided by an agent such as a teacher who gives corrective information (Hattie & Timperley, 2007). However, when faced with large numbers of student essays, teachers may struggle to provide immediate feedback within a short time. In this situation, AWE can serve as an assistive tool that lessens teachers’ workload and contributes to the improvement of learners’ writing performance (Parra & Calero, 2019).
Automated writing evaluation (AWE) refers to a program or software that provides immediate computer-generated feedback and scoring on written texts (Shermis et al., 2013; Wilson, Ahrendt, et al., 2021). The core element of an AWE system is a scoring engine supported by technologies such as natural language processing and machine learning algorithms (Wilson & Roscoe, 2020). Natural language processing handles linguistic, syntactic, semantic, and discourse features, while statistical algorithms generate holistic scores. Another central component of AWE technology is a feedback engine that provides detailed feedback to help learners revise their writing (Allen et al., 2016). Widely used AWE platforms currently include Criterion, Write&Improve, My Access!, and WriteToLearn (Hockly, 2019).
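To make the scoring-engine idea concrete, the pipeline described above can be sketched as two stages: feature extraction over the text, then a statistical model that maps features to a holistic score. The sketch below is purely illustrative and does not reflect any specific platform's engine; the features, weights, and 1–6 score range are assumptions for demonstration.

```python
import re

def extract_features(essay: str) -> list[float]:
    """Toy linguistic features of the kind an AWE scoring engine might compute.
    Real engines draw on far richer NLP: syntax, semantics, and discourse."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    avg_sentence_len = n_words / max(len(sentences), 1)
    # Type-token ratio: rough proxy for vocabulary diversity.
    type_token_ratio = len({w.lower() for w in words}) / max(n_words, 1)
    return [n_words, avg_sentence_len, type_token_ratio]

def holistic_score(essay: str,
                   weights: tuple[float, ...] = (0.01, 0.1, 2.0),
                   bias: float = 1.0) -> float:
    """Combine features into a holistic score on a hypothetical 1-6 scale.
    In practice the weights would be learned from human-rated essays."""
    feats = extract_features(essay)
    raw = bias + sum(w * f for w, f in zip(weights, feats))
    return max(1.0, min(6.0, round(raw, 1)))
```

A real feedback engine would additionally map individual feature diagnostics (e.g., low vocabulary diversity) back to localized revision suggestions, which is what distinguishes AWE from plain automated scoring.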
AWE implementation offers various benefits. Immediate feedback has helped students develop their language and gain confidence in submitting their essays (O’Neill & Russell, 2019). Because scores rise when students revise their work based on the feedback, iterative revision gives students opportunities to notice their progress, which promotes writing motivation (Wilson, Ahrendt, et al., 2021). Beyond such psychological traits, AWE has also exerted positive effects on writing-related outcomes. Students using AWE systems significantly improved their writing accuracy, mainly because they noticed suggestions, explanations, and color-coded lines (Barrot, 2021). Moreover, automated feedback proved as effective as human teachers’ comments when those comments concerned structure, organization, conclusion, coherence, and supporting ideas (Liu et al., 2017).
Nevertheless, the disadvantages of AWE lie in formulaic writing, overcorrection, and negative emotions. Scores have induced students to prioritize formulaic writing that values quantity and complexity (Perelman, 2014), and computer-generated comments have been criticized for misleading students about the nature of writing. Students tended to meet the standards of AWE systems by developing test-taking strategies or tricks, such as increasing the number of words (Wilson, Ahrendt, et al., 2021). Occasionally, overcorrection discouraged and frustrated students because the program suggested revisions even where there were no errors (Barrot, 2021). More importantly, when receiving automated feedback, students experienced anxiety, pressure, and a sense of being controlled, which influenced their identity representations (Zaini, 2018).
Many researchers have investigated the use of AWE with different research methods. Wilson, Ahrendt, et al. (2021) adopted activity theory to qualitatively analyze elementary teachers’ perceptions of AWE programs, students’ writing motivation, and the instructional challenges of AWE. Another study used a t-test to confirm the positive influence of AWE tools on undergraduate students’ writing performance (Parra & Calero, 2019); it also found that AWE tools provided feedback on grammar, punctuation, style, and mechanics.