Voluntary use of automated writing evaluation by content course students


Saricaoglu A., Bilki Z.

ReCALL, vol.33, no.3, pp.265-277, 2021 (AHCI)

  • Publication Type: Article
  • Volume: 33 Issue: 3
  • Publication Date: 2021
  • Doi Number: 10.1017/s0958344021000021
  • Journal Name: ReCALL
  • Journal Indexes: Arts and Humanities Citation Index (AHCI), Social Sciences Citation Index (SSCI), Scopus, Academic Search Premier, Computer & Applied Sciences, EBSCO Education Source, Educational research abstracts (ERA), ERIC (Education Resources Information Center), INSPEC, Linguistics & Language Behavior Abstracts, MLA - Modern Language Association Database, Psycinfo
  • Page Numbers: pp.265-277
  • Keywords: automated writing evaluation, voluntary use, accuracy improvement, content course students, written corrective feedback, essay evaluation, teacher, English, technology, impact, accuracy
  • TED University Affiliated: Yes

Abstract

Automated writing evaluation (AWE) technologies are common supplementary tools for helping students improve their language accuracy through automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes. The potential of AWE as a voluntary language learning tool remains unknown. This study reports on the voluntary use of Criterion by English as a foreign language students in two content courses for two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students' performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, the findings also showed that engagement with automated feedback significantly reduced users' errors from the first draft to the last in a total of 11 error categories across the two assignments.