Voluntary use of automated writing evaluation by content course students



Saricaoglu A., Bilki Z.

ReCALL, vol. 33, no. 3, pp. 265-277, 2021 (AHCI)

  • Publication Type: Article / Full Article
  • Volume: 33, Issue: 3
  • Publication Date: 2021
  • DOI: 10.1017/s0958344021000021
  • Journal Name: ReCALL
  • Indexed In: Arts and Humanities Citation Index (AHCI), Social Sciences Citation Index (SSCI), Scopus, Academic Search Premier, Computer & Applied Sciences, EBSCO Education Source, Educational Research Abstracts (ERA), ERIC (Education Resources Information Center), INSPEC, Linguistics & Language Behavior Abstracts, MLA - Modern Language Association Database, PsycINFO
  • Pages: pp. 265-277
  • Keywords: automated writing evaluation, voluntary use, accuracy improvement, content course students, written corrective feedback, essay evaluation, teacher, English, technology, impact, accuracy
  • TED University Affiliated: Yes

Abstract

Automated writing evaluation (AWE) technologies are common supplementary tools that help students improve their language accuracy through automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes; its potential as a voluntary language learning tool remains unknown. This study reports on the voluntary use of Criterion by English as a foreign language (EFL) students in two content courses across two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students' performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, the findings also showed that engagement with automated feedback significantly reduced users' errors from the first draft to the last in 11 error categories in total across the two assignments.