Rating Short L2 Essays on the CEFR Scale with GPT-4

Kevin P. Yancey, Geoffrey Laflair, Anthony Verardi, Jill Burstein


Abstract
Essay scoring is a critical task used to evaluate second-language (L2) writing proficiency on high-stakes language assessments. While automated scoring approaches are mature and have been in use for decades, human scoring is still considered the gold standard, despite its high costs and well-known issues such as rater fatigue and bias. The recent introduction of large language models (LLMs) brings new opportunities for automated scoring. In this paper, we evaluate how well GPT-3.5 and GPT-4 can rate short essay responses written by L2 English learners on a high-stakes language assessment, computing inter-rater agreement with human ratings. Results show that when calibration examples are provided, GPT-4 can perform almost as well as modern Automatic Writing Evaluation (AWE) methods, but agreement with human ratings can vary depending on the test-taker’s first language (L1).
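The paper itself does not include code, but the setup the abstract describes can be sketched. The sketch below, in Python, shows one plausible way to prompt GPT-4 with calibration examples (human-rated essays) and then measure inter-rater agreement against human CEFR ratings. The prompt wording, the calibration essays, and the choice of quadratically weighted kappa are all illustrative assumptions, not the authors' actual materials or metric.

```python
# Minimal sketch: few-shot ("calibration examples") CEFR rating with GPT-4,
# plus an inter-rater agreement check. All prompts, examples, and the
# agreement metric are assumptions for illustration, not the paper's own.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

# Hypothetical calibration examples: (essay excerpt, human CEFR rating).
CALIBRATION = [
    ("I like dog. Dog is my friend. We play in park.", "A1"),
    ("Last summer I traveled to Spain with my family and we visited "
     "many old cities.", "B1"),
    ("The proliferation of remote work has fundamentally reshaped urban "
     "economies, though its long-term effects remain contested.", "C1"),
]

def rate_essay(essay: str) -> str:
    """Ask GPT-4 for a CEFR rating, priming it with calibration examples."""
    messages = [{"role": "system",
                 "content": "You are an expert rater of short essays written "
                            "by L2 English learners. Rate each essay on the "
                            "CEFR scale (A1-C2). Answer with the level only."}]
    # Present each calibration example as a prior user/assistant exchange.
    for text, level in CALIBRATION:
        messages.append({"role": "user", "content": f"Essay:\n{text}"})
        messages.append({"role": "assistant", "content": level})
    messages.append({"role": "user", "content": f"Essay:\n{essay}"})
    reply = client.chat.completions.create(
        model="gpt-4", temperature=0, messages=messages)
    return reply.choices[0].message.content.strip()

def agreement(human: list[str], model: list[str]) -> float:
    """Quadratically weighted kappa between human and model ratings,
    treating CEFR levels as ordered integer labels."""
    to_int = {lvl: i for i, lvl in enumerate(CEFR_LEVELS)}
    return cohen_kappa_score([to_int[h] for h in human],
                             [to_int[m] for m in model],
                             weights="quadratic")
```

Weighted kappa is a common agreement statistic in essay scoring because it penalizes large rating disagreements (e.g., A2 vs. C1) more heavily than adjacent ones; per-L1 agreement, as studied in the paper, would be computed by grouping test takers by first language before calling `agreement`.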
Anthology ID:
2023.bea-1.49
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
576–584
URL:
https://aclanthology.org/2023.bea-1.49
DOI:
10.18653/v1/2023.bea-1.49
Cite (ACL):
Kevin P. Yancey, Geoffrey Laflair, Anthony Verardi, and Jill Burstein. 2023. Rating Short L2 Essays on the CEFR Scale with GPT-4. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 576–584, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Rating Short L2 Essays on the CEFR Scale with GPT-4 (Yancey et al., BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.49.pdf