Transformer-based Hebrew NLP models for Short Answer Scoring in Biology

Abigail Gurin Schleifer, Beata Beigman Klebanov, Moriah Ariely, Giora Alexandron


Abstract
Pre-trained large language models (PLMs) are adaptable to a wide range of downstream tasks by fine-tuning their rich contextual embeddings to the task, often without requiring much task-specific data. In this paper, we explore the use of a recently developed Hebrew PLM, AlephBERT, for automated short answer grading of high school biology items. We show that the AlephBERT-based system outperforms a strong CNN-based baseline, and that it generalizes unexpectedly well in a zero-shot paradigm to items on an unseen topic that address the same underlying biological concepts, opening up the possibility of automatically assessing new items without item-specific fine-tuning.
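For readers wanting a concrete starting point, below is a minimal sketch of the kind of fine-tuning setup the abstract describes: short answer scoring framed as sequence classification on top of AlephBERT. This is not the authors' implementation. The checkpoint name (onlplab/alephbert-base) is the publicly released AlephBERT model, while the binary label scheme, the toy Hebrew answers, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): fine-tune AlephBERT for
# short-answer scoring cast as sequence classification.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "onlplab/alephbert-base"  # public AlephBERT checkpoint
NUM_LABELS = 2                         # assumption: correct / incorrect

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

# Toy stand-in for a corpus of scored Hebrew student answers; the real
# items, rubric, and labels come from the paper's biology dataset.
# "The protein is produced in the ribosome" (correct) vs.
# "The protein is produced in the nucleus" (incorrect).
train_ds = Dataset.from_dict({
    "text": ["החלבון מיוצר בריבוזום", "החלבון מיוצר בגרעין"],
    "label": [1, 0],
}).map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alephbert-asag",       # hypothetical output path
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    ),
    train_dataset=train_ds,
)
trainer.train()
```

In the zero-shot setting the abstract mentions, a model fine-tuned this way would simply be run, without further training, on answers to items it never saw, e.g. via trainer.predict on a held-out-topic dataset.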
Anthology ID:
2023.bea-1.46
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
550–555
URL:
https://aclanthology.org/2023.bea-1.46
DOI:
10.18653/v1/2023.bea-1.46
Cite (ACL):
Abigail Gurin Schleifer, Beata Beigman Klebanov, Moriah Ariely, and Giora Alexandron. 2023. Transformer-based Hebrew NLP models for Short Answer Scoring in Biology. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 550–555, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Transformer-based Hebrew NLP models for Short Answer Scoring in Biology (Gurin Schleifer et al., BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.46.pdf