Improving Domain Generalization for Prompt-Aware Essay Scoring via Disentangled Representation Learning

Zhiwei Jiang, Tianyi Gao, Yafeng Yin, Meng Liu, Hua Yu, Zifeng Cheng, Qing Gu


Abstract
Automated Essay Scoring (AES) aims to score essays written in response to specific prompts. Many AES models have been proposed, but most of them are either prompt-specific or prompt-adaptive and cannot generalize well to “unseen” prompts. This work focuses on improving the generalization ability of AES models from the perspective of domain generalization, where data from target prompts cannot be accessed during training. Specifically, we propose a prompt-aware neural AES model that extracts a comprehensive representation for essay scoring, covering both prompt-invariant and prompt-specific features. To improve the generalization of this representation, we further propose a novel disentangled representation learning framework, in which a contrastive norm-angular alignment strategy and a counterfactual self-training strategy are designed to disentangle the prompt-invariant and prompt-specific information in the representation. Extensive experimental results on both the ASAP and TOEFL11 datasets demonstrate the effectiveness of our method under the domain generalization setting.
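The disentanglement idea in the abstract can be pictured with a minimal sketch. The PyTorch snippet below is an illustrative assumption, not the authors' implementation: it splits an encoder output evenly into prompt-invariant and prompt-specific halves and applies a contrastive loss over directions (angles) to the prompt-specific half, pulling together essays from the same prompt and pushing apart essays from different prompts; the function names, the even split, and the exact loss form are all hypothetical.

import torch
import torch.nn.functional as F

def split_representation(h):
    # Hypothetical even split of the encoder output h into a
    # prompt-invariant half and a prompt-specific half.
    d = h.size(-1) // 2
    return h[..., :d], h[..., d:]

def angular_contrastive_loss(z, prompt_ids):
    # Normalize so only direction (angle) matters, then pull together
    # essays from the same prompt and push apart different prompts.
    z = F.normalize(z, dim=-1)
    sim = z @ z.t()                                    # pairwise cosine similarity
    same = prompt_ids.unsqueeze(0) == prompt_ids.unsqueeze(1)
    not_self = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = sim[same & not_self].mean()                  # intra-prompt similarity
    neg = sim[~same].mean()                            # inter-prompt similarity
    return neg - pos                                   # minimized when prompts cluster by angle

# Toy usage on random data (batch of 8 essay encodings, 4 prompts).
h = torch.randn(8, 256)
prompt_ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
z_invariant, z_specific = split_representation(h)
loss = angular_contrastive_loss(z_specific, prompt_ids)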
Anthology ID: 2023.acl-long.696
Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 12456–12470
URL: https://aclanthology.org/2023.acl-long.696
DOI: 10.18653/v1/2023.acl-long.696
Cite (ACL): Zhiwei Jiang, Tianyi Gao, Yafeng Yin, Meng Liu, Hua Yu, Zifeng Cheng, and Qing Gu. 2023. Improving Domain Generalization for Prompt-Aware Essay Scoring via Disentangled Representation Learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12456–12470, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Improving Domain Generalization for Prompt-Aware Essay Scoring via Disentangled Representation Learning (Jiang et al., ACL 2023)
PDF: https://aclanthology.org/2023.acl-long.696.pdf
Video: https://aclanthology.org/2023.acl-long.696.mp4