Same Trends, Different Answers: Insights from a Replication Study of Human Plausibility Judgments on Narrative Continuations

Yiru Li, Huiyuan Lai, Antonio Toral, Malvina Nissim


Abstract
We reproduce the human evaluation of the narrative continuation task presented by Chakrabarty et al. (2022). This experiment is performed as part of the ReproNLP Shared Task on Reproducibility of Evaluations in NLP (Track C). Our main goal is to reproduce the original study under conditions as similar as possible to those of the original work. Specifically, we follow the original experimental design and perform human evaluations on the data from the original study, while describing the differences between the two studies. We then present the results of the two studies, together with an analysis of their similarities. Inter-annotator agreement (Krippendorff's alpha) in the reproduction study is lower than in the original study, but the human evaluation results of the two studies show the same trends; that is, our results support the findings of the original study.
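As an illustration only (not the paper's actual code), here is a minimal sketch of how inter-annotator agreement of the kind reported in the abstract might be computed with the `krippendorff` Python package. The ratings matrix and the choice of an ordinal scale are assumptions for the example; the paper may have used a different setup.

```python
# Minimal sketch (not the authors' code): Krippendorff's alpha for
# plausibility ratings, using the `krippendorff` PyPI package.
import numpy as np
import krippendorff

# Hypothetical reliability data: rows are annotators, columns are
# evaluated continuations; np.nan marks items an annotator did not rate.
ratings = np.array([
    [1.0, 2.0, 3.0, 3.0, np.nan],
    [1.0, 2.0, 3.0, 2.0, 4.0],
    [np.nan, 3.0, 3.0, 3.0, 4.0],
])

# Plausibility judgments are assumed to lie on an ordered scale, so
# "ordinal" is used here; "nominal" or "interval" would change the result.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```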
Anthology ID:
2023.humeval-1.15
Volume:
Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems
Month:
September
Year:
2023
Address:
Varna, Bulgaria
Editors:
Anya Belz, Maja Popović, Ehud Reiter, Craig Thomson, João Sedoc
Venues:
HumEval | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
190–203
URL:
https://aclanthology.org/2023.humeval-1.15
Cite (ACL):
Yiru Li, Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2023. Same Trends, Different Answers: Insights from a Replication Study of Human Plausibility Judgments on Narrative Continuations. In Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems, pages 190–203, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Same Trends, Different Answers: Insights from a Replication Study of Human Plausibility Judgments on Narrative Continuations (Li et al., HumEval-WS 2023)
PDF:
https://aclanthology.org/2023.humeval-1.15.pdf