Curating Datasets for Better Performance with Example Training Dynamics

Aviad Sar-Shalom, Roy Schwartz


Abstract
The landscape of NLP research is dominated by large-scale models trained on colossal datasets, relying on data quantity rather than quality. As an alternative, we propose a method for weighing the relative importance of examples in a dataset based on their Example Training Dynamics (ETD; Swayamdipta et al., 2020), a set of per-example metrics computed during training. We propose a new way of computing the ETD of a dataset, and show that it can be used to improve performance in both in-distribution and out-of-distribution testing. We show that ETD are transferable: they can be computed once and used for training different models, effectively reducing their computation cost. Finally, we suggest an active learning approach for computing ETD during training rather than as a preprocessing step; this approach is not as effective, but dramatically reduces the extra computational cost.
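For context, below is a minimal sketch of the kind of per-example training dynamics the abstract refers to, following the original Dataset Cartography formulation (Swayamdipta et al., 2020) rather than the new computation this paper proposes. All function names, the correctness proxy, and the ambiguity-based selection strategy are illustrative assumptions, not the authors' implementation.

import numpy as np

def training_dynamics(gold_probs: np.ndarray):
    """Compute per-example training dynamics from logged probabilities.

    gold_probs: array of shape (num_epochs, num_examples) holding the
    model's probability of the gold label for each example, logged at
    the end of each training epoch.
    """
    confidence = gold_probs.mean(axis=0)           # mean p(gold) across epochs
    variability = gold_probs.std(axis=0)           # std of p(gold) across epochs
    # Proxy for correctness: fraction of epochs where p(gold) > 0.5
    # (the original formulation uses argmax-based correctness).
    correctness = (gold_probs > 0.5).mean(axis=0)
    return confidence, variability, correctness

def curate(variability: np.ndarray, keep_fraction: float = 0.33) -> np.ndarray:
    """Keep the most 'ambiguous' examples (highest variability), one common
    cartography-based curation strategy."""
    k = int(len(variability) * keep_fraction)
    return np.argsort(-variability)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated logged gold-label probabilities: 5 epochs, 1000 examples.
    probs = rng.uniform(size=(5, 1000))
    conf, var, corr = training_dynamics(probs)
    keep = curate(var)
    print(f"Selected {len(keep)} ambiguous examples out of {probs.shape[1]}")

In practice the probabilities would be logged during an actual training run; the curated subset is then used to retrain (the paper further shows ETD can be reused across different models, amortizing this logging cost).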
Anthology ID:
2023.findings-acl.674
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10597–10608
URL:
https://aclanthology.org/2023.findings-acl.674
DOI:
10.18653/v1/2023.findings-acl.674
Cite (ACL):
Aviad Sar-Shalom and Roy Schwartz. 2023. Curating Datasets for Better Performance with Example Training Dynamics. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10597–10608, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Curating Datasets for Better Performance with Example Training Dynamics (Sar-Shalom & Schwartz, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.674.pdf