Translate-Train Embracing Translationese Artifacts

Sicheng Yu, Qianru Sun, Hao Zhang, Jing Jiang


Abstract
Translate-train is a general training approach for multilingual tasks. The key idea is to use a translator to generate training data in the target language, mitigating the gap between the source and target languages. However, its performance is often hampered by artifacts in the translated texts (translationese). We discover that such artifacts have common patterns across different languages and can be modeled by deep learning, and subsequently propose a translate-train approach, Translationese Embracing the effect of Artifacts (TEA). TEA learns to mitigate such effects on the training data of the source language (for which both the original and translationese versions are available), and applies the learned module to facilitate inference in the target language. Extensive experiments on the multilingual QA dataset TyDiQA demonstrate that TEA outperforms strong baselines.
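To make the translate-train setup concrete, the sketch below shows the data-generation step the abstract describes: source-language (English) training examples are machine-translated into a target language so a multilingual model can be fine-tuned on target-language text. This is an illustrative assumption-laden sketch, not the authors' TEA implementation; the Helsinki-NLP model name and the toy QA examples are assumptions, and TEA's artifact-mitigation module is not reproduced here.

```python
# Minimal sketch of translate-train data generation (not the paper's TEA code).
# Assumes the Hugging Face `transformers` translation pipeline and the
# Helsinki-NLP/opus-mt-en-ar model (Arabic is one of the TyDiQA languages).
from transformers import pipeline

# Hypothetical English QA training examples (question, context pairs).
en_examples = [
    {"question": "Where is the Eiffel Tower?",
     "context": "The Eiffel Tower is in Paris."},
]

# Off-the-shelf English-to-Arabic translator.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ar")

def translate_example(example):
    """Translate both fields of one example into the target language."""
    return {
        "question": translator(example["question"])[0]["translation_text"],
        "context": translator(example["context"])[0]["translation_text"],
    }

# The translated ("translationese") copies would be added to the training set;
# TEA additionally models the artifacts that such translation introduces.
translated_examples = [translate_example(ex) for ex in en_examples]
print(translated_examples)
```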
Anthology ID:
2022.acl-short.40
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
362–370
URL:
https://aclanthology.org/2022.acl-short.40
DOI:
10.18653/v1/2022.acl-short.40
Cite (ACL):
Sicheng Yu, Qianru Sun, Hao Zhang, and Jing Jiang. 2022. Translate-Train Embracing Translationese Artifacts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 362–370, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Translate-Train Embracing Translationese Artifacts (Yu et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-short.40.pdf
Data
TyDiQA