CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models

Benjamin Minixhofer, Jonas Pfeiffer, Ivan Vulić


Abstract
While many languages possess processes of joining two or more words to create compound words, previous studies have typically been limited to languages with highly productive compound formation (e.g., German, Dutch), and there is no public dataset containing compound and non-compound words across a large number of languages. In this work, we systematically study decompounding, the task of splitting compound words into their constituents, at a wide scale. We first address the data gap by introducing a dataset of 255k compound and non-compound words across 56 diverse languages obtained from Wiktionary. We then use this dataset to evaluate an array of Large Language Models (LLMs) on the decompounding task. We find that LLMs perform poorly, especially on words that are tokenized unfavorably by subword tokenization. We thus introduce a novel methodology to train dedicated models for decompounding. The proposed two-stage procedure relies on a fully self-supervised objective in the first stage, while the second, supervised learning stage optionally fine-tunes the model on the annotated Wiktionary data. Our self-supervised models outperform the prior best unsupervised decompounding models by 13.9% accuracy on average. Our fine-tuned models outperform all prior (language-specific) decompounding tools. Furthermore, we use our models to leverage decompounding during the creation of a subword tokenizer, which we refer to as CompoundPiece. CompoundPiece tokenizes compound words more favorably on average, leading to improved performance on the decompounding task over an otherwise equivalent model using SentencePiece tokenization.
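To make the tokenization issue concrete, the short Python sketch below (using the Hugging Face transformers library) inspects how a standard SentencePiece-based tokenizer segments a German compound, and how a dedicated seq2seq decompounding model could be invoked. Both checkpoint names are illustrative assumptions, not necessarily those used or released by the paper.

```python
from transformers import AutoTokenizer, pipeline

# Illustrative only: google/mt5-small is an arbitrary SentencePiece-based
# checkpoint chosen to demonstrate the effect, not the paper's model.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

# "Feuerwehrauto" ("fire engine") decomposes into Feuer + Wehr + Auto;
# the subword pieces below need not align with these morpheme boundaries,
# which is the "unfavorable" tokenization the abstract refers to.
print(tokenizer.tokenize("Feuerwehrauto"))

# A dedicated decompounding model can be framed as text-to-text generation
# (compound in, space-separated constituents out). The hub ID below is a
# hypothetical placeholder; check the paper's repository for the actual
# released checkpoints.
decompound = pipeline("text2text-generation", model="benjamin/compoundpiece")
print(decompound("Feuerwehrauto"))
```

The first half runs with any SentencePiece-based checkpoint; the second half assumes a released decompounding model exists under that ID and is only a sketch of the intended usage pattern.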
Anthology ID:
2023.emnlp-main.24
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
343–359
URL:
https://aclanthology.org/2023.emnlp-main.24
DOI:
10.18653/v1/2023.emnlp-main.24
Cite (ACL):
Benjamin Minixhofer, Jonas Pfeiffer, and Ivan Vulić. 2023. CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 343–359, Singapore. Association for Computational Linguistics.
Cite (Informal):
CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models (Minixhofer et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.24.pdf
Video:
https://aclanthology.org/2023.emnlp-main.24.mp4