Using Wikidata for Enhancing Compositionality in Pretrained Language Models

Meriem Beloucif, Mihir Bansal, Chris Biemann


Abstract
One of the many advantages of pre-trained language models (PLMs) such as BERT and RoBERTa is their flexibility and contextual nature. These features give PLMs strong capabilities for representing lexical semantics. However, PLMs seem incapable of capturing high-level semantics in terms of compositionality. We show that, when augmented with the relevant semantic knowledge, PLMs learn to capture a higher degree of lexical compositionality. We annotate a large dataset from Wikidata highlighting a type of semantic inference that is easy for humans to understand but difficult for PLMs, such as the correlation between age and date of birth. We use this resource for fine-tuning DistilBERT, BERT-large and RoBERTa. Our results show that the performance of PLMs on the test data consistently improves when they are augmented with such a rich resource. Our results are corroborated by a consistent improvement over most GLUE benchmark natural language understanding tasks.
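As a minimal illustrative sketch (not the authors' released code), the fine-tuning setup described in the abstract could be approximated with Hugging Face Transformers roughly as follows: masked-language-model fine-tuning of DistilBERT on Wikidata-derived sentences. The file name "wikidata_sentences.csv" and the "text" column are hypothetical placeholders, and the hyperparameters are generic defaults rather than the paper's settings.

```python
# Illustrative sketch: MLM fine-tuning of DistilBERT on Wikidata-derived sentences
# (e.g. statements linking a date of birth to an age). Paths and column names are
# hypothetical placeholders, not taken from the paper.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# Hypothetical CSV with one Wikidata-derived sentence per row in a "text" column.
dataset = load_dataset("csv", data_files={"train": "wikidata_sentences.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard random masking for the masked-language-modelling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="distilbert-wikidata",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

The same recipe would apply to BERT-large and RoBERTa by swapping the model checkpoint name; evaluation on GLUE tasks would then use the fine-tuned checkpoint as the starting point for task-specific heads.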
Anthology ID:
2023.ranlp-1.19
Volume:
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Month:
September
Year:
2023
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
170–178
URL:
https://aclanthology.org/2023.ranlp-1.19
Cite (ACL):
Meriem Beloucif, Mihir Bansal, and Chris Biemann. 2023. Using Wikidata for Enhancing Compositionality in Pretrained Language Models. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 170–178, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Using Wikidata for Enhancing Compositionality in Pretrained Language Models (Beloucif et al., RANLP 2023)
PDF:
https://aclanthology.org/2023.ranlp-1.19.pdf