Adversarial Multi-task Learning for End-to-end Metaphor Detection

Shenglong Zhang, Ying Liu


Abstract
Metaphor detection (MD) suffers from limited training data. In this paper, we start from a linguistic rule called the Metaphor Identification Procedure and propose a novel multi-task learning framework that transfers knowledge from basic sense discrimination (BSD) to MD. BSD is constructed from word sense disambiguation (WSD), which has copious amounts of data. We leverage adversarial training to align the data distributions of MD and BSD in the same feature space, so that task-invariant representations can be learned. To capture fine-grained alignment patterns, we exploit the multi-mode structures of MD and BSD. Our method is fully end-to-end and can mitigate the data scarcity problem in MD. Competitive results are reported on four public datasets. Our code and datasets are available.
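The abstract does not spell out how the adversarial alignment is implemented. A common way to realize this kind of task-invariant feature learning is a gradient reversal layer: a shared encoder feeds a task (domain) discriminator whose gradient is sign-flipped before reaching the encoder, so the encoder is pushed toward features the discriminator cannot separate. The toy NumPy sketch below illustrates that mechanism only; the encoder, discriminator, and hyperparameters are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def grad_reverse(grad, lam=1.0):
    """Gradient reversal: identity in the forward pass,
    multiplies the incoming gradient by -lam in the backward pass."""
    return -lam * grad

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 3)) * 0.1   # shared linear encoder (toy)
w_dom = rng.normal(size=3) * 0.1        # task/domain discriminator (logistic)

# Toy inputs standing in for the two tasks (MD vs. BSD), with task labels 0/1
X = rng.normal(size=(8, 4))
d = np.array([0, 1] * 4, dtype=float)

lr, lam = 0.1, 1.0
for _ in range(50):
    H = X @ W_enc                          # shared feature space
    p = 1.0 / (1.0 + np.exp(-(H @ w_dom))) # P(task = 1 | features)
    g_logit = (p - d) / len(d)             # d(cross-entropy)/d(logit)
    grad_w_dom = H.T @ g_logit             # gradient for the discriminator
    grad_H = np.outer(g_logit, w_dom)      # gradient flowing back into features
    # The discriminator descends normally (learns to tell the tasks apart)...
    w_dom -= lr * grad_w_dom
    # ...while the encoder receives the REVERSED gradient, so it learns
    # representations the discriminator cannot use to separate the tasks.
    W_enc -= lr * (X.T @ grad_reverse(grad_H, lam))
```

In a full system the task-specific classification losses (for MD and BSD) would be trained jointly alongside this adversarial term; the sketch isolates only the alignment step.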
Anthology ID:
2023.findings-acl.96
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1483–1497
URL:
https://aclanthology.org/2023.findings-acl.96
DOI:
10.18653/v1/2023.findings-acl.96
Cite (ACL):
Shenglong Zhang and Ying Liu. 2023. Adversarial Multi-task Learning for End-to-end Metaphor Detection. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1483–1497, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Adversarial Multi-task Learning for End-to-end Metaphor Detection (Zhang & Liu, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.96.pdf
Video:
https://aclanthology.org/2023.findings-acl.96.mp4