Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction

Ji Qi, Chuchun Zhang, Xiaozhi Wang, Kaisheng Zeng, Jifan Yu, Jinxin Liu, Lei Hou, Juanzi Li, Xu Bin


Abstract
Robustness to distribution shifts ensures that NLP models can be successfully applied in the real world, especially in information extraction tasks. However, most prior evaluation benchmarks focus on validating pairwise matching correctness, ignoring the crucial validation of robustness. In this paper, we present the first benchmark that simulates the evaluation of open information extraction models in the real world, where the syntactic and expressive distributions under the same knowledge meaning may drift in various ways. We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique: a set of sentences that convey structured knowledge of the same meaning in different syntactic and expressive forms. By further elaborating a robustness metric, a model is judged to be robust only if its performance is consistently accurate across an entire clique. We conduct experiments on typical models published in the last decade as well as a representative large language model, and the results show that existing successful models exhibit frustrating degradation, with a maximum drop of 23.43 in F1 score. Our resources and code will be publicly available.
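The clique-based robustness idea in the abstract can be made concrete with a small sketch. The Python below is a hypothetical illustration, not the paper's actual metric: it assumes exact-match scoring of (subject, relation, object) triples (real OpenIE benchmarks typically use softer token-level matching) and scores a model by its worst-case F1 within each knowledge-invariant clique, so a model scores well only if it stays accurate on every paraphrase of the same knowledge. All function names and the data layout are my own.

```python
from statistics import mean

def triple_f1(predicted: set, gold: set) -> float:
    """F1 between a model's predicted triples and the gold triples for one sentence."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)          # exact-match true positives (a simplification)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def clique_robustness_score(cliques) -> float:
    """Average over cliques of the worst-case F1 inside each clique.

    Each clique is a list of (predicted_triples, gold_triples) pairs, one pair
    per syntactic/expressive variant of the same underlying knowledge. Taking
    the minimum rewards a model only when it is accurate on every variant.
    """
    return mean(
        min(triple_f1(pred, gold) for pred, gold in clique)
        for clique in cliques
    )

# A clique with two paraphrases of the same fact. Under exact matching, the
# second prediction does not align with the gold triple, so its F1 is 0 and
# the clique's worst-case score collapses to 0 despite the first success.
clique = [
    ({("Marie Curie", "won", "the Nobel Prize")},
     {("Marie Curie", "won", "the Nobel Prize")}),
    ({("the Nobel Prize", "was won by", "Marie Curie")},
     {("Marie Curie", "won", "the Nobel Prize")}),
]
print(f"clique-level robustness: {clique_robustness_score([clique]):.2f}")  # 0.00
```

The worst-case aggregation is one simple way to operationalize "consistently accurate on the overall clique"; an evaluation could equally average within cliques or report the variance, trading strictness for smoothness.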
Anthology ID:
2023.emnlp-main.360
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5876–5890
URL:
https://aclanthology.org/2023.emnlp-main.360
DOI:
10.18653/v1/2023.emnlp-main.360
Cite (ACL):
Ji Qi, Chuchun Zhang, Xiaozhi Wang, Kaisheng Zeng, Jifan Yu, Jinxin Liu, Lei Hou, Juanzi Li, and Xu Bin. 2023. Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5876–5890, Singapore. Association for Computational Linguistics.
Cite (Informal):
Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction (Qi et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.360.pdf
Video:
https://aclanthology.org/2023.emnlp-main.360.mp4