MENLI: Robust Evaluation Metrics from Natural Language Inference

Yanran Chen, Steffen Eger


Abstract
Recently proposed BERT-based evaluation metrics for text generation perform well on standard benchmarks but are vulnerable to adversarial attacks, e.g., relating to information correctness. We argue that this stems (in part) from the fact that they are models of semantic similarity. In contrast, we develop evaluation metrics based on Natural Language Inference (NLI), which we deem a more appropriate modeling. We design a preference-based adversarial attack framework and show that our NLI-based metrics are much more robust to the attacks than the recent BERT-based metrics. On standard benchmarks, our NLI-based metrics outperform existing summarization metrics, but perform below SOTA MT metrics. However, when combining existing metrics with our NLI metrics, we obtain both higher adversarial robustness (15%–30%) and higher-quality metrics as measured on standard benchmarks (+5% to 30%).
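The abstract's idea of scoring with NLI and then mixing that score with an existing metric can be illustrated with a minimal sketch. This is not the authors' implementation: the checkpoint name, the label order, and the mixing weight w are illustrative assumptions (MENLI's actual models and combination scheme are described in the paper).

```python
# Minimal sketch: entailment probability from an off-the-shelf NLI model,
# combined with an existing metric score via a convex combination.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # illustrative NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def nli_score(premise: str, hypothesis: str) -> float:
    """Entailment probability of `hypothesis` given `premise` (e.g., the reference)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
    # For roberta-large-mnli the label order is (contradiction, neutral, entailment);
    # check model.config.id2label when using another checkpoint.
    return probs[2].item()

def combined_score(nli: float, other: float, w: float = 0.3) -> float:
    """Weighted combination of an NLI score with another metric's score.
    Both scores are assumed to lie in [0, 1]; w is an illustrative weight."""
    return w * nli + (1.0 - w) * other

# Usage example
ref = "The cat sat on the mat."
hyp = "A cat is sitting on the mat."
s = nli_score(ref, hyp)
print(f"NLI score: {s:.3f}, combined: {combined_score(s, 0.9):.3f}")
```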
Anthology ID:
2023.tacl-1.47
Volume:
Transactions of the Association for Computational Linguistics, Volume 11
Year:
2023
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
804–825
URL:
https://aclanthology.org/2023.tacl-1.47
DOI:
10.1162/tacl_a_00576
Cite (ACL):
Yanran Chen and Steffen Eger. 2023. MENLI: Robust Evaluation Metrics from Natural Language Inference. Transactions of the Association for Computational Linguistics, 11:804–825.
Cite (Informal):
MENLI: Robust Evaluation Metrics from Natural Language Inference (Chen & Eger, TACL 2023)
PDF:
https://aclanthology.org/2023.tacl-1.47.pdf
Video:
https://aclanthology.org/2023.tacl-1.47.mp4