BMX: Boosting Natural Language Generation Metrics with Explainability

Christoph Leiter, Hoa Nguyen, Steffen Eger

Abstract
State-of-the-art natural language generation evaluation metrics are based on black-box language models. Hence, recent works have considered their explainability, with the goals of making the metrics more understandable for humans and enabling better analysis of metrics, including their failure cases. In contrast, we explicitly leverage explanations to boost the metrics’ performance. In particular, we treat feature-importance explanations as word-level scores, which we convert, via power means, into a segment-level score. We then combine this segment-level score with the original metric to obtain a better metric. Our experiments show improvements for multiple metrics across machine translation (MT) and summarization datasets. While the improvements on MT are small, they are strong for summarization. Notably, BMX with the LIME explainer and preselected parameters achieves an average improvement of 0.087 points in Spearman correlation on the system-level evaluation of SummEval.
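The abstract's core recipe (aggregate word-level feature-importance scores with a power mean, then combine the result with the original segment-level metric score) can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: the helper names, the interpolation weight `w`, and the linear combination are assumptions; only the power-mean aggregation and the combination with the original metric come from the abstract.

```python
import numpy as np

def power_mean(scores, p):
    """Power (generalized) mean of word-level explanation scores.

    p = 1 gives the arithmetic mean, p -> -inf approaches the minimum,
    p -> +inf approaches the maximum. Scores are assumed positive;
    shift or clip them first if an explainer returns negative importances.
    """
    scores = np.asarray(scores, dtype=float)
    if p == 0:
        # Limit case of the power mean: geometric mean.
        return float(np.exp(np.mean(np.log(scores))))
    return float(np.mean(scores ** p) ** (1.0 / p))

def bmx_score(metric_score, word_scores, p=1.0, w=0.5):
    """Combine the original segment-level metric score with the
    power mean of its feature-importance explanation.

    `w` and the linear interpolation are illustrative placeholders;
    how the two scores are combined and which p is used are
    parameters the paper selects, not fixed here.
    """
    return w * metric_score + (1.0 - w) * power_mean(word_scores, p)

# Example: a metric score of 0.71 and LIME-style word importances.
print(bmx_score(0.71, [0.9, 0.4, 0.65, 0.8], p=-2.0, w=0.7))
```

The exponent p steers the aggregation: a negative p emphasizes the lowest word scores, i.e., the words the explainer flags as most problematic, while a large positive p tracks the best-scoring words.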
Anthology ID:
2024.findings-eacl.150
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2274–2288
URL:
https://aclanthology.org/2024.findings-eacl.150
Cite (ACL):
Christoph Leiter, Hoa Nguyen, and Steffen Eger. 2024. BMX: Boosting Natural Language Generation Metrics with Explainability. In Findings of the Association for Computational Linguistics: EACL 2024, pages 2274–2288, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
BMX: Boosting Natural Language Generation Metrics with Explainability (Leiter et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.150.pdf
Software:
 2024.findings-eacl.150.software.zip
Note:
 2024.findings-eacl.150.note.zip