EffEval: A Comprehensive Evaluation of Efficiency for MT Evaluation Metrics

Daniil Larionov, Jens Grünwald, Christoph Leiter, Steffen Eger


Abstract
Efficiency is a key property to foster inclusiveness and reduce environmental costs, especially in an era of LLMs. In this work, we provide a comprehensive evaluation of efficiency for MT evaluation metrics. Our approach involves replacing computation-intensive transformers with lighter alternatives and employing linear and quadratic approximations for alignment algorithms on top of LLM representations. We evaluate six (reference-free and reference-based) metrics across three MT datasets and examine 16 lightweight transformers. In addition, we look into the training efficiency of metrics like COMET by utilizing adapters. Our results indicate that (a) TinyBERT provides the optimal balance between quality and efficiency; (b) CPU speed-ups are more substantial than those on GPU; (c) WMD approximations yield no efficiency gains while reducing quality; and (d) adapters enhance training efficiency (regarding backward pass speed and memory requirements) as well as, in some cases, metric quality. These findings can help to strike a balance between evaluation speed and quality, which is essential for effective NLG systems. Furthermore, our research contributes to the ongoing efforts to optimize NLG evaluation metrics with minimal impact on performance. To our knowledge, ours is the most comprehensive analysis of different aspects of efficiency for MT metrics conducted so far.
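To illustrate the kind of alignment approximation the abstract refers to, below is a minimal NumPy sketch of the Relaxed Word Mover's Distance (RWMD) lower bound, which replaces the full optimal-transport problem in WMD with one-sided nearest-neighbor assignments. This is an illustrative stand-in, not the paper's actual implementation; the function name and interface are hypothetical, and in practice the token embeddings would come from a transformer rather than being passed in directly.

```python
import numpy as np

def relaxed_wmd(X, Y, wx, wy):
    """Relaxed WMD lower bound between two token-embedding sets.

    X:  (n, d) embeddings of hypothesis tokens
    Y:  (m, d) embeddings of reference tokens
    wx: (n,) hypothesis token weights, summing to 1
    wy: (m,) reference token weights, summing to 1

    WMD solves a transport problem with marginal constraints on both
    sides; RWMD drops one side's constraints, so each token's mass
    flows entirely to its nearest counterpart. Taking the max of the
    two one-sided bounds gives the tightest lower bound.
    """
    # Pairwise Euclidean distances between all token pairs.
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # Each hypothesis token moves to its nearest reference token.
    l1 = float(wx @ D.min(axis=1))
    # Each reference token moves to its nearest hypothesis token.
    l2 = float(wy @ D.min(axis=0))
    return max(l1, l2)
```

The full WMD requires solving a transport problem (e.g., with a network-simplex or Sinkhorn solver); the relaxation above needs only a distance matrix and row/column minima, which is the sort of cheaper approximation whose quality/efficiency trade-off the paper evaluates.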
Anthology ID:
2023.findings-emnlp.7
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
78–96
URL:
https://aclanthology.org/2023.findings-emnlp.7
DOI:
10.18653/v1/2023.findings-emnlp.7
Cite (ACL):
Daniil Larionov, Jens Grünwald, Christoph Leiter, and Steffen Eger. 2023. EffEval: A Comprehensive Evaluation of Efficiency for MT Evaluation Metrics. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 78–96, Singapore. Association for Computational Linguistics.
Cite (Informal):
EffEval: A Comprehensive Evaluation of Efficiency for MT Evaluation Metrics (Larionov et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.7.pdf