Towards Effective Long-Form QA with Evidence Augmentation

Mengxia Yu, Sara Rosenthal, Mihaela Bornea, Avi Sil


Abstract
In this study, we focus on the challenge of improving Long-form Question Answering (LFQA) by extracting and effectively utilizing knowledge from a large set of retrieved passages. We first demonstrate the importance of accurate evidence retrieval for LFQA, showing that optimally extracted knowledge from the passages significantly benefits generation. We also show that the choice of generative model affects the system’s ability to leverage the evidence and produce answers that are grounded in the retrieved passages. We propose a Mixture of Experts (MoE) model as an alternative to the Fusion-in-Decoder (FiD) architecture used in state-of-the-art LFQA systems, and we compare the two models in our experiments.
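
As background for the comparison in the abstract, below is a minimal sketch of the Fusion-in-Decoder idea in PyTorch. This is our own illustration, not the authors' released code: all class names, dimensions, and hyperparameters are placeholders, and positional encodings and causal masking are omitted for brevity. The core mechanism is that each (question, passage) pair is encoded independently, and the per-passage encodings are concatenated so the decoder attends over all retrieved evidence jointly.

import torch
import torch.nn as nn

class FiDSketch(nn.Module):
    """Fusion-in-Decoder sketch: encode each (question + passage) pair
    independently, concatenate encoder outputs along the sequence axis,
    and let the decoder cross-attend over all passages at once."""

    def __init__(self, vocab_size=32000, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, passage_ids, answer_ids):
        # passage_ids: (batch, n_passages, seq_len), question prepended to each passage
        b, n, s = passage_ids.shape
        # Encode every (question + passage) pair independently.
        enc = self.encoder(self.embed(passage_ids.reshape(b * n, s)))
        # Fuse: concatenate the per-passage encodings along the sequence axis.
        fused = enc.reshape(b, n * s, -1)
        # The decoder cross-attends over all passages jointly.
        dec = self.decoder(self.embed(answer_ids), fused)
        return self.lm_head(dec)

model = FiDSketch()
passages = torch.randint(0, 32000, (2, 5, 64))  # 2 questions, 5 passages each
answers = torch.randint(0, 32000, (2, 16))
logits = model(passages, answers)               # (2, 16, 32000)

Fusing encoder outputs rather than raw input tokens keeps encoding cost linear in the number of passages while still allowing the decoder to combine evidence across all of them; the paper's proposed MoE alternative is not sketched here.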
Anthology ID: 2023.gem-1.13
Volume: Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month: December
Year: 2023
Address: Singapore
Editors: Sebastian Gehrmann, Alex Wang, João Sedoc, Elizabeth Clark, Kaustubh Dhole, Khyathi Raghavi Chandu, Enrico Santus, Hooman Sedghamiz
Venues: GEM | WS
Publisher: Association for Computational Linguistics
Pages: 155–164
URL: https://aclanthology.org/2023.gem-1.13
Cite (ACL): Mengxia Yu, Sara Rosenthal, Mihaela Bornea, and Avi Sil. 2023. Towards Effective Long-Form QA with Evidence Augmentation. In Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 155–164, Singapore. Association for Computational Linguistics.
Cite (Informal): Towards Effective Long-Form QA with Evidence Augmentation (Yu et al., GEM-WS 2023)
PDF: https://aclanthology.org/2023.gem-1.13.pdf