Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning

Ruosen Li, Xinya Du

Abstract
Neural models, including large language models (LLMs), achieve superior performance on multi-hop question answering. To elicit reasoning capabilities from LLMs, recent works propose using the chain-of-thought (CoT) mechanism to generate both the reasoning chain and the answer, which enhances the model's ability to conduct multi-hop reasoning. However, several challenges remain, such as inaccurate reasoning, hallucination, and a lack of interpretability. On the other hand, information extraction (IE) identifies entities, relations, and events grounded in the text. The extracted structured information can be easily interpreted by both humans and machines (Grishman, 2019). In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering, especially for the reasoning process. Empirical results and human evaluations show that our framework generates more faithful reasoning chains and substantially improves QA performance on two benchmark datasets. Moreover, the extracted structures themselves naturally provide grounded explanations that are preferred by humans over generated reasoning chains and saliency-based explanations.
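As a rough illustration of the idea described in the abstract (not the authors' actual pipeline), the sketch below builds a small entity-relation graph from hypothetical IE triples and answers a two-hop question by chaining relations; the triples, relation names, and helper functions are invented for illustration only.

# Hypothetical sketch: multi-hop QA over an extracted entity-relation graph.
# The triples, question decomposition, and graph layout are illustrative
# assumptions, not the paper's actual method.
from collections import defaultdict

# Triples as they might come out of an IE system: (subject, relation, object).
triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
]

# Index the triples by (entity, relation) for lookup.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[(subj, rel)].append(obj)

def follow(entity, relation):
    """Return entities reachable from `entity` via `relation`."""
    return graph.get((entity, relation), [])

# Two-hop question: "Where was the director of Inception born?"
# Hop 1: film -> director; Hop 2: director -> birthplace.
directors = follow("Inception", "directed_by")
birthplaces = [place for d in directors for place in follow(d, "born_in")]
print(birthplaces)  # ['London'] -- each hop is grounded in an extracted triple

Because every hop corresponds to an extracted triple, the resulting chain of triples can itself serve as a grounded explanation of the answer, which is the intuition behind the structures the paper evaluates.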
Anthology ID:
2023.findings-emnlp.452
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6779–6789
URL:
https://aclanthology.org/2023.findings-emnlp.452
DOI:
10.18653/v1/2023.findings-emnlp.452
Cite (ACL):
Ruosen Li and Xinya Du. 2023. Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6779–6789, Singapore. Association for Computational Linguistics.
Cite (Informal):
Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning (Li & Du, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.452.pdf