Chain of Thought Prompting Elicits Knowledge Augmentation

Dingjun Wu, Jing Zhang, Xinmei Huang


Abstract
In the knowledge-augmented deep learning paradigm, domain knowledge is identified and integrated into deep models. Conventional methods typically employ task-specific approaches to gather external knowledge from various sources. In contrast, large language models are extensively pre-trained and can serve as a comprehensive source of external knowledge. In this paper, we propose CoT-KA, a Chain-of-Thought-based method that augments knowledge for deep learning. Unlike conventional augmentation methods, CoT-KA requires neither an additional knowledge retrieval model nor a separate knowledge reasoning model. Our results demonstrate that CoT-KA outperforms both pure CoT-based methods and the non-augmented method on the majority of eleven publicly available benchmarks covering a variety of reasoning tasks.
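The recipe the abstract describes can be summarized in a minimal sketch: prompt an LLM for a chain-of-thought rationale and treat that rationale as the external knowledge appended to the input of a downstream deep model. The prompt template, the [KNOWLEDGE] separator, and the llm_generate helper below are illustrative assumptions, not the paper's exact implementation.

def llm_generate(prompt: str) -> str:
    """Hypothetical placeholder for a large-language-model call that
    returns a generated chain-of-thought rationale. Plug in any LLM
    client here; no specific API is implied by the paper."""
    raise NotImplementedError("supply an LLM backend")

# A zero-shot CoT-style trigger prompt (illustrative format).
COT_PROMPT = "Q: {question}\nA: Let's think step by step."

def augment_with_cot(question: str) -> str:
    """Use the LLM's rationale as augmented knowledge: the generated
    chain of thought is concatenated to the original input, and the
    augmented string is what the downstream model is trained on.
    No retrieval or knowledge-reasoning model is involved."""
    rationale = llm_generate(COT_PROMPT.format(question=question))
    return f"{question} [KNOWLEDGE] {rationale}"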
Anthology ID: 2023.findings-acl.408
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6519–6534
URL: https://aclanthology.org/2023.findings-acl.408
DOI: 10.18653/v1/2023.findings-acl.408
Cite (ACL):
Dingjun Wu, Jing Zhang, and Xinmei Huang. 2023. Chain of Thought Prompting Elicits Knowledge Augmentation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6519–6534, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Chain of Thought Prompting Elicits Knowledge Augmentation (Wu et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.408.pdf