INFORM : Information eNtropy based multi-step reasoning FOR large language Models

Chuyue Zhou, Wangjie You, Juntao Li, Jing Ye, Kehai Chen, Min Zhang


Abstract
Large language models (LLMs) have demonstrated exceptional performance in reasoning tasks with dedicated Chain-of-Thought (CoT) prompts. Further enhancing CoT prompts with exquisite exemplars can significantly improve reasoning performance. However, the effectiveness of CoT prompts may fluctuate dramatically with different choices of in-context examples. Additionally, manual construction of rationale steps can be time-consuming, presenting challenges for the widespread adoption of CoT prompting. In this work, we propose a novel approach by introducing information entropy (IE) as a criterion for CoT prompt selection. We extend this criterion to the CoT generation and inference stages, automatically generating CoT prompts with higher information entropy scores and adaptively determining the number of samples. These three stages together form our proposed information-entropy-based multi-step reasoning for large language models, named INFORM. Our experiments across seven reasoning benchmarks utilizing two language models (GPT-3.5-Turbo and text-davinci-003) demonstrate the superiority of INFORM in both performance and efficiency.
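For intuition on the entropy criterion the abstract describes, one natural reading is to sample several reasoning chains for a question and score the Shannon entropy of the resulting answer distribution, H = -sum_a p(a) log p(a): more disagreement among sampled answers yields a higher score. The Python sketch below is a minimal illustration under that assumption; the function name answer_entropy is hypothetical, and the paper's exact formulation may differ.

    # Minimal sketch: score a question by the Shannon entropy of answers
    # sampled from multiple CoT runs. Names are illustrative, not the
    # paper's implementation.
    from collections import Counter
    import math

    def answer_entropy(sampled_answers):
        """Shannon entropy (bits) of the sampled-answer distribution."""
        counts = Counter(sampled_answers)
        total = len(sampled_answers)
        entropy = 0.0
        for count in counts.values():
            p = count / total
            entropy -= p * math.log2(p)
        return entropy

    # Unanimous answers give zero entropy; disagreement raises the score,
    # which under this reading marks a more informative exemplar.
    print(answer_entropy(["8", "8", "8", "8", "8"]))  # 0.0
    print(answer_entropy(["8", "6", "8", "7", "8"]))  # ~1.37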
Anthology ID:
2023.emnlp-main.216
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3565–3576
URL:
https://aclanthology.org/2023.emnlp-main.216
DOI:
10.18653/v1/2023.emnlp-main.216
Cite (ACL):
Chuyue Zhou, Wangjie You, Juntao Li, Jing Ye, Kehai Chen, and Min Zhang. 2023. INFORM : Information eNtropy based multi-step reasoning FOR large language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3565–3576, Singapore. Association for Computational Linguistics.
Cite (Informal):
INFORM : Information eNtropy based multi-step reasoning FOR large language Models (Zhou et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.216.pdf
Video:
https://aclanthology.org/2023.emnlp-main.216.mp4