Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency

Lingfeng Shen, Weiting Tan, Boyuan Zheng, Daniel Khashabi


Abstract
As large language models grow more capable, prompting has become the dominant way to access them. This has motivated the development of strategies for automatically selecting effective language prompts. In this paper, we introduce **pFlat** (prompt flatness), a new metric to quantify the expected utility of a language prompt. This metric is inspired by *flatness* regularization in statistical learning, which quantifies the robustness of a model to perturbations of its parameters. We provide theoretical foundations for this metric and its relationship to other prompt selection metrics, offering a comprehensive understanding of existing methods. Empirically, we show that combining **pFlat** with existing metrics improves both performance and sample efficiency. Our metric outperforms previous prompt selection metrics with an average increase of 10% in Pearson correlation across 6 classification benchmarks, and the prompt selected by our metric achieves 5% higher accuracy than previous metrics across these benchmarks.
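The abstract describes prompt flatness as a robustness-to-parameter-perturbation notion. The following is a minimal sketch (not the authors' exact formulation) of how such a flatness score might be estimated for a fixed prompt: perturb the model parameters with small Gaussian noise and measure the average increase in loss. The names `loss_fn`, `batch`, `sigma`, and `n_samples` are hypothetical placeholders assumed for illustration, and a PyTorch model is assumed.

```python
import torch

@torch.no_grad()
def prompt_flatness(model, loss_fn, batch, sigma=1e-3, n_samples=8):
    """Rough flatness estimate for one prompt: average loss increase under
    isotropic Gaussian parameter perturbations of scale `sigma`.
    A smaller value indicates a flatter (more robust) loss surface."""
    base_loss = loss_fn(model, batch).item()
    originals = [p.detach().clone() for p in model.parameters()]
    increases = []
    for _ in range(n_samples):
        # Perturb every parameter with fresh Gaussian noise.
        for p, p0 in zip(model.parameters(), originals):
            p.copy_(p0 + sigma * torch.randn_like(p0))
        increases.append(loss_fn(model, batch).item() - base_loss)
    # Restore the original parameters.
    for p, p0 in zip(model.parameters(), originals):
        p.copy_(p0)
    return sum(increases) / len(increases)
```

Under this sketch, candidate prompts would each be scored on the same batch of examples, and the flatness score could then be combined with other selection metrics (e.g., likelihood-based ones) as the abstract suggests; the exact combination used in the paper is not reproduced here.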
Anthology ID:
2023.findings-emnlp.523
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7795–7817
URL:
https://aclanthology.org/2023.findings-emnlp.523
DOI:
10.18653/v1/2023.findings-emnlp.523
Cite (ACL):
Lingfeng Shen, Weiting Tan, Boyuan Zheng, and Daniel Khashabi. 2023. Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7795–7817, Singapore. Association for Computational Linguistics.
Cite (Informal):
Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency (Shen et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.523.pdf