PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer

Lichang Chen, Jiuhai Chen, Heng Huang, Minhao Cheng


Abstract
Recent studies show that prompt tuning can better leverage the power of large language models than fine-tuning on downstream natural language understanding tasks. However, existing prompt tuning methods suffer from training instability: the variance of scores under different random seeds is quite large. To address this critical problem, we first visualize the loss landscape of vanilla prompt tuning and find that it is precipitous, so a slight change in the input data can cause a large fluctuation in the loss. This is an essential factor behind the instability of prompt tuning. Based on this observation, we introduce perturbation-based regularizers, which smooth the loss landscape, into prompt tuning. We propose a new algorithm, called Prompt Tuning with Perturbation-based regularizer (PTP), which not only alleviates training instability dramatically but also boosts the performance of prompt tuning. We design two kinds of perturbation-based regularizers: random-noise-based and adversarial-based. In particular, our proposed perturbations can be applied flexibly in both the text space and the embedding space. Extensive experiments show the effectiveness of our proposed methods in stabilizing training. Our new algorithms improve the state-of-the-art prompt tuning methods by 1.94% and 2.34% on the SuperGLUE and FewGLUE benchmarks, respectively.
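
The sketch below illustrates the general idea described in the abstract, not the authors' implementation: a trainable soft prompt is optimized against a task loss plus a regularizer that penalizes how much the model's predictions change under a perturbation of the input embeddings, either random Gaussian noise or a one-step adversarial perturbation. The toy backbone, dimensions, and hyperparameters are assumptions made for illustration only; the paper's actual method also supports perturbations in the text space.

```python
# Illustrative sketch (not the authors' code) of perturbation-based
# regularization for prompt tuning. All model sizes and hyperparameters
# below are assumptions for demonstration purposes.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, embed_dim, num_classes, prompt_len = 100, 32, 2, 8

# Frozen "backbone": token embeddings + classifier head (stand-in for a PLM).
token_emb = torch.nn.Embedding(vocab_size, embed_dim)
classifier = torch.nn.Linear(embed_dim, num_classes)
for p in list(token_emb.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

# Only the soft prompt is trainable, as in prompt tuning.
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def forward(input_embeds):
    """Prepend the soft prompt, mean-pool, and classify."""
    batch = input_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    hidden = torch.cat([prompt, input_embeds], dim=1).mean(dim=1)
    return classifier(hidden)

def perturb(input_embeds, logits_clean, mode="random", eps=1e-2):
    """Random noise or a one-step (FGSM-style) adversarial perturbation of the
    input embeddings -- loosely analogous to the paper's random-noise-based
    and adversarial-based regularizers."""
    if mode == "random":
        return input_embeds + eps * torch.randn_like(input_embeds)
    delta = torch.zeros_like(input_embeds, requires_grad=True)
    adv_logits = forward(input_embeds + delta)
    kl = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                  F.softmax(logits_clean.detach(), dim=-1),
                  reduction="batchmean")
    grad, = torch.autograd.grad(kl, delta)
    return input_embeds + eps * grad.sign()

# One illustrative training step on a random batch.
tokens = torch.randint(0, vocab_size, (4, 16))
labels = torch.randint(0, num_classes, (4,))
input_embeds = token_emb(tokens)

logits = forward(input_embeds)
task_loss = F.cross_entropy(logits, labels)

logits_pert = forward(perturb(input_embeds, logits, mode="adversarial"))
reg_loss = F.kl_div(F.log_softmax(logits_pert, dim=-1),
                    F.softmax(logits, dim=-1).detach(),
                    reduction="batchmean")

loss = task_loss + 1.0 * reg_loss  # regularizer weight chosen arbitrarily here
optimizer.zero_grad()
loss.backward()
optimizer.step()
```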
Anthology ID:
2023.emnlp-main.833
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13512–13525
URL:
https://aclanthology.org/2023.emnlp-main.833
DOI:
10.18653/v1/2023.emnlp-main.833
Cite (ACL):
Lichang Chen, Jiuhai Chen, Heng Huang, and Minhao Cheng. 2023. PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13512–13525, Singapore. Association for Computational Linguistics.
Cite (Informal):
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer (Chen et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.833.pdf
Video:
https://aclanthology.org/2023.emnlp-main.833.mp4