Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data

Canwen Xu, Daya Guo, Nan Duan, Julian McAuley


Abstract
Chat models, such as ChatGPT, have shown impressive capabilities and have been rapidly adopted across numerous domains. However, these models are only accessible through a restricted API, creating barriers for new research and progress in the field. We propose a pipeline that can automatically generate a high-quality multi-turn chat corpus by leveraging ChatGPT to engage in a conversation with itself. Subsequently, we employ parameter-efficient tuning to enhance LLaMA, an open-source large language model. The resulting model, named Baize, demonstrates good performance in multi-turn dialogues with guardrails that minimize potential risks. Additionally, we propose a new technique called Self-Distill with Feedback, to further improve the performance of the Baize models with feedback from ChatGPT.
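The abstract's self-chat idea can be illustrated with a minimal sketch: one model plays both the "Human" and the "AI" role in turn, seeded with an initial question, until a turn budget is reached. The function name `self_chat` and the pluggable `chat_fn` callable are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a self-chat data collection loop: a single LLM
# (e.g., ChatGPT) alternates between the "Human" and "AI" roles.
# `chat_fn(prompt, next_role)` stands in for the real model call.

def self_chat(seed_question, chat_fn, max_turns=4):
    """Generate a multi-turn dialogue by letting one model talk to itself."""
    transcript = [("Human", seed_question)]
    for _ in range(max_turns - 1):
        # Render the conversation so far into a single prompt.
        prompt = "\n".join(f"[{role}]: {text}" for role, text in transcript)
        # Alternate roles: the same model answers as the other speaker.
        next_role = "AI" if transcript[-1][0] == "Human" else "Human"
        reply = chat_fn(prompt, next_role)
        transcript.append((next_role, reply))
    return transcript

# Usage with a trivial stand-in model:
demo = self_chat(
    "How do I sort a list in Python?",
    chat_fn=lambda prompt, role: f"({role} reply, saw {len(prompt)} chars)",
    max_turns=4,
)
```

In the paper's pipeline, transcripts collected this way would then serve as supervised data for parameter-efficient tuning (e.g., LoRA-style adapters) of LLaMA.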
Anthology ID:
2023.emnlp-main.385
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6268–6278
URL:
https://aclanthology.org/2023.emnlp-main.385
DOI:
10.18653/v1/2023.emnlp-main.385
Cite (ACL):
Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023. Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6268–6278, Singapore. Association for Computational Linguistics.
Cite (Informal):
Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data (Xu et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.385.pdf
Video:
https://aclanthology.org/2023.emnlp-main.385.mp4