Dr ChatGPT tell me what I want to hear: How different prompts impact health answer correctness

Bevan Koopman, Guido Zuccon


Abstract
This paper investigates the significant impact different prompts have on the behaviour of ChatGPT when used for health information seeking. As people increasingly rely on generative large language models (LLMs) like ChatGPT, it is critical to understand model behaviour under different conditions, especially in domains where incorrect answers can have serious consequences, such as health. Using the TREC Misinformation dataset, we empirically evaluate ChatGPT, showing not only its effectiveness but also revealing that knowledge passed in the prompt can bias the model to the detriment of answer correctness. We show this occurs both in retrieve-then-generate pipelines and as a result of how a user phrases their question, as well as the question type. This work has important implications for the development of more robust and transparent question-answering systems based on generative large language models. Prompts, raw result files and manual analysis are made publicly available at https://github.com/ielab/drchatgpt-health_prompting.
Anthology ID:
2023.emnlp-main.928
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15012–15022
URL:
https://aclanthology.org/2023.emnlp-main.928
DOI:
10.18653/v1/2023.emnlp-main.928
Cite (ACL):
Bevan Koopman and Guido Zuccon. 2023. Dr ChatGPT tell me what I want to hear: How different prompts impact health answer correctness. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15012–15022, Singapore. Association for Computational Linguistics.
Cite (Informal):
Dr ChatGPT tell me what I want to hear: How different prompts impact health answer correctness (Koopman & Zuccon, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.928.pdf