Large Language Models respond to Influence like Humans

Lewis Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly Mai, Maria Do Mar Vau, Matthew Caldwell, Augustine Mavor-Parker


Abstract
Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence – the Illusory Truth Effect (ITE) – in which earlier exposure to a statement boosts its rating in a later truthfulness test. Analysis of newly collected data from human and LLM-simulated subjects (1000 of each) showed the same pattern of effects in both populations, although with greater per-statement variability for the LLM. The second study tested a specific mode of influence – populist framing of news to increase its persuasiveness and political mobilization. Newly collected data from simulated subjects was compared to previously published data from a 15-country experiment on 7286 human participants. Several effects from the human study were replicated in the simulated study, including ones that had surprised the authors of the human study by contradicting their theoretical expectations; however, some significant relationships found in the human data were not present in the LLM data. Together, the two studies support the view that LLMs have the potential to act as models of the effect of influence.
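To make the first study's paradigm concrete, below is a minimal sketch of how an LLM-simulated subject in an Illusory Truth Effect trial might be queried, assuming an OpenAI-style chat API. The model name, prompts, rating scale, and exposure task are illustrative placeholders, not the authors' actual protocol (which is specified in the paper itself).

```python
# Hypothetical ITE trial with an LLM-simulated subject. The prompts,
# the 1-6 rating scale, and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def rate_truth(statement: str, exposed: bool) -> str:
    """Return the simulated subject's truthfulness rating for a statement,
    optionally after an earlier exposure phase."""
    messages = []
    if exposed:
        # Exposure phase: the subject encounters the statement in an
        # unrelated task before the truthfulness test.
        messages += [
            {"role": "user",
             "content": "Rate how interesting this statement is, from 1 "
                        f"(boring) to 6 (interesting): {statement}"},
            {"role": "assistant", "content": "4"},  # stand-in exposure turn
        ]
    # Test phase: the truthfulness rating itself.
    messages.append(
        {"role": "user",
         "content": "Rate how true this statement is, from 1 (definitely "
                    "false) to 6 (definitely true). Answer with a single "
                    f"number only: {statement}"}
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content.strip()


statement = "The capybara is the largest living rodent."
print("unexposed rating:", rate_truth(statement, exposed=False))
print("exposed rating:  ", rate_truth(statement, exposed=True))
```

Comparing ratings across many statements and many simulated subjects, with and without the exposure turn, would reproduce the structure of the ITE comparison described in the abstract.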
Anthology ID: 2023.sicon-1.3
Volume: Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023)
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Kushal Chawla, Weiyan Shi
Venue: SICon
Publisher: Association for Computational Linguistics
Pages: 15–24
URL: https://aclanthology.org/2023.sicon-1.3
DOI: 10.18653/v1/2023.sicon-1.3
Cite (ACL): Lewis Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly Mai, Maria Do Mar Vau, Matthew Caldwell, and Augustine Mavor-Parker. 2023. Large Language Models respond to Influence like Humans. In Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023), pages 15–24, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Large Language Models respond to Influence like Humans (Griffin et al., SICon 2023)
PDF: https://aclanthology.org/2023.sicon-1.3.pdf