Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal

Yan Cong, Emmanuele Chersoni, Yu-yin Hsu, Alessandro Lenci


Abstract
In psycholinguistics, semantic attraction is a sentence-processing phenomenon in which a given argument violates the selectional requirements of a verb, yet comprehenders do not perceive the violation because the argument is attracted to another noun in the same sentence that is syntactically unrelated to the verb but semantically sound. In our study, we use autoregressive language models to compute sentence-level and target phrase-level Surprisal scores on a psycholinguistic dataset of semantic attraction. Our results show that the models are sensitive to semantic attraction, which leads to reduced Surprisal scores, although none of them perfectly matches the human behavioral pattern.
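
To make the measure concrete: Surprisal is the negative log-probability that an autoregressive language model assigns to a token given its left context, and sentence- or phrase-level scores are aggregates of these token-level values. Below is a minimal sketch of such a computation in Python, assuming a Hugging Face GPT-2 model purely as a stand-in; the paper's actual models, tokenization, and aggregation choices are not specified on this page.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence):
    # Surprisal of token w_i is -log2 P(w_i | w_1 .. w_{i-1}).
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    # The prediction for position i comes from the logits at position i-1,
    # so the first token gets no surprisal under a left-to-right LM.
    scores = log_probs[0, :-1].gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:])
    return list(zip(tokens, (-scores / math.log(2)).tolist()))

# Sentence-level Surprisal: aggregate (e.g., sum) over all tokens.
# Phrase-level: aggregate over just the tokens of the target phrase
# (here an invented example sentence, not an item from the dataset).
pairs = token_surprisals("The delicious meal was devoured by the children.")
print(sum(s for _, s in pairs))

Restricting the sum to the tokens of the critical region would yield the target phrase-level score described in the abstract.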
Anthology ID: 2023.starsem-1.13
Volume: Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Alexis Palmer, Jose Camacho-Collados
Venue: *SEM
SIG: SIGLEX
Publisher: Association for Computational Linguistics
Pages: 141–148
URL: https://aclanthology.org/2023.starsem-1.13
DOI: 10.18653/v1/2023.starsem-1.13
Cite (ACL): Yan Cong, Emmanuele Chersoni, Yu-yin Hsu, and Alessandro Lenci. 2023. Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 141–148, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal (Cong et al., *SEM 2023)
PDF: https://aclanthology.org/2023.starsem-1.13.pdf