The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages

Chiyu Zhang, Khai Doan, Qisheng Liao, Muhammad Abdul-Mageed


Abstract
Instruction-tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance on a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly from SM not being adequately represented in any existing benchmarks. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families and representing 16 writing scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction-tuned LLMs still struggle to understand SM across various languages, performing close to a random baseline in some cases. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific fine-tuned models, with a gap of 12.19 in SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW
Anthology ID:
2023.emnlp-main.160
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2630–2662
URL:
https://aclanthology.org/2023.emnlp-main.160
DOI:
10.18653/v1/2023.emnlp-main.160
Cite (ACL):
Chiyu Zhang, Khai Doan, Qisheng Liao, and Muhammad Abdul-Mageed. 2023. The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2630–2662, Singapore. Association for Computational Linguistics.
Cite (Informal):
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages (Zhang et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.160.pdf
Video:
https://aclanthology.org/2023.emnlp-main.160.mp4