Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding

Lixing Zhu, Runcong Zhao, Lin Gui, Yulan He


Abstract
Narrative understanding involves capturing the author’s cognitive processes, providing insights into their knowledge, intentions, beliefs, and desires. Although large language models (LLMs) excel in generating grammatically coherent text, their ability to comprehend the author’s thoughts remains uncertain. This limitation hinders the practical applications of narrative understanding. In this paper, we conduct a comprehensive survey of narrative understanding tasks, thoroughly examining their key features, definitions, taxonomy, associated datasets, training objectives, evaluation metrics, and limitations. Furthermore, we explore the potential of expanding the capabilities of modularized LLMs to address novel narrative understanding tasks. By framing narrative understanding as the retrieval of the author’s imaginative cues that outline the narrative structure, our study introduces a fresh perspective on enhancing narrative comprehension.
Anthology ID: 2023.findings-emnlp.677
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 10098–10121
URL: https://aclanthology.org/2023.findings-emnlp.677
DOI: 10.18653/v1/2023.findings-emnlp.677
Cite (ACL): Lixing Zhu, Runcong Zhao, Lin Gui, and Yulan He. 2023. Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10098–10121, Singapore. Association for Computational Linguistics.
Cite (Informal): Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding (Zhu et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.677.pdf