PSST! Prosodic Speech Segmentation with Transformers

Nathan Roll, Calbert Graham, Simon Todd
Abstract
We develop and probe a model for detecting the boundaries of prosodic chunks in untranscribed conversational English speech. The model is obtained by fine-tuning a Transformer-based speech-to-text (STT) model to integrate the identification of Intonation Unit (IU) boundaries with the STT task. The model shows robust performance, both on held-out data and on out-of-distribution data representing different dialects and transcription protocols. By evaluating the model on degraded speech data, and comparing it with alternatives, we establish that it relies heavily on lexico-syntactic information inferred from audio, and not solely on acoustic information typically understood to cue prosodic structure. We release our model as both a transcription tool and a baseline for further improvements in prosodic segmentation.
Anthology ID:
2023.conll-1.31
Volume:
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Jing Jiang, David Reitter, Shumin Deng
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
476–487
URL:
https://aclanthology.org/2023.conll-1.31
DOI:
10.18653/v1/2023.conll-1.31
Cite (ACL):
Nathan Roll, Calbert Graham, and Simon Todd. 2023. PSST! Prosodic Speech Segmentation with Transformers. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 476–487, Singapore. Association for Computational Linguistics.
Cite (Informal):
PSST! Prosodic Speech Segmentation with Transformers (Roll et al., CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-1.31.pdf
Software:
2023.conll-1.31.Software.zip