S. Magalí López Cortez


2023

The distribution of discourse relations within and across turns in spontaneous conversation
S. Magalí López Cortez | Cassandra L. Jacobs
Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)

Time pressure and topic negotiation may impose constraints on how people leverage discourse relations (DRs) in spontaneous conversational contexts. In this work, we adapt a system of DRs for written language to spontaneous dialogue using crowdsourced annotations from novice annotators. We then test whether discourse relations are used differently across several types of multi-utterance contexts. We compare the patterns of DR annotation within and across speakers and within and across turns. Ultimately, we find that different discourse contexts produce distinct distributions of discourse relations, with single-turn annotations creating the most uncertainty for annotators. Additionally, we find that the discourse relation annotations are of sufficient quality that they can be predicted from embeddings of discourse units.
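The closing claim, that annotated DR labels can be predicted from embeddings of discourse units, corresponds to a standard probing setup. Below is a minimal sketch of such a probe, assuming a generic sentence encoder and a toy label set; the encoder name, labels, and data are placeholders for illustration, not the pipeline used in the paper.

```python
# Sketch: train a simple classifier to recover discourse relation (DR) labels
# from embeddings of discourse units. Encoder, labels, and data are placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy annotated discourse units (text, DR label); the real input would be the
# crowdsourced dialogue annotations described in the abstract.
data = [
    ("I missed the bus", "Explanation"),
    ("so I was late to the meeting", "Result"),
    ("but the meeting started late anyway", "Contrast"),
    ("and then we went for coffee", "Continuation"),
] * 10  # repeated only so the toy train/test split is non-trivial

texts, labels = zip(*data)
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder
X = encoder.encode(list(texts))

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```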

Incorporating Annotator Uncertainty into Representations of Discourse Relations
S. Magalí López Cortez | Cassandra L. Jacobs
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Annotation of discourse relations is known to be a difficult task, especially for non-expert annotators. In this paper, we investigate novice annotators’ uncertainty when annotating discourse relations in spoken conversational data. We find that dialogue context (single turn, pair of turns within speaker, and pair of turns across speakers) is a significant predictor of confidence scores. We compute distributed representations of discourse relations from co-occurrence statistics that incorporate information about confidence scores and dialogue context. We perform a hierarchical clustering analysis using these representations and show that weighting discourse relation representations with information about confidence and dialogue context coherently models our annotators’ uncertainty about discourse relation labels.
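To make the representation step concrete, the following is a minimal sketch of one way to build confidence-weighted co-occurrence representations of DR labels over dialogue contexts and cluster them hierarchically. The relation labels, context types, toy annotations, and weighting scheme are illustrative assumptions, not the exact procedure reported in the paper.

```python
# Sketch: confidence-weighted co-occurrence representations of DR labels
# over dialogue contexts, followed by agglomerative clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

relations = ["Contrast", "Explanation", "Result", "Acknowledgement", "Question-Answer"]
contexts = ["single_turn", "within_speaker", "across_speakers"]

# Toy annotations: (relation, dialogue context, annotator confidence in [0, 1]).
annotations = [
    ("Contrast", "within_speaker", 0.9),
    ("Contrast", "across_speakers", 0.7),
    ("Explanation", "single_turn", 0.4),
    ("Result", "within_speaker", 0.8),
    ("Acknowledgement", "across_speakers", 0.95),
    ("Question-Answer", "across_speakers", 0.85),
    ("Explanation", "within_speaker", 0.6),
]

# Each relation is represented by its confidence-weighted counts over contexts.
rep = np.zeros((len(relations), len(contexts)))
for rel, ctx, conf in annotations:
    rep[relations.index(rel), contexts.index(ctx)] += conf

# Row-normalize so relations are compared by distribution, not raw frequency.
rep = rep / np.maximum(rep.sum(axis=1, keepdims=True), 1e-9)

# Hierarchical clustering over the relation representations.
Z = linkage(pdist(rep, metric="cosine"), method="average")
print(dict(zip(relations, fcluster(Z, t=2, criterion="maxclust"))))
```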