Rediet Abebe


2023

When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks
Eve Fleisig | Rediet Abebe | Dan Klein
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Though majority vote among annotators is typically used for ground truth labels in machine learning, annotator disagreement in tasks such as hate speech detection may reflect systematic differences in opinion across groups, not noise. Thus, a crucial problem in hate speech detection is determining if a statement is offensive to the demographic group that it targets, when that group may be a small fraction of the annotator pool. We construct a model that predicts individual annotator ratings on potentially offensive text and combines this information with the predicted target group of the text to predict the ratings of target group members. We show gains across a range of metrics, including raising performance over the baseline by 22% at predicting individual annotators’ ratings and by 33% at predicting variance among annotators, which provides a metric for model uncertainty downstream. We find that annotators’ ratings can be predicted using their demographic information as well as opinions on online content, and that non-invasive questions on annotators’ online experiences minimize the need to collect demographic information when predicting annotators’ opinions.
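The abstract describes the approach only at a high level: predict a rating per annotator, then combine those predictions with the text's predicted target group and report the variance across annotators as an uncertainty signal. The sketch below is a minimal, hypothetical illustration of that aggregation step, not the authors' implementation; all features, weights, and the group membership flags are placeholders.

```python
# Hypothetical sketch (not the paper's model): score each annotator from text
# features plus annotator features, then summarize ratings for annotators in the
# text's (predicted) target group and use variance across annotators as an
# uncertainty metric.
import numpy as np

rng = np.random.default_rng(0)

def predict_rating(text_feats, annotator_feats, W_text, W_annot):
    """Toy per-annotator rating: a linear score squashed to [0, 1]."""
    score = text_feats @ W_text + annotator_feats @ W_annot
    return 1.0 / (1.0 + np.exp(-score))

# Toy data: one text, five annotators (all values are placeholders).
text_feats = rng.normal(size=16)
annotator_feats = rng.normal(size=(5, 8))
in_target_group = np.array([True, False, True, False, False])  # assumed predicted elsewhere

# Placeholder weights standing in for a trained model.
W_text = rng.normal(size=16) * 0.1
W_annot = rng.normal(size=8) * 0.1

ratings = np.array([predict_rating(text_feats, a, W_text, W_annot)
                    for a in annotator_feats])

target_group_rating = ratings[in_target_group].mean()  # offensiveness to the targeted group
uncertainty = ratings.var()                            # disagreement as downstream uncertainty

print(f"per-annotator ratings: {np.round(ratings, 3)}")
print(f"target-group mean rating: {target_group_rating:.3f}")
print(f"variance across annotators (uncertainty): {uncertainty:.3f}")
```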