Comparing Approaches to Language Understanding for Human-Robot Dialogue: An Error Taxonomy and Analysis

Ada Tur, David Traum


Abstract
In this paper, we compare two different approaches to language understanding for a human-robot interaction domain in which a human commander gives navigation instructions to a robot. We contrast a relevance-based classifier with a GPT-2 model, using about 2000 input-output examples as training data. With this level of training data, the relevance-based model outperforms the GPT-2-based model, 79% to 8%. We also present a taxonomy of the types of errors made by each model, indicating that the two models have somewhat different strengths and weaknesses; we therefore also examine the potential for a combined model.
Anthology ID:
2022.lrec-1.625
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
5813–5820
URL:
https://aclanthology.org/2022.lrec-1.625
Cite (ACL):
Ada Tur and David Traum. 2022. Comparing Approaches to Language Understanding for Human-Robot Dialogue: An Error Taxonomy and Analysis. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5813–5820, Marseille, France. European Language Resources Association.
Cite (Informal):
Comparing Approaches to Language Understanding for Human-Robot Dialogue: An Error Taxonomy and Analysis (Tur & Traum, LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.625.pdf
Data
R2R