RobustLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners

Soumya Sanyal, Zeyi Liao, Xiang Ren


Abstract
Transformers have been shown to perform deductive reasoning on inputs containing rules and statements written in natural English. However, it is unclear whether these models follow rigorous logical reasoning to arrive at their predictions or rely on spurious correlation patterns when making decisions. A strong deductive reasoning model should consistently understand the semantics of different logical operators. To this end, we present RobustLR, a diagnostic benchmark that evaluates the robustness of language models to minimal logical edits in the inputs and to different logical equivalence conditions. In our experiments with RoBERTa, T5, and GPT-3, we show that models trained on deductive reasoning datasets do not perform consistently on the RobustLR test set, demonstrating that they are not robust to our proposed logical perturbations. Further, we observe that the models find it especially hard to learn logical negation operators. Our results demonstrate the shortcomings of current language models in logical reasoning and call for the development of better inductive biases to teach logical semantics to language models. All datasets and the code base have been made publicly available.
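To make the idea of a logical-equivalence perturbation concrete, here is a minimal, hypothetical sketch (not the authors' code or data format): it rewrites a natural-language rule of the form "If X then Y." into its contrapositive "If not Y then not X.". Since the two rules are logically equivalent, a robust deductive reasoner should make the same prediction on either version of the input.

```python
def negate(clause: str) -> str:
    """Toggle a simple 'not' prefix on a clause (illustrative only)."""
    return clause[4:] if clause.startswith("not ") else "not " + clause


def contrapositive(rule: str) -> str:
    """Rewrite 'If X then Y.' as its logical equivalent 'If not Y then not X.'."""
    assert rule.startswith("If ") and " then " in rule
    antecedent, consequent = rule[3:].rstrip(".").split(" then ")
    return f"If {negate(consequent)} then {negate(antecedent)}."


# A robustness check would compare model(original) against model(perturbed):
original = "If the cat is red then the cat is nice."
perturbed = contrapositive(original)
# perturbed == "If not the cat is nice then not the cat is red."
```

A consistency metric then counts how often the model's prediction stays the same across such equivalence-preserving edits (and changes appropriately under meaning-altering edits such as negating a rule's consequent).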
Anthology ID:
2022.emnlp-main.653
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9614–9631
URL:
https://aclanthology.org/2022.emnlp-main.653
DOI:
10.18653/v1/2022.emnlp-main.653
Cite (ACL):
Soumya Sanyal, Zeyi Liao, and Xiang Ren. 2022. RobustLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9614–9631, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
RobustLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners (Sanyal et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.653.pdf