Supporting Context Monotonicity Abstractions in Neural NLI Models

Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, André Freitas


Abstract
Natural language contexts display logical regularities with respect to substitutions of related concepts: these are captured in a functional order-theoretic property called monotonicity. For a certain class of NLI problems in which the entailment label depends only on the context monotonicity and the relation between the substituted concepts, we build on previous techniques for improving NLI model performance on these problems, since consistent performance across both upward- and downward-monotone contexts remains difficult to attain even for state-of-the-art models. To this end, we reframe the problem of context monotonicity classification so that it is compatible with transformer-based pre-trained NLI models, and we add this task to the training pipeline. Furthermore, we introduce a sound and complete simplified monotonicity logic formalism that describes our treatment of contexts as abstract units. Using the notions in this formalism, we adapt targeted challenge sets to investigate whether an intermediate context monotonicity classification task can aid NLI models' performance on examples exhibiting monotonicity reasoning.
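As an illustration of the intermediate task the abstract describes, the sketch below shows one way a context (a sentence with a gap, marked here by a placeholder token) could be encoded as an ordinary sentence and labelled as upward- or downward-monotone by a transformer classifier. The backbone model, the "[t]" placeholder convention, and the binary label scheme are illustrative assumptions, not the authors' exact setup, which is detailed in the paper itself.

    # Hypothetical sketch: context monotonicity classification with a
    # transformer encoder. Assumptions (not from the paper): "roberta-base"
    # as the backbone, "[t]" as the gap placeholder, and a binary
    # upward/downward label scheme. The classification head is untrained
    # here; it would need fine-tuning on monotonicity-labelled contexts
    # (e.g., derived from a resource such as HELP).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "roberta-base"            # assumed backbone
    LABELS = {0: "upward", 1: "downward"}  # assumed label scheme

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=len(LABELS)
    )
    model.eval()

    def classify_context(context: str) -> str:
        """Label a gapped context, e.g. 'I did not eat any [t].' (downward)."""
        inputs = tokenizer(context, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        return LABELS[int(logits.argmax(dim=-1))]

    # A negated context is downward monotone: replacing '[t]' by a more
    # specific concept preserves truth ('any food' -> 'any fruit').
    print(classify_context("I did not eat any [t] for breakfast."))

Because the gapped context is encoded as an ordinary input sentence, the same pre-trained encoder used for NLI can be reused for this classification task, which is what makes adding it to an NLI training pipeline straightforward.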
Anthology ID:
2021.naloma-1.6
Volume:
Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)
Month:
June
Year:
2021
Address:
Groningen, the Netherlands (online)
Editors:
Aikaterini-Lida Kalouli, Lawrence S. Moss
Venue:
NALOMA
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
41–50
URL:
https://aclanthology.org/2021.naloma-1.6
Cite (ACL):
Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, and André Freitas. 2021. Supporting Context Monotonicity Abstractions in Neural NLI Models. In Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA), pages 41–50, Groningen, the Netherlands (online). Association for Computational Linguistics.
Cite (Informal):
Supporting Context Monotonicity Abstractions in Neural NLI Models (Rozanova et al., NALOMA 2021)
PDF:
https://aclanthology.org/2021.naloma-1.6.pdf
Data:
HELP, MultiNLI, SNLI