Reza Ghaeini


2020

Relation Extraction with Explanation
Hamed Shahbazi | Xiaoli Fern | Reza Ghaeini | Prasad Tadepalli
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy, but little is known about the explainability of these models. In this work we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of explanations afforded by relation extraction models. We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation quality. We also propose to automatically generate “distractor” sentences to augment the bags and train the model to ignore the distractors. Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability.
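To make the bag-attention-plus-distractor idea concrete, below is a minimal PyTorch sketch (illustrative only, not the paper's code; the module names, dimensions, and the form of the distractor penalty are assumptions): a relation query scores each sentence encoding in a bag, and the attention weight assigned to an appended distractor sentence is penalized directly.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BagAttention(nn.Module):
    """Scores each sentence in a bag against a relation query vector and
    aggregates the bag as an attention-weighted sum of sentence encodings."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_query = nn.Embedding(num_relations, dim)
        self.classifier = nn.Linear(dim, num_relations)

    def forward(self, sent_reprs, rel_id):
        # sent_reprs: (num_sents, dim) encodings of one bag's sentences
        q = self.rel_query(rel_id)                  # (dim,)
        weights = F.softmax(sent_reprs @ q, dim=0)  # importance per sentence
        bag = weights @ sent_reprs                  # (dim,) bag representation
        return self.classifier(bag), weights

# Training step with an appended distractor: besides the usual relation
# loss, penalize any attention mass the model puts on the sentence that
# is known to be irrelevant (here, the last one in the bag).
model = BagAttention(dim=64, num_relations=53)
bag = torch.randn(4, 64)  # 3 real sentence encodings + 1 distractor
logits, weights = model(bag, torch.tensor(7))
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([7])) + weights[-1]
loss.backward()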

2019

Saliency Learning: Teaching the Model Where to Pay Attention
Reza Ghaeini | Xiaoli Fern | Hamed Shahbazi | Prasad Tadepalli
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Deep learning has emerged as a compelling solution to many NLP tasks with remarkable performance. However, due to their opacity, such models are hard to interpret and trust. Recent work on explaining deep models has introduced approaches that provide insight into a model’s behaviour and predictions, which is helpful for assessing the reliability of those predictions. However, such methods do not improve the model’s reliability. In this paper, we aim to teach the model to make the right prediction for the right reason by providing explanation training and ensuring the alignment of the model’s explanation with the ground truth explanation. Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results than traditionally trained models.
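Below is a minimal PyTorch sketch of this kind of explanation training, assuming a differentiable model that maps one sequence of token embeddings to class logits and a binary annotation marking the evidence tokens; the exact loss used in the paper may differ.

import torch
import torch.nn.functional as F

def saliency_loss(model, embeds, target, annotation):
    """Explanation training, sketched: the usual task loss plus a hinge
    penalty that pushes gradient-based saliency to be non-negative on
    tokens annotated as evidence.
    embeds:     (seq_len, dim) token embeddings with requires_grad=True
    target:     scalar LongTensor holding the gold class
    annotation: (seq_len,) FloatTensor, 1.0 where a token is evidence
    """
    logits = model(embeds)  # assumed output shape: (num_classes,)
    task_loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
    # Saliency of each token: gradient of the gold-class score w.r.t. its
    # embedding, summed over the embedding dimensions.
    grads, = torch.autograd.grad(logits[target], embeds, create_graph=True)
    saliency = grads.sum(dim=1)
    penalty = F.relu(-annotation * saliency).sum()
    return task_loss + penalty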

2018

Joint Neural Entity Disambiguation with Output Space Search
Hamed Shahbazi | Xiaoli Fern | Reza Ghaeini | Chao Ma | Rasha Mohammad Obeidat | Prasad Tadepalli
Proceedings of the 27th International Conference on Computational Linguistics

In this paper, we present a novel model for entity disambiguation that combines both local contextual information and global evidence through Limited Discrepancy Search (LDS). Given an input document, we start from a complete solution constructed by a local model and conduct a search in the space of possible corrections to improve the local solution from a global viewpoint. Our search uses a heuristic function to focus on the least confident local decisions and a pruning function that scores global solutions based on their local fitness and the global coherence among the predicted entities. Experimental results on the CoNLL 2003 and TAC 2010 benchmarks verify the effectiveness of our model.
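The search itself fits in a few lines of Python. The following is an illustrative Limited Discrepancy Search over corrections to a local solution; the function signature and scoring interface are assumptions, not the authors' implementation.

def lds_correct(local_solution, confidences, candidates, score, max_d=2):
    """Illustrative Limited Discrepancy Search over corrections.
    Starting from the local model's complete solution, explore solutions
    that disagree with it in at most max_d positions, visiting the least
    confident local decisions first, and keep the best-scoring solution.
    local_solution: list of locally predicted entities, one per mention
    confidences:    local confidence for each prediction
    candidates:     list of alternative candidate entities per mention
    score:          global scoring function over a complete solution
    """
    order = sorted(range(len(local_solution)), key=lambda i: confidences[i])
    best, best_score = list(local_solution), score(local_solution)
    # Each stack entry: (discrepancies used, next index in `order`, solution).
    stack = [(0, 0, list(local_solution))]
    while stack:
        d, k, sol = stack.pop()
        if k == len(order):
            continue
        i = order[k]
        stack.append((d, k + 1, sol))       # keep the local decision at i
        if d == max_d:
            continue
        for alt in candidates[i]:           # or spend one discrepancy on i
            if alt == sol[i]:
                continue
            new = list(sol)
            new[i] = alt
            new_score = score(new)
            if new_score > best_score:
                best, best_score = new, new_score
            stack.append((d + 1, k + 1, new))
    return best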

Dependent Gated Reading for Cloze-Style Question Answering
Reza Ghaeini | Xiaoli Fern | Hamed Shahbazi | Prasad Tadepalli
Proceedings of the 27th International Conference on Computational Linguistics

We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children’s Book Test (CBT-NE and CBT-CN) and Who Did What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model through ablation and attention studies.
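A toy PyTorch sketch of query-dependent gated reading follows (illustrative only, not the DGR architecture itself): the query's final BiGRU states produce a gate that modulates each document token before the document is read.

import torch
import torch.nn as nn

class DependentGatedReader(nn.Module):
    """Toy dependent gated reading: the query encoding produces a gate
    that modulates every document token before the document is read by a
    bidirectional GRU, so document reading depends on the query."""
    def __init__(self, dim):
        super().__init__()
        self.query_enc = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)
        self.doc_enc = nn.GRU(dim, dim, bidirectional=True, batch_first=True)

    def forward(self, doc, query):
        # doc: (batch, doc_len, dim), query: (batch, query_len, dim)
        _, q_h = self.query_enc(query)                # (2, batch, dim)
        q = torch.cat([q_h[0], q_h[1]], dim=-1)       # final states, both directions
        g = torch.sigmoid(self.gate(q)).unsqueeze(1)  # (batch, 1, dim) gate
        return self.doc_enc(doc * g)[0]               # query-gated document reading

reader = DependentGatedReader(dim=32)
encoded = reader(torch.randn(2, 50, 32), torch.randn(2, 8, 32))  # (2, 50, 64)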

Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference
Reza Ghaeini | Xiaoli Fern | Prasad Tadepalli
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Deep learning models have achieved remarkable success in natural language inference (NLI) tasks. While these models are widely explored, they are hard to interpret and it is often unclear how and why they actually work. In this paper, we take a step toward explaining such deep-learning-based models through a case study on a popular neural model for NLI. In particular, we propose to interpret the intermediate layers of NLI models by visualizing the saliency of attention and LSTM gating signals. We present several examples for which our methods reveal interesting insights and identify the critical information contributing to the model’s decisions.
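The following PyTorch sketch shows one way such a saliency can be computed for attention weights (illustrative; the model interface is an assumption): multiply each attention weight by the gradient of the predicted-class score with respect to that weight.

import torch

def attention_saliency(model, premise, hypothesis):
    """Saliency of attention: each attention weight multiplied by the
    gradient of the predicted-class score with respect to that weight,
    giving a signed estimate of its contribution to the decision.
    Assumes a single-example batch and a model that returns (logits,
    attention) with the attention tensor still in the autograd graph."""
    logits, attention = model(premise, hypothesis)
    attention.retain_grad()   # attention is a non-leaf tensor
    logits.max().backward()   # score of the predicted class
    return (attention * attention.grad).detach()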

DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference
Reza Ghaeini | Sadid A. Hasan | Vivek Datla | Joey Liu | Kathy Lee | Ashequl Qadir | Yuan Ling | Aaditya Prakash | Xiaoli Fern | Oladimeji Farri
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

We present a novel deep learning architecture to address the natural language inference (NLI) task. Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis. Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference. We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves the final predictions. Finally, we demonstrate how the results can be improved further with an additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the best single-model and ensemble results, achieving new state-of-the-art scores on the Stanford NLI dataset.
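A toy PyTorch sketch of the dependent reading idea follows (illustrative only, not the full DR-BiLSTM): the hypothesis BiLSTM starts from the final state produced by reading the premise, and vice versa, so each encoding is conditioned on the other sentence.

import torch
import torch.nn as nn

class DependentReader(nn.Module):
    """Toy dependent reading: the hypothesis is encoded by a BiLSTM whose
    initial state is the final state left by reading the premise, so the
    hypothesis encoding is conditioned on the premise, and symmetrically
    for the premise."""
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)

    def forward(self, premise, hypothesis):
        _, state = self.lstm(premise)             # read the premise first
        h_cond, _ = self.lstm(hypothesis, state)  # premise-aware hypothesis
        _, state = self.lstm(hypothesis)          # and the other direction
        p_cond, _ = self.lstm(premise, state)     # hypothesis-aware premise
        return p_cond, h_cond

reader = DependentReader(dim=32)
p, h = reader(torch.randn(2, 20, 32), torch.randn(2, 10, 32))  # (2, *, 64)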

2016

Event Nugget Detection with Forward-Backward Recurrent Neural Networks
Reza Ghaeini | Xiaoli Fern | Liang Huang | Prasad Tadepalli
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)