Saurabh Srivastava


2023

MailEx: Email Event and Argument Extraction
Saurabh Srivastava | Gaurav Singh | Shou Matsumoto | Ali Raz | Paulo Costa | Joshua Poore | Ziyu Yao
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

In this work, we present the first dataset, MailEx, for performing event extraction from conversational email threads. To this end, we first propose a new taxonomy covering 10 event types and 76 arguments in the email domain. Our final dataset includes 1.5K email threads and ~4K emails, which are annotated with a total of ~8K event instances. To understand the task challenges, we conducted a series of experiments comparing three types of approaches, i.e., fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot in-context learning. Our results showed that the task of email event extraction is far from being solved, owing to challenges such as extracting non-continuous, shared trigger spans, extracting non-named-entity arguments, and modeling the email conversational history. Our work thus motivates further investigation of this domain-specific event extraction task.
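
As a rough illustration of the few-shot in-context learning baseline compared in the paper, the sketch below assembles a prompt from a handful of annotated demonstrations; the event types, argument names, and output format shown are hypothetical stand-ins, not the MailEx taxonomy or the authors' actual prompt.

```python
# Minimal sketch of a few-shot in-context-learning prompt for email event
# extraction. The event types, demonstrations, and output format below are
# hypothetical illustrations, not the MailEx taxonomy or the authors' prompt.

DEMONSTRATIONS = [
    {
        "email": "Hi team, let's meet Friday at 3pm to review the budget.",
        "events": "Event: Request-Meeting | trigger: 'let's meet' | time: 'Friday at 3pm' | topic: 'review the budget'",
    },
    {
        "email": "Attached is the signed contract you asked for.",
        "events": "Event: Deliver-Data | trigger: 'Attached is' | item: 'the signed contract'",
    },
]

def build_prompt(target_email: str) -> str:
    """Concatenate task instructions, demonstrations, and the target email."""
    parts = ["Extract all events and their arguments from each email."]
    for demo in DEMONSTRATIONS:
        parts.append(f"Email: {demo['email']}\nEvents: {demo['events']}")
    parts.append(f"Email: {target_email}\nEvents:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Could you send me the Q3 sales report by Monday?"))
```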

Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing
Hao Yan | Saurabh Srivastava | Yintao Tai | Sida I. Wang | Wen-tau Yih | Ziyu Yao
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Interactive semantic parsing based on natural language (NL) feedback, where users provide feedback to correct parser mistakes, has emerged as a more practical scenario than traditional one-shot semantic parsing. However, prior work has relied heavily on human-annotated feedback data to train the interactive semantic parser, which is prohibitively expensive and not scalable. In this work, we propose the new task of simulating NL feedback for interactive semantic parsing. We accompany the task with a novel feedback evaluator, specifically designed to assess the quality of the simulated feedback, which we use to select the best feedback simulator from our proposed variants. On a text-to-SQL dataset, we show that our feedback simulator can generate high-quality NL feedback that boosts the error correction ability of a specific parser. In low-data settings, our feedback simulator can help achieve error correction performance comparable to that obtained with the costly, full set of human annotations.

2021

Complex Question Answering on knowledge graphs using machine translation and multi-task learning
Saurabh Srivastava | Mayur Patidar | Sudip Chowdhury | Puneet Agarwal | Indrajit Bhattacharya | Gautam Shroff
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Question answering (QA) over a knowledge graph (KG) is the task of answering a natural language (NL) query using the information stored in the KG. In a real-world industrial setting, this involves addressing multiple challenges, including entity linking, multi-hop reasoning over the KG, etc. Traditional approaches handle these challenges in a modularized, sequential manner, where errors in one module accumulate in downstream modules. Often these challenges are inter-related, and the solutions to them can reinforce each other when handled simultaneously in an end-to-end learning setup. To this end, we propose a multi-task, BERT-based Neural Machine Translation (NMT) model to address these challenges. Through experimental analysis, we demonstrate the efficacy of our proposed approach on one publicly available and one proprietary dataset.
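
A minimal sketch of the multi-task idea, assuming a shared encoder feeding both a per-position query-prediction head (standing in for the translation decoder) and a token-tagging head for entity linking, with the two losses summed; the toy Transformer encoder, dimensions, and loss weighting below are illustrative assumptions rather than the authors' exact BERT-based architecture.

```python
# Sketch of joint (multi-task) training for KGQA framed as translation:
# a shared encoder feeds (i) a per-position head predicting KG-query tokens
# and (ii) a token-tagging head for entity linking; the two losses are summed.
# The toy encoder stands in for BERT; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskKGQA(nn.Module):
    def __init__(self, vocab_size=1000, query_vocab=500, n_entity_tags=3, d=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)  # BERT stand-in
        self.query_head = nn.Linear(d, query_vocab)      # per-position KG-query logits
        self.entity_head = nn.Linear(d, n_entity_tags)   # BIO tags for entity linking

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.query_head(h), self.entity_head(h)

model = MultiTaskKGQA()
tokens = torch.randint(0, 1000, (2, 16))      # toy batch of NL questions
query_gold = torch.randint(0, 500, (2, 16))   # toy target KG-query tokens
tag_gold = torch.randint(0, 3, (2, 16))       # toy entity-linking tags
query_logits, tag_logits = model(tokens)
ce = nn.CrossEntropyLoss()
loss = ce(query_logits.transpose(1, 2), query_gold) \
     + 0.5 * ce(tag_logits.transpose(1, 2), tag_gold)   # joint objective
loss.backward()
```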

2020

A Novel Hierarchical BERT Architecture for Sarcasm Detection
Himani Srivastava | Vaibhav Varshney | Surabhi Kumari | Saurabh Srivastava
Proceedings of the Second Workshop on Figurative Language Processing

Online discussion platforms are often flooded with opinions from users across the world on a variety of topics. Many such posts, comments, or utterances are sarcastic in nature, i.e., the actual intent is hidden in the sentence and differs from its literal meaning, making the detection of such utterances challenging without additional context. In this paper, we propose a novel deep learning-based approach to detect whether an utterance is sarcastic or non-sarcastic by utilizing the given contexts in a hierarchical manner. We use datasets from two online discussion platforms, Twitter and Reddit, for our experiments. Experimental and error analysis shows that the hierarchical models can make full use of the conversation history to obtain a better representation of the context and thus, in turn, outperform their sequential counterparts.
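
A minimal sketch of the hierarchical setup described above, assuming each context utterance is first encoded into a vector and a second, utterance-level encoder then runs over those vectors before classification; the LSTM stand-ins for BERT and all sizes are illustrative assumptions, not the paper's architecture.

```python
# Hierarchical context encoding sketch: encode each utterance into a vector,
# then run an utterance-level encoder over the thread before classifying.
# LSTMs stand in for BERT here; all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalSarcasmClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)             # word-level stand-in for BERT
        self.utterance_encoder = nn.LSTM(d, d, batch_first=True)
        self.context_encoder = nn.LSTM(d, d, batch_first=True)
        self.classifier = nn.Linear(d, 2)                    # sarcastic vs. non-sarcastic

    def forward(self, thread):                               # thread: (batch, n_utts, n_words)
        b, n_utts, n_words = thread.shape
        words = self.embed(thread.reshape(b * n_utts, n_words))
        _, (utt_vecs, _) = self.utterance_encoder(words)     # last hidden state per utterance
        utt_vecs = utt_vecs.squeeze(0).reshape(b, n_utts, -1)
        _, (ctx_vec, _) = self.context_encoder(utt_vecs)     # summary of the whole context
        return self.classifier(ctx_vec.squeeze(0))

model = HierarchicalSarcasmClassifier()
thread = torch.randint(0, 1000, (4, 3, 12))  # 4 threads, 3 utterances, 12 tokens each
logits = model(thread)                       # shape: (4, 2)
```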

2019

Detecting Aggression and Toxicity using a Multi Dimension Capsule Network
Saurabh Srivastava | Prerna Khurana
Proceedings of the Third Workshop on Abusive Language Online

In the era of social media, hate speech, trolling, and verbal abuse have become common issues. We present an approach to automatically classify such statements using a new deep learning architecture. Our model comprises a Multi Dimension Capsule Network that generates sentence representations which we use for classification. We further provide an analysis of our model’s interpretation of such statements. We compare our results with state-of-the-art classification algorithms and demonstrate our model’s effectiveness. It is also able to handle comments written in both Hindi and English, as provided in the TRAC dataset. We additionally compare results on Kaggle’s Toxic Comment Classification dataset.

2018

Identifying Aggression and Toxicity in Comments using Capsule Network
Saurabh Srivastava | Prerna Khurana | Vartika Tewari
Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)

Aggression and related activities like trolling and hate speech involve toxic comments in various forms. These have become common on today’s platforms, and websites react by shutting down their comment sections. To tackle this, an algorithmic solution is preferable to human moderation, which is slow and expensive. In this paper, we propose a single-model capsule network with focal loss for this task, suitable for a production environment. Our model achieves competitive results against other strong baseline methods, demonstrating its effectiveness, and focal loss yields significant improvement in settings where class imbalance is a regular issue. Additionally, we show that capsule networks can implicitly handle the need for extensive data preprocessing and data augmentation. We achieve an overall ROC AUC of 98.46 on the Kaggle toxic comment dataset and show that our model beats other architectures by a good margin. As comments tend to be written in more than one language, and transliteration is a common problem, we further show that our model handles this effectively by applying it to the TRAC shared task dataset, which contains comments in code-mixed Hindi-English.
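
For reference, focal loss down-weights well-classified examples so that training concentrates on hard, minority-class ones; a minimal binary-classification sketch follows, with alpha and gamma set to common defaults rather than the paper's settings.

```python
# Minimal focal-loss sketch for binary toxicity labels:
# FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), computed from raw logits.
# alpha and gamma below are common defaults, not necessarily the paper's values.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")  # = -log(p_t)
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.tensor([2.0, -1.5, 0.3])   # toy model scores
targets = torch.tensor([1.0, 0.0, 1.0])   # toy binary labels (toxic = 1)
print(focal_loss(logits, targets))
```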

2017

Experiments with Domain Dependent Dialogue Act Classification using Open-Domain Dialogue Corpora
Swapnil Hingmire | Apoorv Shrivastava | Girish Palshikar | Saurabh Srivastava
Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017)