Rashmi Gangadharaiah


2023

Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems
Sandesh Swamy | Narges Tabari | Chacha Chen | Rashmi Gangadharaiah
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Response generation is one of the critical components in task-oriented dialog systems. Existing studies have shown that large pre-trained language models can be adapted to this task. The typical paradigm for adapting such extremely large language models is fine-tuning on the downstream task, which is not only time-consuming but also requires significant resources and access to fine-tuning data. Prompting (Schick and Schütze, 2020) has been an alternative to fine-tuning in many NLP tasks. In our work, we explore the idea of using prompting for response generation in task-oriented dialog systems. Specifically, we propose an approach that performs contextual dynamic prompting, where the prompts are learnt from dialog contexts. We aim to distill useful prompting signals from the dialog context. In experiments on the MultiWOZ 2.2 dataset (Zang et al., 2020), we show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019) by 3 absolute points, and by an additional 17 points when dialog states are incorporated. Furthermore, we carried out human annotation on these conversations and found that agents which incorporate context are preferred over agents with vanilla prefix-tuning.
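
A minimal sketch of the core idea, assuming a frozen decoder-only language model and a small trainable encoder that maps the dialog context to soft prompt vectors (the class name ContextualPrefixEncoder and the architecture details are assumptions for illustration, not the paper's exact design):

import torch
import torch.nn as nn

class ContextualPrefixEncoder(nn.Module):
    """Maps a pooled dialog-context encoding to prefix_len soft prompt
    vectors that are prepended to the frozen LM's input embeddings
    (hypothetical sketch; the authors' architecture may differ)."""

    def __init__(self, context_dim: int, lm_dim: int, prefix_len: int = 10):
        super().__init__()
        self.prefix_len, self.lm_dim = prefix_len, lm_dim
        self.proj = nn.Sequential(
            nn.Linear(context_dim, lm_dim),
            nn.Tanh(),
            nn.Linear(lm_dim, prefix_len * lm_dim),
        )

    def forward(self, context_vec: torch.Tensor) -> torch.Tensor:
        # context_vec: (batch, context_dim) pooled encoding of the dialog history
        prefix = self.proj(context_vec)  # (batch, prefix_len * lm_dim)
        return prefix.view(-1, self.prefix_len, self.lm_dim)

# Usage: prepend the learned prefix to the frozen LM's token embeddings, e.g.
# inputs = torch.cat([prefix_encoder(context_vec), token_embeds], dim=1),
# and train only the prefix encoder while the LM weights stay frozen.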

GrailQA++: A Challenging Zero-Shot Benchmark for Knowledge Base Question Answering
Ritam Dutt | Sopan Khosla | Vinayshekhar Bannihatti Kumar | Rashmi Gangadharaiah
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

PhraseSumm: Abstractive Short Phrase Summarization
Kasturi Bhattacharjee | Kathleen McKeown | Rashmi Gangadharaiah
Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings)

Privacy Adhering Machine Un-learning in NLP
Vinayshekhar Bannihatti Kumar | Rashmi Gangadharaiah | Dan Roth
Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings)

Exploring the Reasons for Non-generalizability of KBQA systems
Sopan Khosla | Ritam Dutt | Vinayshekhar Bannihatti Kumar | Rashmi Gangadharaiah
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP

Recent research has demonstrated impressive generalization capabilities of several Knowledge Base Question Answering (KBQA) models on the GrailQA dataset. We inspect whether these models can generalize to other datasets in a zero-shot setting. We notice a significant drop in performance and investigate its causes. We observe that the models depend not only on the structural complexity of the questions, but also on the linguistic style in which a question is framed. Specifically, the linguistic dimensions corresponding to explicitness, readability, coherence, and grammaticality have a significant impact on the performance of state-of-the-art KBQA models. Overall, our results showcase the brittleness of such models and the need for creating generalizable systems.

2022

Benchmarking the Covariate Shift Robustness of Open-world Intent Classification Approaches
Sopan Khosla | Rashmi Gangadharaiah
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Task-oriented dialog systems deployed in real-world applications are often challenged by out-of-distribution queries. These systems should not only reliably detect utterances with unsupported intents (semantic shift), but also generalize to covariate shift (supported intents from unseen distributions). However, none of the existing benchmarks for open-world intent classification focus on the second aspect, and thus perform only a partial evaluation of intent-detection techniques. In this work, we propose two new datasets that include utterances useful for evaluating the robustness of open-world models to covariate shift. Along with the i.i.d. test set, both datasets contain a new cov-test set that, in addition to out-of-scope utterances, contains in-scope utterances sampled from distributions not seen during training. This setting better mimics the challenges faced in real-world applications. Evaluating several open-world classifiers on the new datasets reveals that models that perform well on the test set struggle to generalize to the cov-test set. Our datasets fill an important gap in the field, offering a more realistic evaluation scenario for intent classification in task-oriented dialog systems.
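
The evaluation protocol the abstract describes can be pictured with a short sketch; the split names iid_test and cov_test follow the abstract, everything else is an assumption:

from typing import Callable, List, Tuple

def open_world_accuracy(
    classify: Callable[[str], str],   # returns an intent label, or "oos"
    examples: List[Tuple[str, str]],  # (utterance, gold label); gold may be "oos"
) -> float:
    """Accuracy over a mixed in-scope / out-of-scope evaluation set."""
    correct = sum(classify(utt) == gold for utt, gold in examples)
    return correct / len(examples)

# acc_iid = open_world_accuracy(model, iid_test)  # same distribution as training
# acc_cov = open_world_accuracy(model, cov_test)  # in-scope utterances from unseen
#                                                 # distributions + out-of-scope ones
# A large gap between acc_iid and acc_cov signals poor covariate-shift robustness.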

Evaluating the Practical Utility of Confidence-score based Techniques for Unsupervised Open-world Classification
Sopan Khosla | Rashmi Gangadharaiah
Proceedings of the Third Workshop on Insights from Negative Results in NLP

Open-world classification in dialog systems requires models to detect open intents while ensuring the quality of in-domain (ID) intent classification. In this work, we revisit methods that leverage distance-based statistics for unsupervised out-of-domain (OOD) detection. We show that despite their superior performance on threshold-independent metrics like AUROC on the test set, threshold values chosen based on performance on a validation set do not generalize well to the test set, resulting in substantially lower ID and OOD detection accuracy and F1-scores. Our analysis shows that this lack of generalizability can be successfully mitigated by setting aside a hold-out set from the validation data for threshold selection (sometimes achieving relative gains as high as 100%). Extensive experiments on seven benchmark datasets show that this fix puts the performance of these methods on par with, or sometimes even above, the current state-of-the-art OOD detection techniques.
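
A minimal sketch of the proposed fix, assuming distance-based OOD scores where higher means more likely out-of-domain (the quantile sweep and the F1 selection criterion are illustrative choices, not necessarily the paper's exact procedure):

import numpy as np

def select_ood_threshold(scores: np.ndarray, is_ood: np.ndarray,
                         holdout_frac: float = 0.5, seed: int = 0) -> float:
    """Pick the OOD threshold on a hold-out slice of the validation data
    rather than on the full validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(scores))
    hold = idx[: int(holdout_frac * len(scores))]

    best_thr, best_f1 = 0.0, -1.0
    for thr in np.quantile(scores[hold], np.linspace(0.01, 0.99, 99)):
        pred = scores[hold] >= thr                    # predicted OOD
        tp = np.sum(pred & is_ood[hold])
        prec = tp / max(pred.sum(), 1)
        rec = tp / max(is_ood[hold].sum(), 1)
        f1 = 2 * prec * rec / max(prec + rec, 1e-9)
        if f1 > best_f1:
            best_thr, best_f1 = float(thr), f1
    return best_thr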

Are Abstractive Summarization Models truly ‘Abstractive’? An Empirical Study to Compare the two Forms of Summarization
Vinayshekhar Bannihatti Kumar | Rashmi Gangadharaiah
Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)

Automatic text summarization has seen a large paradigm shift from extractive methods to abstractive (or generation-based) methods in the last few years. This can be attributed to the availability of large autoregressive language models that have been shown to outperform extractive methods. In this work, we revisit extractive methods and study their performance against state-of-the-art (SOTA) abstractive models. Through extensive studies, we notice that abstractive methods are not yet completely abstractive in their generated summaries. In addition to this finding, we propose an evaluation metric that could help the summarization research community measure the degree of abstractiveness of a summary relative to its extractive counterparts. To confirm the generalizability of our findings, we conduct experiments on two summarization datasets using five powerful extractive and abstractive summarization techniques and study their levels of abstraction.
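
A common way to quantify abstractiveness is the fraction of summary n-grams that do not appear in the source; the sketch below is such a proxy and is not necessarily the exact metric the paper proposes:

def novel_ngram_ratio(summary: str, source: str, n: int = 2) -> float:
    """Fraction of summary n-grams never seen in the source document."""
    def ngrams(text: str):
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    summ, src = ngrams(summary), ngrams(source)
    return len(summ - src) / len(summ) if summ else 0.0

# A purely extractive summary scores close to 0.0; higher values indicate
# more novel phrasing relative to the source.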

PerKGQA: Question Answering over Personalized Knowledge Graphs
Ritam Dutt | Kasturi Bhattacharjee | Rashmi Gangadharaiah | Dan Roth | Carolyn Rose
Findings of the Association for Computational Linguistics: NAACL 2022

Previous studies on question answering over knowledge graphs have typically operated over a single knowledge graph (KG). This KG is assumed to be known a priori and is leveraged similarly for all users’ queries during inference. However, such an assumption is not applicable to real-world settings, such as healthcare, where one needs to handle queries of new users over unseen KGs during inference. Furthermore, privacy concerns and high computational costs render it infeasible to query a single KG that has information about all users while answering a specific user’s query. The above concerns motivate our question answering setting over personalized knowledge graphs (PerKGQA), where each user has restricted access to their KG. We observe that current state-of-the-art KGQA methods that require learning prior node representations fare poorly. We propose two complementary approaches, PathCBR and PathRGCN, for PerKGQA. The former is a simple non-parametric technique that employs case-based reasoning, while the latter is a parametric approach using graph neural networks. Our proposed methods circumvent learning prior representations, can generalize to unseen KGs, and outperform strong baselines on an academic and an internal dataset by 6.5% and 10.5%, respectively.
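
The non-parametric idea can be sketched briefly; everything here (the function name, encoding the KG as a dict, dot-product similarity) is an assumption made for illustration, in the spirit of PathCBR rather than a reproduction of it:

import numpy as np

def path_cbr_answer(question_vec, topic_entity, case_bank, user_kg, top_k=3):
    """Retrieve the k most similar previously solved questions and re-execute
    their relation paths on the current user's personal KG.
    case_bank: list of (question_embedding, relation_path) pairs.
    user_kg:   dict mapping (entity, relation) -> list of target entities."""
    sims = [float(question_vec @ q) for q, _ in case_bank]
    answers = set()
    for i in np.argsort(sims)[-top_k:]:
        frontier = {topic_entity}
        for rel in case_bank[i][1]:        # walk the retrieved relation path
            frontier = {t for e in frontier
                        for t in user_kg.get((e, rel), [])}
        answers |= frontier
    return answers

Because nothing is learned per graph, such a scheme can generalize to users and KGs unseen during training, which is the property the abstract highlights.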

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Anastassia Loukina | Rashmi Gangadharaiah | Bonan Min
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track

What Do Users Care About? Detecting Actionable Insights from User Feedback
Kasturi Bhattacharjee | Rashmi Gangadharaiah | Kathleen McKeown | Dan Roth
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track

Users often leave feedback on a myriad of aspects of a product which, if leveraged successfully, can yield useful insights that lead to further improvements down the line. Detecting actionable insights can be challenging owing to large amounts of data as well as the absence of labels in real-world scenarios. In this work, we present an aggregation and graph-based ranking strategy for unsupervised detection of these insights from real-world, noisy, user-generated feedback. Our proposed approach significantly outperforms strong baselines on two real-world user feedback datasets and one academic dataset.
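
A PageRank-style ranking over aggregated feedback is one way to realize the strategy the abstract outlines; the sketch below assumes a precomputed similarity matrix between feedback clusters and is illustrative rather than the paper's exact graph construction:

import numpy as np

def rank_feedback(sim: np.ndarray, damping: float = 0.85,
                  iters: int = 50) -> np.ndarray:
    """Graph-based centrality scores for aggregated feedback items."""
    n = sim.shape[0]
    # Row-normalize similarities into transition probabilities.
    trans = sim / np.maximum(sim.sum(axis=1, keepdims=True), 1e-9)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (trans.T @ scores)
    return scores  # higher score = more central, hence more salient feedback theme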

Towards Cross-Domain Transferability of Text Generation Models for Legal Text
Vinayshekhar Bannihatti Kumar | Kasturi Bhattacharjee | Rashmi Gangadharaiah
Proceedings of the Natural Legal Language Processing Workshop 2022

Legalese is often filled with verbose, domain-specific jargon, which can make it challenging for non-experts to understand and use. Creating succinct summaries of legal documents often aids user comprehension. However, obtaining labeled data for every domain of legal text is challenging, which makes cross-domain transferability of text generation models for legal text an important area of research. In this paper, we explore the ability of existing state-of-the-art T5- and BART-based summarization models to transfer across legal domains. We leverage publicly available datasets across four domains for this task, one of which is a new resource for summarizing privacy policies that we curate and release for academic research. Our experiments demonstrate the low cross-domain transferability of these models, while also highlighting the benefits of combining different domains. Further, we compare the effectiveness of standard metrics for this task and illustrate the vast differences in their performance.

2021

Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations
Chaitanya Shivade | Rashmi Gangadharaiah | Spandana Gella | Sandeep Konam | Shaoqing Yuan | Yi Zhang | Parminder Bhatia | Byron Wallace
Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations

Domain and Task-Informed Sample Selection for Cross-Domain Target-based Sentiment Analysis
Kasturi Bhattacharjee | Rashmi Gangadharaiah | Smaranda Muresan
Proceedings of the 4th International Conference on Natural Language and Speech Processing (ICNLSP 2021)

2020

Proceedings of the First Workshop on Natural Language Processing for Medical Conversations
Parminder Bhatia | Steven Lin | Rashmi Gangadharaiah | Byron Wallace | Izhak Shafran | Chaitanya Shivade | Nan Du | Mona Diab
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations

Recursive Template-based Frame Generation for Task Oriented Dialog
Rashmi Gangadharaiah | Balakrishnan Narayanaswamy
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

The Natural Language Understanding (NLU) component in task-oriented dialog systems processes a user’s request and converts it into structured information that can be consumed by downstream components such as the Dialog State Tracker (DST). This information is typically represented as a semantic frame that captures the intent and slot labels provided by the user. We first show that such a shallow representation is insufficient for complex dialog scenarios, because it does not capture the recursive nature inherent in many domains. We propose a recursive, hierarchical frame-based representation and show how to learn it from data. We formulate the frame generation task as a template-based tree decoding task, where the decoder recursively generates a template and then fills slot values into the template. We extend local tree-based loss functions with terms that provide global supervision and show how to optimize them end-to-end. We achieve a small improvement on the widely used ATIS dataset and a much larger improvement on a more complex dataset we describe here.
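
The recursive representation can be pictured as a frame whose slot values may themselves be frames; the data structure and example below are illustrative (the field names are assumptions):

from dataclasses import dataclass, field
from typing import Dict, Union

@dataclass
class Frame:
    """Recursive semantic frame: a slot value is either a text span from the
    utterance or a nested Frame."""
    intent: str
    slots: Dict[str, Union[str, "Frame"]] = field(default_factory=dict)

# "Book a cab to the airport after my meeting ends" might be represented as:
booking = Frame(
    intent="book_cab",
    slots={
        "destination": "the airport",
        "departure_time": Frame(intent="get_meeting_end",
                                slots={"meeting": "my meeting"}),
    },
)

A flat intent-plus-slots frame cannot express the nested departure_time constraint, which is exactly the shortcoming the abstract points out.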

2019

Joint Multiple Intent Detection and Slot Labeling for Goal-Oriented Dialog
Rashmi Gangadharaiah | Balakrishnan Narayanaswamy
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Neural network models have recently gained traction for sentence-level intent classification and token-based slot-label identification. In many real-world scenarios, users express multiple intents in the same utterance, and a token-level slot label can belong to more than one intent. We investigate an attention-based neural network model that performs multi-label classification for identifying multiple intents and produces token-level labels for both intents and slots. We show state-of-the-art performance for both intent detection and slot-label identification by comparing against strong, recently proposed models. Our model provides a small but statistically significant improvement of 0.2% on the predominantly single-intent ATIS public dataset, and a 55% improvement in intent accuracy on an internal multi-intent dataset.
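
A minimal sketch of a joint model in this spirit (the BiLSTM encoder, the max-pooling of token-level intent evidence, and all names are assumptions, not the paper's exact architecture):

import torch
import torch.nn as nn

class JointMultiIntentSlotModel(nn.Module):
    """Shared encoder with two heads: multi-label intent detection and
    per-token slot labeling."""

    def __init__(self, vocab_size, emb_dim, hidden, n_intents, n_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # multi-label
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # per-token tags

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))       # (B, T, 2*hidden)
        token_intent_logits = self.intent_head(h)        # (B, T, n_intents)
        # Utterance-level multi-label intents: pool token-level evidence.
        intent_logits = token_intent_logits.max(dim=1).values
        slot_logits = self.slot_head(h)                  # (B, T, n_slots)
        return intent_logits, slot_logits

# Training would pair BCEWithLogitsLoss on intent_logits (several intents can
# be active at once) with per-token CrossEntropyLoss on slot_logits.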

2018

What we need to learn if we want to do and not just talk
Rashmi Gangadharaiah | Balakrishnan Narayanaswamy | Charles Elkan
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)

In task-oriented dialog, agents need to generate both fluent natural language responses and correct external actions like database queries and updates. Our paper makes the first attempt at evaluating state-of-the-art models on a large real-world task with human users. We show that methods that achieve state-of-the-art performance on synthetic datasets perform poorly on real-world dialog tasks. We propose a hybrid model, where nearest neighbor is used to generate fluent responses and Seq2Seq-style models ensure dialog coherence and generate accurate external actions. The hybrid model on the customer support data achieves a 78% relative improvement in fluency and a 200% improvement in the accuracy of external calls.
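
The hybrid routing can be sketched in a few lines; retriever, seq2seq, and action_needed are hypothetical stand-ins for the paper's components:

def hybrid_respond(user_turn, dialog_state, retriever, seq2seq, action_needed):
    """Route each turn: Seq2Seq generates structured external actions, while
    nearest-neighbor retrieval over past human-agent turns supplies fluent
    natural-language replies."""
    if action_needed(dialog_state):
        # Structured output: let the Seq2Seq model emit the API/DB call.
        return seq2seq.generate_action(user_turn, dialog_state)
    # Fluent reply: nearest neighbor over historical agent responses.
    return retriever.most_similar_agent_response(user_turn, dialog_state)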

2014

Learning to Re-rank for Interactive Problem Resolution and Query Refinement
Rashmi Gangadharaiah | Balakrishnan Narayanaswamy | Charles Elkan
Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)

2013

Semi-Supervised Answer Extraction from Discussion Forums
Rose Catherine | Rashmi Gangadharaiah | Karthik Visweswariah | Dinesh Raghu
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Natural Language Query Refinement for Problem Resolution from Crowd-Sourced Semi-Structured Data
Rashmi Gangadharaiah | Balakrishnan Narayanaswamy
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Hypothesis Refinement Using Agreement Constraints in Machine Translation
Ankur Gandhe | Rashmi Gangadharaiah
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Does Similarity Matter? The Case of Answer Extraction from Technical Discussion Forums
Rose Catherine | Amit Singh | Rashmi Gangadharaiah | Dinesh Raghu | Karthik Visweswariah
Proceedings of COLING 2012: Posters

2011

Handling verb phrase morphology in highly inflected Indian languages for Machine Translation
Ankur Gandhe | Rashmi Gangadharaiah | Karthik Visweswariah | Ananthakrishnan Ramanathan
Proceedings of 5th International Joint Conference on Natural Language Processing

Reducing Asymmetry between language-pairs to Improve Alignment and Translation Quality
Rashmi Gangadharaiah
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

Monolingual Distributional Profiles for Word Substitution in Machine Translation
Rashmi Gangadharaiah | Ralf D. Brown | Jaime Carbonell
Coling 2010: Posters

Automatic Determination of Number of clusters for creating Templates in Example-Based Machine Translation
Rashmi Gangadharaiah | Ralf Brown | Jaime Carbonell
Proceedings of the 14th Annual Conference of the European Association for Machine Translation

2009

Active Learning in Example-Based Machine Translation
Rashmi Gangadharaiah | Ralf D. Brown | Jaime Carbonell
Proceedings of the 17th Nordic Conference of Computational Linguistics (NODALIDA 2009)

2006

Spectral Clustering for Example Based Machine Translation
Rashmi Gangadharaiah | Ralf Brown | Jaime Carbonell
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers