Aditya Bhargava


2023

Decomposed scoring of CCG dependencies
Aditya Bhargava | Gerald Penn
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

In statistical parsing with CCG, the standard evaluation method is based on predicate-argument structure and evaluates dependencies labelled in part by lexical categories. When a predicate has multiple argument slots that can be filled, the same lexical category is used for the label of multiple dependencies. In this paper, we show that this evaluation can result in disproportionate penalization of supertagging errors and obfuscate the truly erroneous dependencies. Enabled by the compositional nature of CCG lexical categories, we propose *decomposed scoring* based on subcategorial labels to address this. To evaluate our scoring method, we engage fellow categorial grammar researchers in two English-language judgement tasks: (1) directly ranking the outputs of the standard and experimental scoring methods; and (2) determining which of two sentences has the better parse in cases where the two scoring methods disagree on their ranks. Overall, the judges prefer decomposed scoring in each task; but there is substantial disagreement among the judges in 24% of the given cases, pointing to potential issues with parser evaluations in general.
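The abstract notes that standard CCG dependency evaluation labels every argument slot of a predicate with the same full lexical category, so one supertag error can invalidate several dependencies at once. As a rough, hypothetical sketch of the subcategorial view (the paper's actual label scheme is not reproduced here), the per-slot argument subcategories of a category can be enumerated:

```python
def strip_outer(cat: str) -> str:
    """Remove redundant outermost parentheses, e.g. "(S\\NP)" -> "S\\NP"."""
    while cat.startswith('(') and cat.endswith(')'):
        depth = 0
        for i, c in enumerate(cat):
            if c == '(':
                depth += 1
            elif c == ')':
                depth -= 1
            if depth == 0 and i < len(cat) - 1:
                return cat  # first '(' closes early: parens are not redundant
        cat = cat[1:-1]
    return cat

def split_outermost(cat: str):
    """Split a category at its outermost slash into (result, slash, argument);
    returns None for a primitive category."""
    depth = 0
    for i in range(len(cat) - 1, -1, -1):
        c = cat[i]
        if c == ')':
            depth += 1
        elif c == '(':
            depth -= 1
        elif c in '/\\' and depth == 0:
            return cat[:i], c, cat[i + 1:]
    return None

def argument_slots(cat: str) -> list[str]:
    """List a category's argument subcategories (with slash directions)
    in CCGbank slot order: slot 1 is the innermost argument."""
    args = []
    split = split_outermost(strip_outer(cat))
    while split is not None:
        result, slash, arg = split
        args.append(slash + arg)
        split = split_outermost(strip_outer(result))
    return list(reversed(args))
```

For a transitive verb, `argument_slots("(S\\NP)/NP")` yields `["\\NP", "/NP"]`: under a subcategorial labelling along these lines, the subject and object dependencies would carry distinct, smaller labels instead of both sharing the full category `(S\NP)/NP`.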

2021

Proof Net Structure for Neural Lambek Categorial Parsing
Aditya Bhargava | Gerald Penn
Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)

In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as *proof nets* is applicable. Our parser incorporates proof net structure and constraints into a system based on self-attention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.

2020

Supertagging with CCG primitives
Aditya Bhargava | Gerald Penn
Proceedings of the 5th Workshop on Representation Learning for NLP

In CCG and other highly lexicalized grammars, supertagging a sentence’s words with their lexical categories is a critical step for efficient parsing. Because of the high degree of lexicalization in these grammars, the lexical categories can be very complex. Existing approaches to supervised CCG supertagging treat the categories as atomic units, even when the categories are not simple; when they encounter words with categories unseen during training, their guesses are accordingly unsophisticated. In this paper, we make use of the primitives and operators that constitute the lexical categories of categorial grammars. Instead of opaque labels, we treat lexical categories themselves as linear sequences. We present an LSTM-based model that replaces standard word-level classification with prediction of a sequence of primitives, similarly to LSTM decoders. Our model obtains state-of-the-art word accuracy for single-task English CCG supertagging, increases parser coverage and F1, and is able to produce novel categories. Analysis shows a synergistic effect between this decomposed view and incorporation of prediction history.
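The abstract treats a lexical category not as an atomic label but as a linear sequence of primitives and operators. A minimal sketch of that decomposition (the tokenization details here are illustrative, not the paper's exact scheme):

```python
import re

def decompose_category(category: str) -> list[str]:
    """Split a CCG lexical category string into its primitives and
    operators, e.g. "(S\\NP)/NP" -> ['(', 'S', '\\', 'NP', ')', '/', 'NP'].
    A bracketed feature such as S[dcl] stays attached to its primitive."""
    # Primitives are atomic category symbols (optionally carrying a
    # feature in square brackets); operators are slashes and parentheses.
    token_pattern = re.compile(r"[A-Za-z]+(?:\[[a-z]+\])?|[\\/()]")
    return token_pattern.findall(category)
```

A sequence model can then emit these tokens one at a time rather than classifying over a fixed category inventory, which is what allows a supertagger in this style to produce categories never seen in training.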

2012

Leveraging supplemental representations for sequential transduction
Aditya Bhargava | Grzegorz Kondrak
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2011

How do you pronounce your name? Improving G2P with transliterations
Aditya Bhargava | Grzegorz Kondrak
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Leveraging Transliterations from Multiple Languages
Aditya Bhargava | Bradley Hauer | Grzegorz Kondrak
Proceedings of the 3rd Named Entities Workshop (NEWS 2011)

2010

Transliteration Generation and Mining with Limited Training Resources
Sittichai Jiampojamarn | Kenneth Dwyer | Shane Bergsma | Aditya Bhargava | Qing Dou | Mi-Young Kim | Grzegorz Kondrak
Proceedings of the 2010 Named Entities Workshop

Language identification of names with SVMs
Aditya Bhargava | Grzegorz Kondrak
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Predicting the Semantic Compositionality of Prefix Verbs
Shane Bergsma | Aditya Bhargava | Hua He | Grzegorz Kondrak
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

DirecTL: a Language Independent Approach to Transliteration
Sittichai Jiampojamarn | Aditya Bhargava | Qing Dou | Kenneth Dwyer | Grzegorz Kondrak
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)

Multiple Word Alignment with Profile Hidden Markov Models
Aditya Bhargava | Grzegorz Kondrak
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium