Alla Rozovskaya


2024

Multi-Reference Benchmarks for Russian Grammatical Error Correction
Frank Palma Gomez | Alla Rozovskaya
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

This paper presents multi-reference benchmarks for the Grammatical Error Correction (GEC) of Russian, based on two existing single-reference datasets, for a total of 7,444 learner sentences from a variety of first language backgrounds. Each sentence is corrected independently by two new raters, and their corrections are reviewed by a senior annotator, resulting in a total of three references per sentence. Analysis of the annotations reveals that the new raters tend to make more changes, compared to the original raters, especially at the lexical level. We conduct experiments with two popular GEC approaches and show competitive performance on the original datasets and the new benchmarks. We also compare system scores as evaluated against individual annotators and discuss the effect of using multiple references overall and on specific error types. We find that using the union of the references increases system scores by more than 10 points and decreases the gap between system and human performance, thereby providing a more realistic evaluation of GEC system performance, although the effect is not the same across the error types. The annotations are available for research.
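
Since multi-reference scoring is central to the evaluation above, the following is a minimal sketch of one common way to combine references per sentence, assuming a hypothetical score_edits(hyp_edits, ref_edits) hook that returns true-positive/false-positive/false-negative counts against a single reference; the per-sentence best-match strategy shown here is the convention of M2-style scorers and is given for illustration, not as the paper's exact procedure.

```python
# Sketch: combining scores over multiple references per sentence.
# score_edits(hyp_edits, ref_edits) -> (tp, fp, fn) is a hypothetical hook;
# in practice the benchmarks are scored with standard GEC tools (M2-style).

def f_beta(tp, fp, fn, beta=0.5):
    p = tp / (tp + fp) if tp + fp else 1.0
    r = tp / (tp + fn) if tp + fn else 1.0
    return (1 + beta**2) * p * r / (beta**2 * p + r) if p + r else 0.0

def best_reference_counts(hyp_edits, reference_edit_sets, score_edits):
    """Keep the counts from whichever reference the hypothesis matches best,
    as multi-reference scorers typically do on a per-sentence basis."""
    best = None
    for ref_edits in reference_edit_sets:
        counts = score_edits(hyp_edits, ref_edits)
        if best is None or f_beta(*counts) > f_beta(*best):
            best = counts
    return best
```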

2023

A Low-Resource Approach to the Grammatical Error Correction of Ukrainian
Frank Palma Gomez | Alla Rozovskaya | Dan Roth
Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP)

We present our system that participated in the shared task on the grammatical error correction of Ukrainian. We implemented two approaches that make use of large pre-trained language models and synthetic data and that have been used for error correction of English as well as of low-resource languages. The first approach is based on fine-tuning a large multilingual language model (mT5) in two stages: first on synthetic data, and then on gold data. The second approach trains a (smaller) seq2seq Transformer model pre-trained on synthetic data and fine-tuned on gold data. Our mT5-based model scored first in the “GEC only” track and a very close second in the “GEC+Fluency” track. Our two key innovations are (1) fine-tuning in stages, first on synthetic and then on gold data; and (2) a high-quality corruption method based on round-trip machine translation to complement existing noisification approaches.
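
A minimal sketch of the two-stage fine-tuning recipe described above, assuming Hugging Face transformers; the toy sentence pair, hyperparameters, and output paths are illustrative placeholders, not the settings used for the shared-task system.

```python
# Sketch: staged fine-tuning of mT5 (synthetic data first, then gold data),
# assuming Hugging Face transformers. Toy data and hyperparameters only.
from transformers import (MT5ForConditionalGeneration, MT5Tokenizer,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

def encode_pairs(pairs):
    """Turn (erroneous, corrected) sentence pairs into model inputs."""
    sources, targets = zip(*pairs)
    enc = tokenizer(list(sources), padding=True, truncation=True)
    labels = tokenizer(list(targets), padding=True, truncation=True)["input_ids"]
    return [{"input_ids": i, "attention_mask": m, "labels": l}
            for i, m, l in zip(enc["input_ids"], enc["attention_mask"], labels)]

def run_stage(train_dataset, output_dir, epochs):
    args = Seq2SeqTrainingArguments(output_dir=output_dir,
                                    num_train_epochs=epochs,
                                    per_device_train_batch_size=8,
                                    learning_rate=1e-4)
    Seq2SeqTrainer(model=model, args=args, train_dataset=train_dataset).train()

# Toy (erroneous, corrected) pair; in practice stage 1 uses a large corpus of
# automatically corrupted sentences and stage 2 uses the gold annotations.
toy = encode_pairs([("я люблю читать книги", "я люблю читати книги")])
run_stage(toy, "mt5-stage1-synthetic", epochs=1)
run_stage(toy, "mt5-stage2-gold", epochs=5)
```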

Using Neural Machine Translation for Generating Diverse Challenging Exercises for Language Learners
Frank Palma Gomez | Subhadarshi Panda | Michael Flor | Alla Rozovskaya
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We propose a novel approach to automatically generate distractors for cloze exercises for English language learners, using round-trip neural machine translation. A carrier sentence is translated from English into another (pivot) language and back, and distractors are produced by aligning the original sentence with its round-trip translation. We make use of 16 linguistically-diverse pivots and generate hundreds of translation hypotheses in each direction. We show that using hundreds of translations allows us to generate a rich set of challenging distractors. Moreover, we find that typologically unrelated language pivots contribute more diverse candidate distractors, compared to language pivots that are closely related. We further evaluate the use of machine translation systems of varying quality and find that better quality MT systems produce more challenging distractors. Finally, we conduct a study with language learners, demonstrating that the automatically generated distractors are of the same difficulty as the gold distractors produced by human experts.
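
A minimal sketch of the round-trip generation step, where translate() is a hypothetical MT interface returning n-best hypotheses and difflib alignment stands in for the paper's aligner.

```python
# Sketch: harvesting distractor candidates from round-trip translations.
# translate(text, src, tgt, n) is a hypothetical MT interface returning n
# hypotheses; difflib alignment is a stand-in for the paper's aligner.
from difflib import SequenceMatcher

def round_trip(sentence, pivot, translate, n=10):
    """English -> pivot -> English; the paper uses hundreds of hypotheses."""
    back = []
    for hyp in translate(sentence, src="en", tgt=pivot, n=n):
        back.extend(translate(hyp, src=pivot, tgt="en", n=n))
    return back

def candidate_distractors(sentence, target_word, back_translations):
    """Collect words that replace the target word in aligned back-translations."""
    src = sentence.lower().split()
    idx = src.index(target_word.lower())
    candidates = set()
    for bt in back_translations:
        hyp = bt.lower().split()
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, src, hyp).get_opcodes():
            if tag == "replace" and i1 <= idx < i2 and j2 - j1 == i2 - i1:
                candidates.add(hyp[j1 + (idx - i1)])
    candidates.discard(target_word.lower())
    return candidates
```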

2022

Automatic Classification of Russian Learner Errors
Alla Rozovskaya
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Grammatical Error Correction systems are typically evaluated overall, without taking into consideration performance on individual error types because system output is not annotated with respect to error type. We introduce a tool that automatically classifies errors in Russian learner texts. The tool takes an edit pair consisting of the original token(s) and the corresponding replacement and provides a grammatical error category. Manual evaluation of the output reveals that in more than 93% of cases the error categories are judged as correct or acceptable. We apply the tool to carry out a fine-grained evaluation on the performance of two error correction systems for Russian.
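
A minimal sketch of how an edit pair can be mapped to a coarse error category via morphological analysis; pymorphy2 and the simplified label set are illustrative choices, not necessarily those of the released tool.

```python
# Sketch: assigning a coarse category to an (original, replacement) edit pair
# via morphological analysis. pymorphy2 and the simplified labels are used for
# illustration; the released tool's categories and rules are richer.
import pymorphy2

morph = pymorphy2.MorphAnalyzer()

def classify_edit(original, replacement):
    if original.lower() == replacement.lower():
        return "orthography/casing"
    o, r = morph.parse(original)[0], morph.parse(replacement)[0]
    if o.normal_form == r.normal_form:
        # Same lemma, different form: see which grammatical feature changed.
        if o.tag.case != r.tag.case:
            return "case"
        if o.tag.number != r.tag.number:
            return "number agreement"
        if o.tag.tense != r.tag.tense:
            return "verb tense"
        return "other inflection"
    if o.tag.POS == r.tag.POS:
        return "lexical choice"
    return "word-level replacement"

print(classify_edit("книга", "книги"))  # same lemma, different inflection
```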

Automatic Generation of Distractors for Fill-in-the-Blank Exercises with Round-Trip Neural Machine Translation
Subhadarshi Panda | Frank Palma Gomez | Michael Flor | Alla Rozovskaya
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

In a fill-in-the-blank exercise, a student is presented with a carrier sentence with one word hidden, and a multiple-choice list that includes the correct answer and several inappropriate options, called distractors. We propose to automatically generate distractors using round-trip neural machine translation: the carrier sentence is translated from English into another (pivot) language and back, and distractors are produced by aligning the original sentence and its round-trip translation. We show that using hundreds of translations for a given sentence allows us to generate a rich set of challenging distractors. Further, using multiple pivot languages produces a diverse set of candidates. The distractors are evaluated against a real corpus of cloze exercises and checked manually for validity. We demonstrate that the proposed method significantly outperforms two strong baselines.
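
A minimal sketch of pooling candidates from several pivot languages; the vote-based ranking is an illustrative heuristic rather than the selection method evaluated in the paper.

```python
# Sketch: pooling distractor candidates from several pivot languages and
# ranking them by how many pivots proposed them; the toy candidate sets are
# invented and the ranking heuristic is illustrative only.
from collections import Counter

def rank_candidates(candidates_by_pivot, correct_answer, top_k=3):
    """candidates_by_pivot: {pivot_language: set of candidate words}."""
    votes = Counter()
    for pivot, candidates in candidates_by_pivot.items():
        for word in candidates:
            if word != correct_answer:
                votes[word] += 1
    return [word for word, _ in votes.most_common(top_k)]

pools = {
    "de": {"purchase", "acquire", "sell"},
    "ja": {"purchase", "obtain"},
    "tr": {"acquire", "purchase"},
}
print(rank_candidates(pools, correct_answer="buy"))
```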

2021

How Good (really) are Grammatical Error Correction Systems?
Alla Rozovskaya | Dan Roth
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Standard evaluations of Grammatical Error Correction (GEC) systems make use of a fixed reference text generated relative to the original text; they show, even when using multiple references, that we have a long way to go. This analysis paper studies the performance of GEC systems relative to closest-gold – a gold reference text created relative to the output of a system. Surprisingly, we show that the real performance is 20-40 points better than standard evaluations show. Moreover, the performance remains high even when considering any of the top-10 hypotheses produced by a system. Importantly, the type of mistakes corrected by lower-ranked hypotheses differs in interesting ways from the top one, providing an opportunity to focus on a range of errors – local spelling and grammar edits vs. more complex lexical improvements. Our study shows these results in English and Russian, and thus provides a preliminary proposal for a more realistic evaluation of GEC systems.

New Dataset and Strong Baselines for the Grammatical Error Correction of Russian
Viet Anh Trinh | Alla Rozovskaya
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Spelling Correction for Russian: A Comparative Study of Datasets and Methods
Alla Rozovskaya
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

We develop a minimally-supervised model for spelling correction and evaluate its performance on three datasets annotated for spelling errors in Russian. The first corpus is a dataset of Russian social media data that was recently used in a shared task on Russian spelling correction. The other two corpora contain texts produced by learners of Russian as a foreign language. Evaluating on three diverse datasets allows for a cross-corpus comparison. We compare the performance of the minimally-supervised model to two baseline models that do not use context for candidate re-ranking, as well as to a character-level statistical machine translation system with context-based re-ranking. We show that the minimally-supervised model outperforms all of the other models. We also present an analysis of the spelling errors and discuss the difficulty of the task compared to the spelling correction problem in English.
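
A minimal sketch of the candidate-generation-plus-context-re-ranking setup, assuming a hypothetical score_in_context hook (e.g., a language model over the surrounding words) and an external vocabulary; the paper's actual model and lexicons are not reproduced here.

```python
# Sketch: generate candidates within one edit operation, then re-rank them by
# how well they fit the context. vocabulary and score_in_context are supplied
# by the caller and stand in for the paper's resources.
def edits1(word, alphabet="абвгдеёжзийклмнопрстуфхцчшщъыьэюя"):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, left_context, right_context, vocabulary, score_in_context):
    """Pick the in-vocabulary candidate that best fits the context."""
    candidates = [c for c in edits1(word) if c in vocabulary] or [word]
    return max(candidates,
               key=lambda c: score_in_context(left_context, c, right_context))
```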

2020

A Comparative Study of Synthetic Data Generation Methods for Grammatical Error Correction
Max White | Alla Rozovskaya
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications

Grammatical Error Correction (GEC) is concerned with correcting grammatical errors in written text. Current GEC systems, namely those leveraging statistical and neural machine translation, require large quantities of annotated training data, which can be expensive or impractical to obtain. This research compares techniques for generating synthetic data utilized by the two highest scoring submissions to the restricted and low-resource tracks in the BEA-2019 Shared Task on Grammatical Error Correction.
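
A minimal sketch of one family of synthetic-data methods compared in such work: rule-based corruption with random substitutions, deletions, and swaps; the confusion sets and noise rates are illustrative, not those of the BEA-2019 submissions.

```python
# Sketch: rule-based corruption for synthetic GEC data (random substitutions,
# deletions, and swaps). Confusion sets and rates are illustrative only.
import random

CONFUSIONS = {"their": ["there", "they're"], "a": ["an", "the"], "is": ["are", "was"]}

def corrupt(sentence, p=0.15, seed=None):
    rng = random.Random(seed)
    noisy = []
    for tok in sentence.split():
        r = rng.random()
        if r < p and tok.lower() in CONFUSIONS:      # substitute a confusable word
            noisy.append(rng.choice(CONFUSIONS[tok.lower()]))
        elif r < p * 1.3:                            # delete the word
            continue
        elif r < p * 1.6 and noisy:                  # swap with the previous word
            noisy.insert(len(noisy) - 1, tok)
        else:
            noisy.append(tok)
    return " ".join(noisy)

print(corrupt("their house is on a hill", p=0.3, seed=0))
```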

2019

Grammar Error Correction in Morphologically Rich Languages: The Case of Russian
Alla Rozovskaya | Dan Roth
Transactions of the Association for Computational Linguistics, Volume 7

Until now, most of the research in grammar error correction focused on English, and the problem has hardly been explored for other languages. We address the task of correcting writing mistakes in morphologically rich languages, with a focus on Russian. We present a corrected and error-tagged corpus of Russian learner writing and develop models that make use of existing state-of-the-art methods that have been well studied for English. Although impressive results have recently been achieved for grammar error correction of non-native English writing, these results are limited to domains where plentiful training data are available. Because annotation is extremely costly, these approaches are not suitable for the majority of domains and languages. We thus focus on methods that use “minimal supervision”; that is, those that do not rely on large amounts of annotated training data, and show how existing minimal-supervision approaches extend to a highly inflectional language such as Russian. The results demonstrate that these methods are particularly useful for correcting mistakes in grammatical phenomena that involve rich morphology.

A Benchmark Corpus of English Misspellings and a Minimally-supervised Model for Spelling Correction
Michael Flor | Michael Fried | Alla Rozovskaya
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

Spelling correction has attracted a lot of attention in the NLP community. However, models have usually been evaluated on artificially-created or proprietary corpora. A publicly-available corpus of authentic misspellings, annotated in context, is still lacking. To address this, we present and release an annotated data set of 6,121 spelling errors in context, based on a corpus of essays written by English language learners. We also develop a minimally-supervised context-aware approach to spelling correction. It achieves strong results on our data: 88.12% accuracy. This approach can also train with a minimal amount of annotated data (performance reduced by less than 1%). Furthermore, this approach allows easy portability to new domains. We evaluate our model on data from a medical domain and demonstrate that it rivals the performance of a model trained and tuned on in-domain data.

2018

Predicting Discharge Disposition Using Patient Complaint Notes in Electronic Medical Records
Mohamad Salimi | Alla Rozovskaya
Proceedings of the BioNLP 2018 workshop

Overcrowding in emergency rooms is a major challenge faced by hospitals across the United States. Overcrowding can result in longer wait times, which, in turn, has been shown to adversely affect patient satisfaction, clinical outcomes, and procedure reimbursements. This paper presents research that aims to automatically predict discharge disposition of patients who received medical treatment in an emergency department. We make use of a corpus that consists of notes containing patient complaints, diagnosis information, and disposition, entered by health care providers. We use this corpus to develop a model that uses the complaint and diagnosis information to predict patient disposition. We show that the proposed model substantially outperforms the baseline of predicting the most common disposition type. The long-term goal of this research is to build a model that can be implemented as a real-time service in an application to predict disposition as patients arrive.
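
A minimal sketch of the prediction setup, assuming a TF-IDF plus logistic-regression pipeline compared against a majority-class baseline; the toy records are invented placeholders, and the paper's actual features and model may differ.

```python
# Sketch: predicting disposition from complaint/diagnosis text with a
# TF-IDF + logistic-regression pipeline and a majority-class baseline.
# The toy records below are invented placeholders, not data from the paper.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = ["chest pain, shortness of breath",
         "ankle sprain after a fall",
         "severe abdominal pain, vomiting",
         "minor laceration to left hand"]
dispositions = ["admit", "discharge", "admit", "discharge"]

# Baseline: always predict the most common disposition in the training data.
majority_label = Counter(dispositions).most_common(1)[0][0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, dispositions)
print(majority_label, model.predict(["crushing chest pain radiating to arm"]))
```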

2017

Adapting to Learner Errors with Minimal Supervision
Alla Rozovskaya | Dan Roth | Mark Sammons
Computational Linguistics, Volume 43, Issue 4 - December 2017

This article considers the problem of correcting errors made by English as a Second Language writers from a machine learning perspective, and addresses an important issue of developing an appropriate training paradigm for the task, one that accounts for error patterns of non-native writers using minimal supervision. Existing training approaches present a trade-off between large amounts of cheap data offered by the native-trained models and additional knowledge of learner error patterns provided by the more expensive method of training on annotated learner data. We propose a novel training approach that draws on the strengths offered by the two standard training paradigms—of training either on native or on annotated learner data—and that outperforms both of these standard methods. Using the key observation that parameters relating to error regularities exhibited by non-native writers are relatively simple, we develop models that can incorporate knowledge about error regularities based on a small annotated sample but that are otherwise trained on native English data. The key contribution of this article is the introduction and analysis of two methods for adapting the learned models to error patterns of non-native writers; one method that applies to generative classifiers and a second that applies to discriminative classifiers. Both methods demonstrated state-of-the-art performance in several text correction competitions. In particular, the Illinois system that implements these methods ranked at the top in two recent CoNLL shared tasks on error correction. We conduct further evaluation of the proposed approaches studying the effect of using error data from speakers of the same native language, languages that are closely related linguistically, and unrelated languages.
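
A minimal sketch of the generative-adaptation idea, estimating error-adapted priors from a small annotated sample and combining them with a native-trained context model; the formulation is a simplification for illustration, not the article's exact model.

```python
# Sketch: adapting the prior of a Naive Bayes-style candidate scorer with
# error statistics estimated from a small annotated learner sample.
# Simplified illustration only; not the article's exact model or features.
import math
from collections import Counter

def error_adapted_priors(annotated_edits, confusion_set, smoothing=1.0):
    """P(candidate | writer used `source`), estimated from (source, correct)
    pairs observed in a small annotated sample, with add-one style smoothing."""
    counts = Counter(annotated_edits)           # (source_word, correct_word) pairs
    priors = {}
    for source in confusion_set:
        total = (sum(counts[(source, c)] for c in confusion_set)
                 + smoothing * len(confusion_set))
        priors[source] = {c: (counts[(source, c)] + smoothing) / total
                          for c in confusion_set}
    return priors

def score(candidate, source, context_loglik, priors):
    """Combine a native-trained context model with the adapted prior."""
    return context_loglik(candidate) + math.log(priors[source][candidate])

# Toy sample: learners who wrote "on" mostly meant "on", sometimes "in".
sample = [("on", "on")] * 8 + [("on", "in")] * 2
priors = error_adapted_priors(sample, confusion_set=["on", "in", "at"])
print(priors["on"])
```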

2016

The Virginia Tech System at CoNLL-2016 Shared Task on Shallow Discourse Parsing
Prashant Chandrasekar | Xuan Zhang | Saurabh Chakravarty | Arijit Ray | John Krulick | Alla Rozovskaya
Proceedings of the CoNLL-16 shared task

Grammatical Error Correction: Machine Translation and Classifiers
Alla Rozovskaya | Dan Roth
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

Correction Annotation for Non-Native Arabic Texts: Guidelines and Corpus
Wajdi Zaghouani | Nizar Habash | Houda Bouamor | Alla Rozovskaya | Behrang Mohit | Abeer Heider | Kemal Oflazer
Proceedings of the 9th Linguistic Annotation Workshop

The Second QALB Shared Task on Automatic Text Correction for Arabic
Alla Rozovskaya | Houda Bouamor | Nizar Habash | Wajdi Zaghouani | Ossama Obeid | Behrang Mohit
Proceedings of the Second Workshop on Arabic Natural Language Processing

2014

Correcting Grammatical Verb Errors
Alla Rozovskaya | Dan Roth | Vivek Srikumar
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

Large Scale Arabic Error Annotation: Guidelines and Framework
Wajdi Zaghouani | Behrang Mohit | Nizar Habash | Ossama Obeid | Nadi Tomeh | Alla Rozovskaya | Noura Farra | Sarah Alkuhlani | Kemal Oflazer
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We present annotation guidelines and a web-based annotation framework developed as part of an effort to create a manually annotated Arabic corpus of errors and corrections for various text types. Such a corpus will be invaluable for developing Arabic error correction tools, both for training models and as a gold standard for evaluating error correction algorithms. We summarize the guidelines we created. We also describe issues encountered during the training of the annotators, as well as problems that are specific to the Arabic language that arose during the annotation process. Finally, we present the annotation tool that was developed as part of this project, the annotation pipeline, and the quality of the resulting annotations.

The Illinois-Columbia System in the CoNLL-2014 Shared Task
Alla Rozovskaya | Kai-Wei Chang | Mark Sammons | Dan Roth | Nizar Habash
Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task

The First QALB Shared Task on Automatic Text Correction for Arabic
Behrang Mohit | Alla Rozovskaya | Nizar Habash | Wajdi Zaghouani | Ossama Obeid
Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)

The Columbia System in the QALB-2014 Shared Task on Arabic Error Correction
Alla Rozovskaya | Nizar Habash | Ramy Eskander | Noura Farra | Wael Salloum
Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)

Generalized Character-Level Spelling Error Correction
Noura Farra | Nadi Tomeh | Alla Rozovskaya | Nizar Habash
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Building a State-of-the-Art Grammatical Error Correction System
Alla Rozovskaya | Dan Roth
Transactions of the Association for Computational Linguistics, Volume 2

This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.

2013

Joint Learning and Inference for Grammatical Error Correction
Alla Rozovskaya | Dan Roth
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

The University of Illinois System in the CoNLL-2013 Shared Task
Alla Rozovskaya | Kai-Wei Chang | Mark Sammons | Dan Roth
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

2012

The UI System in the HOO 2012 Shared Task on Error Correction
Alla Rozovskaya | Mark Sammons | Dan Roth
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

Illinois-Coref: The UI System in the CoNLL-2012 Shared Task
Kai-Wei Chang | Rajhans Samdani | Alla Rozovskaya | Mark Sammons | Dan Roth
Joint Conference on EMNLP and CoNLL - Shared Task

2011

Algorithm Selection and Model Adaptation for ESL Correction Tasks
Alla Rozovskaya | Dan Roth
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

They Can Help: Using Crowdsourcing to Improve the Evaluation of Grammatical Error Detection Systems
Nitin Madnani | Martin Chodorow | Joel Tetreault | Alla Rozovskaya
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Inference Protocols for Coreference Resolution
Kai-Wei Chang | Rajhans Samdani | Alla Rozovskaya | Nick Rizzolo | Mark Sammons | Dan Roth
Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task

University of Illinois System in HOO Text Correction Shared Task
Alla Rozovskaya | Mark Sammons | Joshua Gioja | Dan Roth
Proceedings of the 13th European Workshop on Natural Language Generation

2010

Annotating ESL Errors: Challenges and Rewards
Alla Rozovskaya | Dan Roth
Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications

Training Paradigms for Correcting Errors in Grammar and Usage
Alla Rozovskaya | Dan Roth
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Generating Confusion Sets for Context-Sensitive Error Correction
Alla Rozovskaya | Dan Roth
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

Using DEDICOM for Completely Unsupervised Part-of-Speech Tagging
Peter Chew | Brett Bader | Alla Rozovskaya
Proceedings of the Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics

Identifying Semantic Relations in Context: Near-misses and Overlaps
Alla Rozovskaya | Roxana Girju
Proceedings of the International Conference RANLP-2009

2007

Multilingual Word Sense Discrimination: A Comparative Cross-Linguistic Study
Alla Rozovskaya | Richard Sproat
Proceedings of the Workshop on Balto-Slavonic Natural Language Processing

UIUC: A Knowledge-rich Approach to Identifying Semantic Relations between Nominals
Brandon Beamer | Suma Bhat | Brant Chee | Andrew Fister | Alla Rozovskaya | Roxana Girju
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2006

Challenges in Processing Colloquial Arabic
Alla Rozovskaya | Richard Sproat | Elabbas Benmamoun
Proceedings of the International Conference on the Challenge of Arabic for NLP/MT

Processing of Colloquial Arabic is a relatively new area of research, and a number of interesting challenges pertaining to spoken Arabic dialects arise. On the one hand, a whole continuum of Arabic dialects exists, with linguistic differences on the phonological, morphological, syntactic, and lexical levels. On the other hand, there are inter-dialectal similarities that need to be explored. Furthermore, due to the scarcity of dialect-specific linguistic resources and the availability of a wide range of resources for Modern Standard Arabic (MSA), it is desirable to explore the possibility of exploiting MSA tools when working on dialects. This paper describes challenges in the processing of Colloquial Arabic in the context of language modeling for Automatic Speech Recognition. Using data from Egyptian Colloquial Arabic and MSA, we investigate the question of improving language modeling of Egyptian Arabic with MSA data and resources. As part of the project, we address the problem of linguistic variation between Egyptian Arabic and MSA. To account for differences between MSA and Colloquial Arabic, we experiment with the following techniques of data transformation: morphological simplification (stemming), lexical transductions, and syntactic transformations. While the best-performing model remains the one built using only dialectal data, these techniques allow us to obtain an improvement over the baseline MSA model. More specifically, while the effect of syntactic transformations on perplexity is not very significant, stemming of the training and testing data improves the baseline perplexity of the MSA model trained on words by 51%, and lexical transductions yield an 82% perplexity reduction. Although the focus of the present work is on language modeling, we believe the findings of the study will be useful for researchers involved in other areas of processing Arabic dialects, such as parsing and machine translation.
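
A minimal sketch of how a data transformation such as stemming can be plugged into a perplexity comparison; the toy stemmer, bigram model, and sentences are stand-ins for the MSA and Egyptian Arabic resources used in the study.

```python
# Sketch: measuring the effect of a data transformation (here, a toy "stemming"
# step) on the perplexity of a bigram model, in the spirit of the experiments
# above. The stemmer and corpora are stand-ins, not the study's data.
import math
from collections import Counter

def stem(token):
    # Placeholder light stemmer: strip a common prefix (e.g., the definite
    # article) when the remainder is long enough.
    for prefix in ("ال", "و"):
        if token.startswith(prefix) and len(token) > len(prefix) + 2:
            token = token[len(prefix):]
    return token

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab = len(unigrams)
    def logprob(prev, word):                     # add-one smoothing
        return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))
    return logprob

def perplexity(logprob, sentences):
    total_logp, n = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            total_logp += logprob(prev, word)
            n += 1
    return math.exp(-total_logp / n)

train = [["الكتاب", "جديد"], ["الولد", "كبير"]]   # toy "MSA" training sentences
test = [["كتاب", "جديد"]]                         # toy test sentence
lm = train_bigram([[stem(t) for t in s] for s in train])
print(perplexity(lm, [[stem(t) for t in s] for s in test]))
```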