Jonathan Malmaud


2020

Bridging Information-Seeking Human Gaze and Machine Reading Comprehension
Jonathan Malmaud | Roger Levy | Yevgeni Berzak
Proceedings of the 24th Conference on Computational Natural Language Learning

In this work, we analyze how human gaze during reading comprehension is conditioned on the given question, and whether this signal can benefit machine reading comprehension. To this end, we collect a new eye-tracking dataset with a large number of participants engaging in a multiple choice reading comprehension task. Our analysis of this data reveals increased fixation times over the parts of the text that are most relevant for answering the question. Motivated by this finding, we propose making automated reading comprehension more human-like by mimicking this information-seeking reading behavior. We demonstrate that this approach leads to performance gains on multiple choice question answering in English for a state-of-the-art reading comprehension model.

STARC: Structured Annotations for Reading Comprehension
Yevgeni Berzak | Jonathan Malmaud | Roger Levy
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application in the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses of, and comparisons between, machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that RACE, the standard multiple choice dataset in NLP, is limited in its ability to measure reading comprehension: 47% of its questions can be guessed by machines without access to the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension that alleviates these shortcomings and has a substantially higher human ceiling performance.

2015

What’s Cookin’? Interpreting Cooking Videos using Text, Speech and Vision
Jonathan Malmaud | Jonathan Huang | Vivek Rathod | Nicholas Johnston | Andrew Rabinovich | Kevin Murphy
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Cooking with Semantics
Jonathan Malmaud | Earl Wagner | Nancy Chang | Kevin Murphy
Proceedings of the ACL 2014 Workshop on Semantic Parsing