Edward Gibson


2023

A fine-grained comparison of pragmatic language understanding in humans and language models
Jennifer Hu | Sammy Floyd | Olessia Jouravlev | Evelina Fedorenko | Edward Gibson
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Pragmatics and non-literal language understanding are essential to human communication, and present a long-standing challenge for artificial language models. We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) exhibit error patterns similar to those of humans, and (3) use similar linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor literal interpretations over heuristic-based distractors. We also find preliminary evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that pragmatic behaviors can emerge in models without explicitly constructed representations of mental states. However, models tend to struggle with phenomena that rely on violations of social expectations.
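
As a concrete illustration of the setup this abstract describes, here is a minimal sketch of zero-shot multiple-choice evaluation: each candidate interpretation is scored by its conditional log-probability given the prompt, and the argmax is taken as the model's answer. The model choice (gpt2) and the example item are illustrative assumptions, not the authors' released code or materials.

```python
# Sketch of zero-shot interpretation selection with a causal LM.
# Assumptions: gpt2 as the model and an invented indirect-request item;
# the paper's actual prompts, items, and models may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def option_logprob(context: str, option: str) -> float:
    """Sum of token log-probabilities of `option` given `context`."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # logprobs[i] = log P(token at position i+1 | tokens up to position i)
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    return sum(logprobs[i, full_ids[0, i + 1]].item()
               for i in range(ctx_len - 1, full_ids.shape[1] - 1))

context = ("Kim asks, 'Could you pass the salt?' "
           "What does Kim most plausibly mean?\nAnswer:")
options = [" Kim wants the salt passed to her.",           # pragmatic reading
           " Kim is asking about your physical ability."]  # literal distractor
scores = [option_logprob(context, o) for o in options]
print(options[scores.index(max(scores))])
```

Scoring answer options by conditional log-probability, rather than generating free text, keeps the comparison across interpretations direct; length normalization is a common extra step when options differ substantially in token count.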

Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics
Yuhan Zhang | Edward Gibson | Forrest Davis
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)

Language models (LMs) have been argued to overlap substantially with humans in grammaticality judgment tasks. But when humans systematically make errors in language processing, should we expect LMs to behave like cognitive models of language and mimic human behavior? We answer this question by investigating LMs’ more subtle judgments associated with “language illusions” – sentences that are vague in meaning, implausible, or ungrammatical, yet receive unexpectedly high acceptability judgments from humans. We looked at three illusions: the comparative illusion (e.g. “More people have been to Russia than I have”), the depth-charge illusion (e.g. “No head injury is too trivial to be ignored”), and the negative polarity item (NPI) illusion (e.g. “The hunter who no villager believed to be trustworthy will ever shoot a bear”). We found that the probabilities assigned by LMs were more likely to align with human judgments of being “tricked” by the NPI illusion, which hinges on a structural dependency, than by the comparative and depth-charge illusions, which require sophisticated semantic understanding. No single LM or metric yielded results entirely consistent with human behavior. Ultimately, we show that LMs are limited both in their construal as cognitive models of human language processing and in their capacity to recognize nuanced but critical information in complicated language materials.
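
The probability comparison at the core of this study can be sketched as follows, assuming gpt2 as the LM and a minimally different grammatical control (with “never”) invented for illustration. If the model assigns the illusory sentence a log-probability comparable to or above the control’s, it patterns with humans who are “tricked”.

```python
# Sketch of the LM-probability side of a language-illusion test.
# Assumptions: gpt2; the control sentence is an illustrative minimal
# pair, not an item from the paper's materials.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log P(sentence) under the LM, summed over its tokens."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return logprobs[torch.arange(len(targets)), targets].sum().item()

# NPI illusion (ungrammatical: "ever" lacks a licensor in its own clause)
illusion = ("The hunter who no villager believed to be trustworthy "
            "will ever shoot a bear.")
# Grammatical control differing only in the critical word
control = illusion.replace("will ever", "will never")
print("illusion:", sentence_logprob(illusion))
print("control: ", sentence_logprob(control))
```

In practice one would also compare per-token surprisal at the critical word and test many items, models, and metrics, since, as the abstract notes, no single LM or metric matched human behavior across all three illusions.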

2019

Syntactic dependencies correspond to word pairs with high mutual information
Richard Futrell | Peng Qian | Edward Gibson | Evelina Fedorenko | Idan Blank
Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)

2018

The Natural Stories Corpus
Richard Futrell | Edward Gibson | Harry J. Tily | Idan Blank | Anastasia Vishnevetsky | Steven Piantadosi | Evelina Fedorenko
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

Memory access during incremental sentence processing causes reading time latency
Cory Shain | Marten van Schijndel | Richard Futrell | Edward Gibson | William Schuler
Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)

Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results: strong positive effects emerge from isolated, constructed stimuli, while weak or even negative effects emerge from naturally occurring stimuli. Our study addresses these concerns by comparing several implementations of prominent sentence processing theories on an exploratory corpus and evaluating the most successful of these on a confirmatory corpus, both drawn from a new self-paced reading corpus of natural-sounding narratives constructed to contain an unusually high proportion of memory-intensive constructions. We show highly significant and complementary broad-coverage latency effects both for predictors based on the Dependency Locality Theory and for predictors based on a left-corner parsing model of sentence processing. Our results indicate that memory access during sentence processing does take time, but suggest that stimuli requiring many memory access events may be necessary in order to observe the effect.
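
A minimal sketch of the broad-coverage regression logic described above: fit a baseline reading-time model, add a memory-cost predictor (here a hypothetical DLT-style integration-cost column), and ask whether fit improves. The file and column names are illustrative assumptions, and plain OLS stands in for the paper's fuller statistical treatment.

```python
# Sketch: does a memory-cost predictor explain reading-time variance
# beyond baseline controls? File and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Expected per-word columns:
#   rt        - self-paced reading time (ms)
#   log_freq  - log word frequency (baseline control)
#   length    - word length in characters (baseline control)
#   dlt_cost  - DLT integration cost at this word (memory predictor)
df = pd.read_csv("spr_corpus.csv")

baseline = smf.ols("rt ~ log_freq + length", data=df).fit()
memory = smf.ols("rt ~ log_freq + length + dlt_cost", data=df).fit()

# Lower AIC for `memory` would indicate the memory predictor helps;
# a mixed-effects model with subject/item random effects (smf.mixedlm)
# would be the more standard psycholinguistic analysis.
print("baseline AIC:", baseline.aic)
print("memory AIC:  ", memory.aic)
print(memory.params["dlt_cost"], memory.pvalues["dlt_cost"])
```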

2015

Experiments with Generative Models for Dependency Tree Linearization
Richard Futrell | Edward Gibson
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Quantifying Word Order Freedom in Dependency Corpora
Richard Futrell | Kyle Mahowald | Edward Gibson
Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015)

2014

Language for Communication: Language as Rational Inference
Edward Gibson
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

Arguments and Modifiers from the Learner’s Perspective
Leon Bergen | Edward Gibson | Timothy J. O’Donnell
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2007

ILR-Based MT Comprehension Test with Multi-Level Questions
Douglas Jones | Martha Herzog | Hussny Ibrahim | Arvind Jairam | Wade Shen | Edward Gibson | Michael Emonts
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

2005

Representing Discourse Coherence: A Corpus-Based Study
Florian Wolf | Edward Gibson
Computational Linguistics, Volume 31, Number 2, June 2005

2004

Representing discourse coherence: A corpus-based analysis
Florian Wolf | Edward Gibson
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

Paragraph-, Word-, and Coherence-based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance
Florian Wolf | Edward Gibson
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

Paragraph-, Word- and Coherence-Based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance
Florian Wolf | Edward Gibson
Text Summarization Branches Out

1990

Memory Capacity and Sentence Processing
Edward Gibson
28th Annual Meeting of the Association for Computational Linguistics

A Computational Theory of Processing Overload and Garden-Path Effects
Edward Gibson
COLING 1990 Volume 3: Papers presented to the 13th International Conference on Computational Linguistics

1989

Parsing with Principles: Predicting a Phrasal Node Before Its Head Appears
Edward Gibson
Proceedings of the First International Workshop on Parsing Technologies