Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning

Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, Ryan Cotterell (Editors)


Anthology ID: 2023.conll-babylm
Month: December
Year: 2023
Address: Singapore
Venue: CoNLL
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/2023.conll-babylm
PDF: https://aclanthology.org/2023.conll-babylm.pdf

Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Alex Warstadt | Aaron Mueller | Leshem Choshen | Ethan Wilcox | Chengxu Zhuang | Juan Ciro | Rafael Mosquera | Bhargavi Paranjabe | Adina Williams | Tal Linzen | Ryan Cotterell

Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt | Aaron Mueller | Leshem Choshen | Ethan Wilcox | Chengxu Zhuang | Juan Ciro | Rafael Mosquera | Bhargavi Paranjabe | Adina Williams | Tal Linzen | Ryan Cotterell

GPT-wee: How Small Can a Small Language Model Really Get?
Bastian Bunzeck | Sina Zarrieß

Tiny Language Models Enriched with Multimodal Knowledge from Multiplex Networks
Clayton Fields | Osama Natouf | Andrew McMains | Catherine Henry | Casey Kennington

Mini Minds: Exploring Bebeshka and Zlata Baby Models
Irina Proskurina | Guillaume Metzler | Julien Velcin

Grammar induction pretraining for language modeling in low resource contexts
Xuanda Chen | Eva Portelance

ChapGTP, ILLC’s Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation
Jaap Jumelet | Michael Hanna | Marianne de Heer Kloots | Anna Langedijk | Charlotte Pouw | Oskar van der Wal

Penn & BGU BabyBERTa+ for Strict-Small BabyLM Challenge
Yahan Yang | Elior Sulem | Insup Lee | Dan Roth

Too Much Information: Keeping Training Simple for BabyLMs
Lukas Edman | Lisa Bylinina

Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior?
Aryaman Chobey | Oliver Smith | Anzi Wang | Grusha Prasad

CLIMB – Curriculum Learning for Infant-inspired Model Building
Richard Diehl Martinez | Hope McGovern | Zebulon Goriely | Christopher Davis | Andrew Caines | Paula Buttery | Lisa Beinborn

Acquiring Linguistic Knowledge from Multimodal Input
Theodor Amariucai | Alexander Scott Warstadt

Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures
Julius Steuer | Marius Mosbach | Dietrich Klakow

Baby’s CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models
Zheyu Zhang | Han Yang | Bolei Ma | David Rügamer | Ercong Nie

ToddlerBERTa: Exploiting BabyBERTa for Grammar Learning and Language Understanding
Ömer Veysel Çağatan

CogMemLM: Human-Like Memory Mechanisms Improve Performance and Cognitive Plausibility of LLMs
Lukas Thoma | Ivonne Weyers | Erion Çano | Stefan Schweter | Jutta L Mueller | Benjamin Roth

BabyStories: Can Reinforcement Learning Teach Baby Language Models to Write Better Stories?
Xingmeng Zhao | Tongnian Wang | Sheri Osborn | Anthony Rios

Byte-ranked Curriculum Learning for BabyLM Strict-small Shared Task 2023
Justin DeBenedetto

McGill BabyLM Shared Task Submission: The Effects of Data Formatting and Structural Biases
Ziling Cheng | Rahul Aralikatte | Ian Porada | Cesare Spinoso-Di Piano | Jackie CK Cheung

Mean BERTs make erratic language teachers: the effectiveness of latent bootstrapping in low-resource settings
David Samuel

Not all layers are equally as important: Every Layer Counts BERT
Lucas Georges Gabriel Charpentier | David Samuel

WhisBERT: Multimodal Text-Audio Language Modeling on 100M Words
Lukas Wolf | Klemen Kotar | Greta Tuckute | Eghbal Hosseini | Tamar I. Regev | Ethan Gotlieb Wilcox | Alexander Scott Warstadt

A surprisal oracle for active curriculum language modeling
Xudong Hong | Sharid Loáiciga | Asad Sayeed

Mmi01 at The BabyLM Challenge: Linguistically Motivated Curriculum Learning for Pretraining in Low-Resource Settings
Maggie Mi

Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty
Inar Timiryasov | Jean-Loup Tastet

BabyLM Challenge: Curriculum learning based on sentence complexity approximating language acquisition
Miyu Oba | Akari Haga | Akiyo Fukatsu | Yohei Oseki

Better Together: Jointly Using Masked Latent Semantic Modeling and Masked Language Modeling for Sample Efficient Pre-training
Gábor Berend

Lil-Bevo: Explorations of Strategies for Training Language Models in More Humanlike Ways
Venkata S Govindarajan | Juan Diego Rodriguez | Kaj Bostrom | Kyle Mahowald

Towards more Human-like Language Models based on Contextualizer Pretraining Strategy
Chenghao Xiao | G Thomas Hudson | Noura Al Moubayed

Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building
Omar Momen | David Arps | Laura Kallmeyer

Pre-training LLMs using human-like development data corpus
Khushi Bhardwaj | Raj Sanjay Shah | Sashank Varma

On the effect of curriculum learning with developmental data for grammar acquisition
Mattia Opper | J. Morrison | N. Siddharth

Optimizing GPT-2 Pretraining on BabyLM Corpus with Difficulty-based Sentence Reordering
Nasim Borazjanizadeh