A Measure-Theoretic Characterization of Tight Language Models

Li Du, Lucas Torroba Hennigen, Tiago Pimentel, Clara Meister, Jason Eisner, Ryan Cotterell


Abstract
Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can “leak” onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
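To make the notion of leakage concrete, here is a minimal sketch (an illustrative toy example, not code from the paper): consider an autoregressive model whose probability of emitting EOS at step t is 1/(t+1)². The probability of ever terminating is 1 − ∏ₜ (1 − 1/(t+1)²) = 1/2, so half of the probability mass leaks onto infinite sequences and the model is not tight. The function name below is hypothetical.

```python
# Toy non-tight language model (assumed example, not the paper's code):
# at step t, the conditional probability of EOS is 1/(t+1)**2.
# The limiting probability of halting is 1 - prod_{t>=1} (1 - 1/(t+1)**2) = 1/2,
# so half the mass "leaks" onto infinite sequences.

def mass_on_finite_strings(num_steps: int) -> float:
    """Probability that the toy model emits EOS within `num_steps` steps."""
    survive = 1.0  # probability of not yet having emitted EOS
    halted = 0.0   # accumulated probability of finite strings
    for t in range(1, num_steps + 1):
        p_eos = 1.0 / (t + 1) ** 2
        halted += survive * p_eos
        survive *= 1.0 - p_eos
    return halted

for n in (10, 100, 10_000):
    print(n, mass_on_finite_strings(n))  # converges to 0.5, not 1.0
```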
Anthology ID:
2023.acl-long.543
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
9744–9770
URL:
https://aclanthology.org/2023.acl-long.543
DOI:
10.18653/v1/2023.acl-long.543
Cite (ACL):
Li Du, Lucas Torroba Hennigen, Tiago Pimentel, Clara Meister, Jason Eisner, and Ryan Cotterell. 2023. A Measure-Theoretic Characterization of Tight Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9744–9770, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
A Measure-Theoretic Characterization of Tight Language Models (Du et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.543.pdf
Video:
https://aclanthology.org/2023.acl-long.543.mp4