To Burst or Not to Burst: Generating and Quantifying Improbable Text

Kuleen Sasse, Efsun Sarioglu Kayi, Samuel Barham, Edward Staley


Abstract
While large language models (LLMs) are extremely capable at text generation, their outputs remain distinguishable from human-authored text. We explore this separation across a range of text metrics, sampling techniques, and text domains, and across two popular LLMs, LLaMA and Vicuna. Along the way, we introduce a new metric, recoverability, to highlight differences between human and machine text, and we propose a new sampling technique, burst sampling, designed to close this gap. We find that LLaMA and Vicuna have distinct distributions under many of the metrics, and that this influences our results: recoverability separates real from fake text better than any other metric when using LLaMA, while with Vicuna, burst sampling produces text that is distributionally closer to real text than other sampling techniques.
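Burst sampling itself is defined in the full paper. As background for the "sampling techniques" the abstract compares against, here is a minimal sketch of nucleus (top-p) sampling, a standard decoding baseline, over a toy next-token distribution. This is an illustrative sketch of the common baseline technique, not the authors' proposed method:

```python
import random

def top_p_sample(probs, p=0.9, rng=None):
    """Nucleus (top-p) sampling: keep the smallest set of tokens whose
    cumulative probability reaches p, renormalize, and sample from it."""
    rng = rng or random.Random()
    # Sort token indices by descending probability.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    cumulative, nucleus = 0.0, []
    for i in order:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    # Renormalize over the nucleus and draw a sample.
    total = sum(probs[i] for i in nucleus)
    r = rng.random() * total
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]

# With p=0.6 and this toy distribution, only the two most likely
# tokens (indices 0 and 1) can ever be sampled.
rng = random.Random(0)
samples = [top_p_sample([0.5, 0.3, 0.15, 0.05], p=0.6, rng=rng) for _ in range(100)]
```

Truncating the tail this way is exactly what makes such samplers' output "too probable" relative to human text, which is the gap the paper's metrics and burst sampling target.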
Anthology ID:
2023.gem-1.24
Volume:
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Sebastian Gehrmann, Alex Wang, João Sedoc, Elizabeth Clark, Kaustubh Dhole, Khyathi Raghavi Chandu, Enrico Santus, Hooman Sedghamiz
Venues:
GEM | WS
Publisher:
Association for Computational Linguistics
Pages:
289–309
URL:
https://aclanthology.org/2023.gem-1.24
Cite (ACL):
Kuleen Sasse, Efsun Sarioglu Kayi, Samuel Barham, and Edward Staley. 2023. To Burst or Not to Burst: Generating and Quantifying Improbable Text. In Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 289–309, Singapore. Association for Computational Linguistics.
Cite (Informal):
To Burst or Not to Burst: Generating and Quantifying Improbable Text (Sasse et al., GEM-WS 2023)
PDF:
https://aclanthology.org/2023.gem-1.24.pdf