Contrastive Deterministic Autoencoders For Language Modeling

Amur Ghose, Pascal Poupart


Abstract
Variational autoencoders (VAEs) are a popular family of generative models with wide applicability. Training VAEs, especially for text, often runs into the issue of posterior collapse, resulting in loss of representation quality. Deterministic autoencoders avoid this issue and have been explored particularly well for images. It is, however, unclear how to best adapt a deterministic model designed for images into a successful one for text. We show that with suitable adaptations, we can significantly improve on batch-normed VAEs (BN-VAEs), a strong benchmark for language modeling with VAEs, by replacing them with analogous deterministic models. We employ techniques from contrastive learning to control the entropy of the aggregate posterior of these models to make it Gaussian. The resulting models skip reparametrization steps in VAE modeling and avoid posterior collapse, while outperforming a broad range of VAE models on text generation and downstream tasks from representations. These improvements are shown to be consistent across both LSTM and Transformer-based VAE architectures. Appropriate comparisons to BERT/GPT-2 based results are also included. We also qualitatively examine the latent space through interpolation to supplement the quantitative aspects of the model.
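To make the core idea in the abstract concrete, here is a minimal NumPy sketch of a deterministic autoencoder whose batch of latent codes is pushed toward a standard Gaussian. This is an illustrative assumption, not the paper's actual method: the paper controls the entropy of the aggregate posterior with contrastive learning, whereas this toy stand-in uses a simple first- and second-moment penalty. All names (`encode`, `gaussian_moment_penalty`, the toy data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    # Deterministic encoder: a single linear map, no sampling and
    # hence no reparametrization step (unlike a VAE encoder).
    return x @ W_enc

def decode(z, W_dec):
    return z @ W_dec

def gaussian_moment_penalty(Z):
    # Pushes the batch of latents (an empirical proxy for the
    # "aggregate posterior") toward N(0, I) by matching its first two
    # moments. A hypothetical stand-in for the paper's contrastive
    # entropy control, used here only to illustrate the objective.
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False)
    return float(np.sum(mu ** 2) + np.sum((cov - np.eye(Z.shape[1])) ** 2))

# Toy usage: random vectors standing in for sentence representations.
X = rng.normal(size=(64, 16))
W_enc = rng.normal(size=(16, 8)) * 0.1
W_dec = rng.normal(size=(8, 16)) * 0.1

Z = encode(X, W_enc)
recon = decode(Z, W_dec)
# Total loss: reconstruction + Gaussian shaping of the latent batch.
loss = np.mean((recon - X) ** 2) + 0.1 * gaussian_moment_penalty(Z)
```

Because the encoder is deterministic, there is no KL term that can drive the latents to be ignored, which is the mechanism behind posterior collapse in VAEs; the Gaussian shaping of the latent batch is instead imposed directly as a regularizer.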
Anthology ID: 2023.findings-emnlp.567
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8458–8476
URL: https://aclanthology.org/2023.findings-emnlp.567
DOI: 10.18653/v1/2023.findings-emnlp.567
Cite (ACL): Amur Ghose and Pascal Poupart. 2023. Contrastive Deterministic Autoencoders For Language Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8458–8476, Singapore. Association for Computational Linguistics.
Cite (Informal): Contrastive Deterministic Autoencoders For Language Modeling (Ghose & Poupart, Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.567.pdf