Large-scale similarity search with Optimal Transport

Cléa Laouar, Yuki Takezawa, Makoto Yamada


Abstract
Wasserstein distance is a powerful tool for comparing probability distributions and is widely used for document classification and retrieval tasks in NLP. In particular, it is known as the word mover’s distance (WMD) in the NLP community. WMD exhibits excellent performance for various NLP tasks; however, one of its limitations is its high computational cost, which makes it impractical for large-scale distribution comparisons. In this study, we propose a simple and effective nearest neighbor search based on the Wasserstein distance. Specifically, we employ an L1 embedding method based on the tree-based Wasserstein approximation and subsequently use a nearest neighbor search to efficiently find the k-nearest neighbors. Through benchmark experiments, we demonstrate that the proposed approximation achieves performance comparable to the vanilla Wasserstein distance while being computable three orders of magnitude faster.
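The key idea in the abstract, that the tree-based Wasserstein distance equals an L1 distance between fixed embeddings, can be sketched as follows. This is a minimal illustration with a hypothetical toy tree and made-up edge weights, not the paper's actual implementation: each distribution over leaves is mapped to a vector whose coordinates are edge weight times total mass in the subtree below that edge, after which the tree-Wasserstein distance (and hence the k-NN search) reduces to plain L1 computations.

```python
import numpy as np

# Hypothetical toy tree over 4 leaves (e.g., vocabulary words).
# parent[i] is the parent of node i; node 0 is the root.
parent = np.array([-1, 0, 0, 1, 1, 2, 2])
edge_weight = np.array([0.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5])  # edge to parent
n_nodes = len(parent)
leaves = [3, 4, 5, 6]

# B[v, u] = 1 if node u lies in the subtree rooted at v.
B = np.zeros((n_nodes, n_nodes))
for u in range(n_nodes):
    v = u
    while v != -1:
        B[v, u] = 1.0
        v = parent[v]

def embed(leaf_mass):
    """Tree (L1) embedding: coordinate v is edge_weight[v] times
    the total probability mass in the subtree under node v."""
    mass = np.zeros(n_nodes)
    mass[leaves] = leaf_mass
    return edge_weight * (B @ mass)

# Two toy distributions over the 4 leaves.
mu = np.array([0.5, 0.5, 0.0, 0.0])
nu = np.array([0.0, 0.0, 0.5, 0.5])

# Tree-Wasserstein distance = L1 distance between the embeddings.
tw = np.abs(embed(mu) - embed(nu)).sum()  # here 3.0 on this toy tree

# k-NN search in the embedded space (brute force for illustration;
# any L1 nearest-neighbor index could be substituted).
rng = np.random.default_rng(0)
docs = rng.dirichlet(np.ones(4), size=100)
doc_emb = np.stack([embed(d) for d in docs])
dists = np.abs(doc_emb - embed(mu)).sum(axis=1)
k_nearest = np.argsort(dists)[:5]
```

Because the embedding is computed once per document, each query costs only an L1 distance per candidate rather than solving an optimal-transport problem, which is the source of the speedup claimed in the abstract.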
Anthology ID:
2023.emnlp-main.730
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11920–11930
URL:
https://aclanthology.org/2023.emnlp-main.730
DOI:
10.18653/v1/2023.emnlp-main.730
Cite (ACL):
Cléa Laouar, Yuki Takezawa, and Makoto Yamada. 2023. Large-scale similarity search with Optimal Transport. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11920–11930, Singapore. Association for Computational Linguistics.
Cite (Informal):
Large-scale similarity search with Optimal Transport (Laouar et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.730.pdf
Video:
https://aclanthology.org/2023.emnlp-main.730.mp4