An Encoder Attribution Analysis for Dense Passage Retriever in Open-Domain Question Answering

Minghan Li, Xueguang Ma, Jimmy Lin


Abstract
The bi-encoder design of the dense passage retriever (DPR) is a key factor in its success in open-domain question answering (QA), yet it is unclear how DPR’s question encoder and passage encoder individually contribute to overall performance, which we refer to as the encoder attribution problem. The problem is important as it helps us identify the factors that affect individual encoders so as to further improve overall performance. In this paper, we formulate our analysis under a probabilistic framework called encoder marginalization, where we quantify the contribution of a single encoder by marginalizing the other variables. First, we find that the passage encoder contributes more than the question encoder to in-domain retrieval accuracy. Second, we demonstrate how to find the factors affecting each encoder: we train DPR with different amounts of data and use encoder marginalization to analyze the results. We find that positive passage overlap and corpus coverage of the training data have large impacts on the passage encoder, while the question encoder is mainly affected by training sample complexity under this setting. Based on this framework, we can devise data-efficient training regimes: for example, we manage to train a passage encoder on SQuAD using 60% less training data without loss of accuracy.
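To make the marginalization idea concrete, the following is a minimal sketch, not the authors' implementation, of how one might score a single encoder by averaging a retrieval metric over the other encoder. The checkpoint collections and the `retrieval_accuracy` function are hypothetical placeholders, not part of the DPR codebase.

```python
# Hypothetical sketch of encoder marginalization for DPR.
# Assumption: several independently trained DPR checkpoints are available,
# so an encoder's contribution can be estimated by averaging a retrieval
# metric over the *other* encoder (the marginalized variable).

from statistics import mean


def retrieval_accuracy(question_encoder, passage_encoder, eval_set):
    """Placeholder: index passages with passage_encoder, retrieve with
    question_encoder, and return top-k retrieval accuracy on eval_set."""
    raise NotImplementedError


def score_question_encoder(q_enc, passage_encoders, eval_set):
    """Contribution of one question encoder, marginalizing passage encoders."""
    return mean(
        retrieval_accuracy(q_enc, p_enc, eval_set) for p_enc in passage_encoders
    )


def score_passage_encoder(p_enc, question_encoders, eval_set):
    """Contribution of one passage encoder, marginalizing question encoders."""
    return mean(
        retrieval_accuracy(q_enc, p_enc, eval_set) for q_enc in question_encoders
    )
```

Comparing the marginalized scores of encoders taken from DPR models trained on different datasets (or different amounts of data) then gives a way to attribute performance differences to the question side or the passage side.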
Anthology ID:
2022.trustnlp-1.1
Volume:
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
Month:
July
Year:
2022
Address:
Seattle, U.S.A.
Editors:
Apurv Verma, Yada Pruksachatkun, Kai-Wei Chang, Aram Galstyan, Jwala Dhamala, Yang Trista Cao
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://aclanthology.org/2022.trustnlp-1.1
DOI:
10.18653/v1/2022.trustnlp-1.1
Cite (ACL):
Minghan Li, Xueguang Ma, and Jimmy Lin. 2022. An Encoder Attribution Analysis for Dense Passage Retriever in Open-Domain Question Answering. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 1–11, Seattle, U.S.A. Association for Computational Linguistics.
Cite (Informal):
An Encoder Attribution Analysis for Dense Passage Retriever in Open-Domain Question Answering (Li et al., TrustNLP 2022)
PDF:
https://aclanthology.org/2022.trustnlp-1.1.pdf
Video:
https://aclanthology.org/2022.trustnlp-1.1.mp4
Data
Natural Questions, SQuAD, TriviaQA, WebQuestions