Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering

Hai Ye, Qizhe Xie, Hwee Tou Ng


Abstract
In this work, we study multi-source test-time model adaptation from user feedback, where K distinct source models are available for adaptation. To enable efficient adaptation, we cast the problem as a stochastic decision-making process, aiming to determine the best adapted model after adaptation. We discuss two frameworks: multi-armed bandit learning and multi-armed dueling bandits. Compared to multi-armed bandit learning, the dueling framework allows pairwise collaboration among the K models, which we solve with a novel method named Co-UCB proposed in this work. Experiments on six datasets of extractive question answering (QA) show that the dueling framework using Co-UCB is more effective than other strong baselines for the studied problem.
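To make the stochastic decision-making framing concrete, the sketch below simulates standard UCB1 bandit learning over K Bernoulli arms (each arm standing in for one candidate model, with reward standing in for binary user feedback). This is a minimal illustration of the general bandit framework the abstract contrasts against, not the paper's Co-UCB dueling-bandit method; all function names and parameters here are illustrative assumptions.

```python
import math
import random

def ucb1_index(total_reward, pulls, t, c=2.0):
    """UCB1 score for one arm: empirical mean plus an exploration bonus.
    Unpulled arms get +inf so each arm is tried at least once."""
    if pulls == 0:
        return float("inf")
    return total_reward / pulls + math.sqrt(c * math.log(t) / pulls)

def run_ucb(arm_means, horizon, seed=0):
    """Simulate UCB1 on Bernoulli arms; return per-arm pull counts."""
    rng = random.Random(seed)
    k = len(arm_means)
    rewards = [0.0] * k  # cumulative reward per arm
    pulls = [0] * k      # times each arm was selected
    for t in range(1, horizon + 1):
        # Select the arm with the highest upper confidence bound.
        arm = max(range(k), key=lambda a: ucb1_index(rewards[a], pulls[a], t))
        # Simulated binary user feedback for the chosen arm.
        r = 1.0 if rng.random() < arm_means[arm] else 0.0
        rewards[arm] += r
        pulls[arm] += 1
    return pulls

pulls = run_ucb([0.2, 0.5, 0.8], horizon=2000)
```

Over time, the arm with the highest true mean (here 0.8) accumulates the most pulls, which mirrors the goal of identifying the best adapted model. The dueling-bandit variant studied in the paper instead selects a *pair* of arms per round and learns from their relative comparison.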
Anthology ID:
2023.acl-long.537
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
9647–9660
URL:
https://aclanthology.org/2023.acl-long.537
DOI:
10.18653/v1/2023.acl-long.537
Cite (ACL):
Hai Ye, Qizhe Xie, and Hwee Tou Ng. 2023. Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9647–9660, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering (Ye et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.537.pdf
Video:
https://aclanthology.org/2023.acl-long.537.mp4