Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering

Hai Ye, Qizhe Xie, Hwee Tou Ng

Main Track: Question Answering (Oral Paper)

Session 1: Question Answering (Oral)
Conference Room: Metropolitan West
Conference Time: July 10, 11:00-12:30 EDT (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Keywords: reading comprehension
TLDR: We cast multi-source test-time adaptation from user feedback as a dueling-bandit problem over $K$ source models and propose Co-UCB, which enables pairwise collaboration among the models and outperforms strong baselines on six extractive QA datasets.
Abstract: In this work, we study multi-source test-time model adaptation from user feedback, where $K$ distinct models are available for adaptation. To enable efficient adaptation, we cast the problem as a stochastic decision-making process, aiming to determine the best adapted model after adaptation. We discuss two frameworks: multi-armed bandit learning and multi-armed dueling bandits. Compared to multi-armed bandit learning, the dueling framework allows pairwise collaboration among the $K$ models, which we solve with a novel method named Co-UCB. Experiments on six extractive question answering (QA) datasets show that the dueling framework with Co-UCB is more effective than other strong baselines for our studied problem.
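The abstract frames model selection as a dueling bandit over $K$ source models. As a rough, hypothetical illustration of that framing (not the authors' Co-UCB algorithm, whose pairwise collaboration and adaptation steps are defined in the paper), the sketch below runs a generic UCB-style loop that repeatedly "duels" the two highest-scoring arms on simulated user feedback; all names and the feedback model are assumptions made for the example:

```python
import math
import random

def select_pair(counts, rewards, t, c=1.0):
    """Rank arms by a UCB score (empirical mean + exploration bonus)
    and return the two highest-scoring arms to duel this round."""
    def ucb(i):
        if counts[i] == 0:
            return float("inf")  # force each arm to be tried at least once
        return rewards[i] / counts[i] + c * math.sqrt(math.log(t) / counts[i])
    ranked = sorted(range(len(counts)), key=ucb, reverse=True)
    return ranked[0], ranked[1]

K = 4                  # number of distinct source models (arms); hypothetical
counts = [0] * K       # how often each model has been selected
rewards = [0.0] * K    # cumulative (simulated) user feedback per model

for t in range(1, 201):
    i, j = select_pair(counts, rewards, t)
    # Hypothetical preference feedback: the user favors one of the two
    # answers; a real system would compare the two models' QA outputs.
    winner = random.choice((i, j))
    for arm in (i, j):
        counts[arm] += 1
        rewards[arm] += 1.0 if arm == winner else 0.0

best = max(range(K), key=lambda a: rewards[a] / max(counts[a], 1))
print(f"identified best model: {best}")
```

In the paper's setting, the per-round feedback would come from users judging extracted answer spans, and the two selected models would also be adapted on that feedback (the pairwise collaboration that distinguishes Co-UCB); this sketch only illustrates the selection and best-model identification loop of the dueling framework.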