One Cannot Stand for Everyone! Leveraging Multiple User Simulators to Train Task-oriented Dialogue Systems
Yajiao LIU, Xin Jiang, Yichun Yin, Yasheng Wang, Fei Mi, Qun Liu, Xiang Wan, Benyou Wang
Main: Dialogue and Interactive Systems Main-poster Paper
Session 1: Dialogue and Interactive Systems (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Keywords:
task-oriented
Languages:
chinese
TLDR:
User simulators are agents designed to imitate human users; recent advances have found that Task-oriented Dialogue (ToD) systems optimized toward a user simulator could better satisfy the need of human users.
However, this might result in a sub-optimal ToD system if it is tailored to only one \textit{ad hoc} user simulator, since human users can behave differently.
Abstract:
User simulators are agents designed to imitate human users; recent advances have found that Task-oriented Dialogue (ToD) systems optimized toward a user simulator could better satisfy the need of human users.
However, this might result in a sub-optimal ToD system if it is tailored to only one \textit{ad hoc} user simulator, since human users can behave differently.
In this paper, we propose a framework called MUST to optimize ToD systems via leveraging Multiple User SimulaTors.
The main challenges of implementing MUST lie in 1) how to adaptively determine which user simulator should interact with the ToD system at each optimization step, since the ToD system might be over-fitted to some specific user simulators while simultaneously under-fitted to others; and 2) how to avoid catastrophic forgetting of the adaption to a simulator that is not selected for several consecutive optimization steps.
To tackle these challenges, we formulate MUST as a multi-armed bandit (MAB) problem and provide a method called MUST$_{\mathrm{adaptive}}$ that balances
\textit{i}) the \textit{boosting adaption} for adaptive interactions between different user simulators and the ToD system and
\textit{ii}) the \textit{uniform adaption} to avoid the catastrophic forgetting issue.
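The abstract does not spell out the selection rule, but the balance it describes can be sketched as a mixture of two sampling distributions over simulators. The following minimal Python sketch is an illustrative assumption, not the paper's actual algorithm: `choose_simulator`, the `failure_rates` signal, and the `mix` parameter are all hypothetical names, with the weighted branch standing in for the \textit{boosting adaption} (favoring simulators the ToD system is under-fitted to) and the uniform branch standing in for the \textit{uniform adaption} (guarding against forgetting).

```python
import random

def choose_simulator(failure_rates, mix=0.5):
    """Pick the index of the user simulator to interact with next.

    Hypothetical sketch: with probability `mix`, sample uniformly over
    all simulators (uniform adaption, so no simulator is starved for
    many consecutive steps); otherwise, sample in proportion to the ToD
    system's current failure rate against each simulator (boosting
    adaption, so under-fitted simulators are selected more often).
    """
    n = len(failure_rates)
    if random.random() < mix:
        return random.randrange(n)  # uniform adaption branch
    total = sum(failure_rates)
    if total == 0:
        return random.randrange(n)  # no signal yet; fall back to uniform
    # Weighted sampling proportional to failure rates.
    r = random.uniform(0, total)
    acc = 0.0
    for i, f in enumerate(failure_rates):
        acc += f
        if r < acc:
            return i
    return n - 1  # numerical-edge fallback
```

In a training loop, one would re-estimate `failure_rates` periodically from dialogues with each simulator and call `choose_simulator` at every optimization step; `mix` trades off the two adaptions.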
With both automatic and human evaluations, our experimental results on MultiWOZ show that the dialogue system trained by MUST achieves better performance than those trained by a single user simulator, and that it generalizes better when tested with unseen user simulators.