[Industry] Accurate Training of Web-based Question Answering Systems with Feedback from Ranked Users
Liang Wang, Ivano Lauriola, Alessandro Moschitti
Industry Paper
Session 6: Industry (Oral)
Conference Room: Pier 4&5
Conference Time: July 12, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 12, Session 6 (13:00-14:30 UTC)
Abstract:
Recent work has shown that large-scale annotated datasets are essential for training state-of-the-art Question Answering (QA) models.
Unfortunately, creating this data is expensive and requires a huge amount of annotation work. An alternative and cheaper source of supervision is given by feedback data collected from deployed QA systems.
This data can be collected from tens of millions of users at no additional cost for real-world QA services, e.g., Alexa and Google Home. The main drawback is the noise affecting feedback on individual examples.
Recent literature on QA systems has shown the benefit of training models even with noisy feedback. However, these studies have multiple limitations: (i) they used uniform random noise to simulate feedback responses, which is typically an unrealistic approximation, as noise follows specific patterns that depend on target examples and users; and (ii) they do not show how to aggregate feedback to improve training signals.
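To make limitation (i) concrete, the snippet below contrasts uniform random label noise with user-dependent noise. This is an illustrative sketch for intuition only: the function names, error rates, and per-user noise model are our assumptions, not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_uniform(labels, p=0.2):
    """Uniform-noise simulation, as in prior work: every feedback label
    is flipped with the same probability p, regardless of which user or
    example produced it."""
    flips = rng.random(len(labels)) < p
    return np.where(flips, 1 - labels, labels)

def flip_user_dependent(labels, user_error_rates):
    """A more realistic simulation (our illustrative assumption): each
    user has an individual error rate, so noise follows user-specific
    patterns instead of being uniform across the dataset."""
    flips = rng.random(len(labels)) < user_error_rates
    return np.where(flips, 1 - labels, labels)

# Toy example: six binary feedback labels, each from a different user.
labels = np.array([1, 0, 1, 1, 0, 1])
print(flip_uniform(labels, p=0.2))
print(flip_user_dependent(labels, np.array([0.05, 0.5, 0.1, 0.4, 0.05, 0.3])))
```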
In this paper, we first collect a large-scale (16M) QA dataset with real feedback sampled from the QA traffic of a popular Virtual Assistant.
Second, we use this data to develop two strategies for filtering unreliable users and thus de-noising feedback: (i) ranking users with an automatic classifier, and (ii) aggregating feedback over similar instances and comparing users with one another. Finally, we train QA models on our filtered feedback data, showing a significant improvement over the state of the art.
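To give a concrete sense of the two de-noising strategies, here is a minimal Python sketch. Everything in it is an assumption on our part: the record fields (`user_id`, `question`, `answer`, `label`), the per-user reliability scores (which the paper obtains from an automatic classifier, not shown here), and the reduction of "similar instances" to exact (question, answer) keys are illustrative simplifications of the paper's actual methods.

```python
from collections import defaultdict

def filter_by_user_ranking(feedback, user_scores, threshold=0.5):
    """Strategy (i), sketched: keep feedback only from users whose
    reliability score clears a threshold. `user_scores` maps
    user_id -> score in [0, 1]; in the paper these scores come from an
    automatic classifier, which is not reproduced here."""
    return [f for f in feedback
            if user_scores.get(f["user_id"], 0.0) >= threshold]

def filter_by_aggregation(feedback, min_agreement=0.7):
    """Strategy (ii), sketched: aggregate feedback over repeated
    question-answer instances via majority vote, then keep only the
    labels that agree with a sufficiently strong majority. Similarity
    between instances is reduced to exact-key grouping here."""
    groups = defaultdict(list)
    for f in feedback:
        groups[(f["question"], f["answer"])].append(f)
    kept = []
    for group in groups.values():
        pos = sum(f["label"] for f in group) / len(group)
        majority = 1 if pos >= 0.5 else 0
        agreement = max(pos, 1 - pos)
        if agreement >= min_agreement:
            kept.extend(f for f in group if f["label"] == majority)
    return kept
```

Majority voting over redundant noisy labels is the standard aggregation baseline; the agreement threshold trades the number of retained training examples for label precision.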