Don't Forget Your ABC's: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems
Sarah E. Finch, James D. Finch, Jinho D. Choi
Main: Dialogue and Interactive Systems (Poster Paper)
Poster Session 2: Dialogue and Interactive Systems (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 10, 14:00-15:30 EDT (America/Toronto)
Global Time: July 10, Poster Session 2 (18:00-19:30 UTC)
Keywords:
evaluation and metrics
TLDR:
A behavior-based human evaluation method that estimates the rates of fine-grained dialogue system behaviors, shown across four state-of-the-art open-domain chatbots to be more suitable for dimensional evaluation than Likert-style or comparative approaches.
Abstract:
Despite tremendous advancements in dialogue systems, stable evaluation still requires human judgments, which produce notoriously high-variance metrics due to their inherent subjectivity.
Moreover, methods and labels in dialogue evaluation are not fully standardized, especially for open-domain chats, and little work has compared or assessed the validity of existing approaches.
Inconsistent evaluation can misrepresent the performance of a dialogue system, which becomes a major hurdle to improving it.
Thus, a dimensional evaluation of chat-oriented open-domain dialogue systems that reliably measures several aspects of dialogue capabilities is needed.
This paper presents a novel human evaluation method to estimate the rates of many dialogue system behaviors.
Our method is used to evaluate four state-of-the-art open-domain dialogue systems and is compared with existing approaches.
The analysis demonstrates that our behavior-based method is more suitable than alternative Likert-style or comparative approaches for dimensional evaluation of these systems.
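Example (illustrative):
As a minimal sketch of the behavior-rate estimation idea described in the abstract, the Python snippet below computes per-behavior rates from binary turn-level annotations: each annotated system turn marks specific behaviors as present or absent, and a behavior's rate is the fraction of turns exhibiting it. The behavior labels, data layout, and function name here are hypothetical assumptions for illustration, not the paper's released label set or annotation format.

    from collections import defaultdict

    # Hypothetical annotations: each system turn maps behavior labels to
    # binary judgments (1 = behavior observed in the turn, 0 = not observed).
    annotated_turns = [
        {"empathetic": 1, "self_contradiction": 0, "ignores_partner": 0},
        {"empathetic": 0, "self_contradiction": 1, "ignores_partner": 0},
        {"empathetic": 1, "self_contradiction": 0, "ignores_partner": 1},
    ]

    def behavior_rates(turns):
        """Return the proportion of turns on which each behavior was marked present."""
        counts = defaultdict(int)
        for turn in turns:
            for label, present in turn.items():
                counts[label] += present
        return {label: count / len(turns) for label, count in counts.items()}

    print(behavior_rates(annotated_turns))
    # -> roughly {'empathetic': 0.67, 'self_contradiction': 0.33, 'ignores_partner': 0.33}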