The Dangers of trusting Stochastic Parrots: Faithfulness and Trust in Open-domain Conversational Question Answering
Sabrina Chiesurin, Dimitris Dimakopoulos, Marco Antonio Sobrevilla Cabezudo, Arash Eshghi, Ioannis Papaioannou, Verena Rieser, Ioannis Konstas
Findings Paper: Question Answering
Session 1: Question Answering (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Spotlight Session: Metropolitan East
Conference Room: Metropolitan East
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
conversational QA
TLDR:
Task-based dialog systems that exhibit advanced linguistic behaviors such as lexical alignment (repeating what the user said) are preferred and trusted more, even when their responses are unfaithful to the retrieved knowledge.
Abstract:
Large language models are known to produce output that sounds fluent and convincing but is often wrong, e.g., "unfaithful" with respect to a rationale retrieved from a knowledge base. In this paper, we show that task-based systems which exhibit certain advanced linguistic dialog behaviors, such as lexical alignment (repeating what the user said), are in fact preferred and trusted more, whereas other phenomena, such as pronouns and ellipsis, are dispreferred. We use open-domain question answering systems as our test-bed for task-based dialog generation and compare several open- and closed-book models. Our results highlight the danger of systems that appear trustworthy by parroting user input while providing an unfaithful response.