DREAM: Improving Situational QA by First Elaborating the Situation
Yuling Gu, Bhavana Dalvi Mishra, Peter Clark
1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023) Long Paper
Abstract:
When people answer questions about a specific situation, e.g., "I cheated on my mid-term exam last week. Was that wrong?", cognitive science suggests that they form a mental picture of that situation before answering. While we do not know how language models (LMs) answer such questions, we conjecture that they may answer more accurately if they are also provided with additional details about the question situation, elaborating the "scene". To test this conjecture, we train a new model, DREAM, to answer questions that elaborate the scenes that situated questions are about, and then provide those elaborations as additional context to a question-answering (QA) model. We find that DREAM is able to create better scene elaborations (more accurate, useful, and consistent) than a representative state-of-the-art, zero-shot model (Macaw). We also find that using the scene elaborations as additional context improves the answer accuracy of a downstream QA system, including beyond that obtainable by simply further fine-tuning the QA system on DREAM's training data. These results suggest that adding focused elaborations about a situation can improve a system's reasoning about it, and may serve as an effective way of injecting new scenario-based knowledge into QA models.
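The abstract describes a two-stage pipeline: a scene-elaboration model first expands the situation, and the resulting elaboration is then supplied as extra context to a downstream QA model. The sketch below illustrates that flow under stated assumptions; it is not the authors' code, and the checkpoint names ("t5-large") and prompt formats are placeholders, since the abstract does not specify them.

```python
# Minimal sketch of the elaborate-then-answer pipeline described in the abstract.
# Stage 1: a scene-elaboration model expands the situation.
# Stage 2: the elaboration is prepended as additional context for a seq2seq QA model.
# Checkpoint names and prompt formats here are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def generate(model_name: str, prompt: str, max_new_tokens: int = 128) -> str:
    """Run a seq2seq model on a text prompt and return the decoded output."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


situation = "I cheated on my mid-term exam last week."
question = "Was that wrong?"

# Stage 1: elaborate the scene (placeholder checkpoint and prompt format).
elaboration = generate("t5-large", f"Elaborate the situation: {situation}")

# Stage 2: answer the question with the scene elaboration as added context.
answer = generate(
    "t5-large",
    f"context: {elaboration} situation: {situation} question: {question}",
)
print(answer)
```

The design choice the paper tests is exactly this concatenation step: the same QA model sees the question either with or without the generated elaboration, and the abstract reports that the added context improves answer accuracy.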