Probing Physical Reasoning with Counter-Commonsense Context
Kazushi Kondo, Saku Sugawara, Akiko Aizawa
Main: Question Answering Main-poster Paper
Poster Session 6: Question Answering (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 12, Poster Session 6 (13:00-14:30 UTC)
Keywords:
commonsense qa
TLDR:
In this study, we create a CConS (Counter-commonsense Contextual Size comparison) dataset to investigate how physical commonsense affects the contextualized size comparison task; the proposed dataset consists of both contexts that fit physical commonsense and those that do not.
This dataset tests the ability of language models to predict the size relationship between objects under various contexts generated from our curated noun list and templates.
Abstract:
In this study, we create a CConS (Counter-commonsense Contextual Size comparison) dataset to investigate how physical commonsense affects the contextualized size comparison task; the proposed dataset consists of both contexts that fit physical commonsense and those that do not.
This dataset tests the ability of language models to predict the size relationship between objects under various contexts generated from our curated noun list and templates.
We measure the ability of several masked language models and encoder-decoder models on this task.
The results show that while large language models can use prepositions such as "in" and "into" in the provided context to infer size relationships, they fail to use verbs and thus make incorrect judgments, misled by their prior physical commonsense.
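
Below is a minimal sketch of the kind of masked-LM size-comparison probe the abstract describes, contrasting a commonsense context with a counter-commonsense one. It is not the authors' released code: the model name (bert-base-uncased), the template wording, and the object pair are illustrative assumptions rather than items from the CConS dataset.

```python
# Minimal sketch of a masked-LM size-comparison probe in the spirit of CConS.
# NOT the authors' code: model name, template, and object pair are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any masked LM could be probed this way
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# One context that fits physical commonsense and one that contradicts it,
# built from the same object pair and the preposition "into".
contexts = {
    "commonsense": "The ball was put into the box.",
    "counter-commonsense": "The box was put into the ball.",
}
# Probe template: the model fills in the size relation between the objects.
template = "{context} So the box is {mask} than the ball."
candidates = ["bigger", "smaller"]  # assumed to be single tokens in the model's vocabulary


def candidate_probs(sentence: str) -> dict:
    """Return the masked-position probability of each candidate adjective."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1).squeeze(0)
    return {w: probs[tokenizer.convert_tokens_to_ids(w)].item() for w in candidates}


for label, context in contexts.items():
    sentence = template.format(context=context, mask=tokenizer.mask_token)
    scores = candidate_probs(sentence)
    prediction = max(scores, key=scores.get)
    print(f"[{label}] {sentence}")
    print(f"  P(bigger)={scores['bigger']:.3f}  P(smaller)={scores['smaller']:.3f}  -> {prediction}")
```

A probe of this form simply checks whether the model's predicted size relation flips when the context flips; the encoder-decoder models mentioned in the abstract would need an analogous span-infilling setup.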