Grounded physical language understanding with probabilistic programs and simulated worlds
Cedegao Zhang, Lionel Wong, Gabriel Grand, Josh Tenenbaum
1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023) Long Paper
TLDR:
Human language richly invokes our intuitive physical knowledge. We talk about physical objects, scenes, properties, and events; and we can make predictions and draw inferences about physical worlds described entirely in language. Understanding this everyday language requires inherently probabilistic reasoning over possible physical worlds.
Abstract:
Human language richly invokes our intuitive physical knowledge. We talk about physical objects, scenes, properties, and events; and we can make predictions and draw inferences about physical worlds described entirely in language. Understanding this everyday language requires inherently probabilistic reasoning, both over the possible physical worlds invoked in language and over the uncertainty inherent to those physical worlds. In this paper, we propose PiLoT, a neurosymbolic generative model that translates language into probabilistic programs grounded in a physics engine. Our model integrates a large language model to robustly parse language into program expressions and uses a probabilistic physics engine to support inferences over scenes described in language. We construct a linguistic reasoning benchmark based on prior psychophysics experiments that requires reasoning about physical outcomes from linguistic scene descriptions. We show that PiLoT predicts human judgments well and outperforms baseline large language models across this battery of tasks.