A Language-First Approach for Procedure Planning
Jiateng Liu, Sha Li, Zhenhailong Wang, Manling Li, Heng Ji
Findings Paper: Language Grounding to Vision, Robotics, and Beyond
Session 4: Language Grounding to Vision, Robotics, and Beyond (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords:
cross-modal application
Abstract:
Procedure planning, or the ability to predict a series of steps that can achieve a given goal conditioned on the current observation, is critical for building intelligent embodied agents that can assist users in everyday tasks.
Encouraged by the recent success of language models (LMs) for zero-shot and few-shot planning, we hypothesize that LMs may be equipped with stronger priors for planning compared to their visual counterparts.
To this end, we propose a language-first procedure planning framework with a modularized design: we first align the current and goal observations with corresponding steps and then use a pre-trained LM to predict the intermediate steps.
Under this framework, we find that using an image captioning model for alignment can already match state-of-the-art performance, and by designing a double retrieval model conditioned jointly on the current and goal observations, we achieve large improvements (a 19.2%-98.9% relatively higher success rate than the state of the art) on both the COIN and CrossTask benchmarks. Our work verifies the planning ability of LMs and demonstrates how LMs can serve as a powerful "reasoning engine" even when the input is provided in another modality.
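The sketch below illustrates the two-stage, language-first pipeline the abstract describes: first align the current and goal observations with textual step descriptions, then let a pre-trained LM predict the intermediate steps. It is a minimal illustration, not the authors' released implementation; the function names, prompt, and stubbed model calls are assumptions for exposition.

```python
# Illustrative sketch of a language-first procedure planning pipeline:
# (1) alignment: ground each visual observation into a step description,
# (2) planning: prompt a pre-trained LM to predict the intermediate steps.
# All model calls below are stubs; names and prompts are hypothetical.

from typing import List


def caption_observation(observation: str) -> str:
    """Alignment module: map an observation to a step description.
    In practice this would be an image captioning model or a (double)
    retrieval model over the current and goal observations."""
    return observation  # stub: pretend the observation is already captioned


def lm_predict_intermediate_steps(start_step: str, goal_step: str,
                                  horizon: int) -> List[str]:
    """Planning module: prompt a pre-trained LM to fill in the steps
    between the start and goal descriptions. The LM call is stubbed."""
    prompt = (
        "You are planning a procedure.\n"
        f"First step: {start_step}\n"
        f"Last step: {goal_step}\n"
        f"List the {horizon} steps in between, in order."
    )
    # A real implementation would send `prompt` to the LM and parse its output.
    _ = prompt
    return [f"(LM-predicted step {i + 1} of {horizon})" for i in range(horizon)]


def plan_procedure(current_obs: str, goal_obs: str, horizon: int) -> List[str]:
    start_step = caption_observation(current_obs)  # ground the start state
    goal_step = caption_observation(goal_obs)      # ground the goal state
    middle = lm_predict_intermediate_steps(start_step, goal_step, horizon)
    return [start_step, *middle, goal_step]


if __name__ == "__main__":
    print(plan_procedure("pan on stove with raw egg", "plated omelette", horizon=2))
```

The point of the modular split is that the second stage operates purely over language, so the pre-trained LM serves as the planner even though the original inputs are visual observations.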