VLN-Trans: Translator for the Vision and Language Navigation Agent
Yue Zhang, Parisa Kordjamshidi
Main Track: Language Grounding to Vision, Robotics, and Beyond (Oral Paper)
Session 2: Language Grounding to Vision, Robotics, and Beyond (Oral)
Conference Room: Pier 4&5
Conference Time: July 10, 14:00-15:30 (EDT) (America/Toronto)
Global Time: July 10, Session 2 (18:00-19:30 UTC)
Keywords:
vision language navigation, cross-modal pretraining, cross-modal content generation, cross-modal application
TLDR:
We design a translator module that converts the original navigation instructions into easy-to-follow sub-instruction representations at each step, focusing on landmarks that are recognizable and distinctive for the agent. Trained with a new synthetic sub-instruction dataset and dedicated tasks, the approach achieves state-of-the-art results on R2R, R4R, and R2R-Last.
Abstract:
Language understanding is essential for the navigation agent to follow instructions. We observe two kinds of issues in the instructions that can make the navigation task challenging:
1. The mentioned landmarks are not recognizable by the navigation agent because the instructor and the modeled agent have different visual abilities. 2. The mentioned landmarks apply to multiple candidate viewpoints and are therefore not distinctive enough for selecting the target.
To address these issues, we design a translator module that converts the original instructions into easy-to-follow sub-instruction representations for the navigation agent at each step. The translator focuses on landmarks that are recognizable and distinctive given the agent's visual abilities and the observed visual environment.
To achieve this goal, we create a new synthetic sub-instruction dataset and design specific tasks to train the translator and the navigation agent.
We evaluate our approach on the Room2Room (R2R), Room4Room (R4R), and Room2Room Last (R2R-Last) datasets and achieve state-of-the-art results on multiple benchmarks.
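Since this page provides only the abstract, the following is a minimal, hypothetical sketch of the idea it describes: a translator that re-encodes the instruction conditioned on the agent's current visual observation to produce a per-step sub-instruction representation. The module name, dimensions, and attention layout are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only (not the authors' code): a translator module that
# cross-attends instruction tokens over the current panorama view features so
# the resulting sub-instruction representation can emphasize landmarks that
# are recognizable in the observation and distinctive among candidates.
import torch
import torch.nn as nn


class InstructionTranslator(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim * 4),
            nn.GELU(),
            nn.Linear(hidden_dim * 4, hidden_dim),
        )
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.norm2 = nn.LayerNorm(hidden_dim)

    def forward(self, instr_tokens: torch.Tensor, view_feats: torch.Tensor) -> torch.Tensor:
        # instr_tokens: (B, L, D) encoded instruction; view_feats: (B, V, D) current views.
        attended, _ = self.cross_attn(query=instr_tokens, key=view_feats, value=view_feats)
        x = self.norm1(instr_tokens + attended)   # ground tokens in the current observation
        return self.norm2(x + self.ffn(x))        # per-step sub-instruction representation


if __name__ == "__main__":
    translator = InstructionTranslator()
    instr = torch.randn(2, 40, 768)   # batch of 2 instructions, 40 tokens each
    views = torch.randn(2, 36, 768)   # 36 discretized panorama views per step
    print(translator(instr, views).shape)  # torch.Size([2, 40, 768])
```

In this sketch the output keeps the instruction's token layout, so a downstream navigation policy could consume it in place of the original instruction encoding; how the actual VLN-Trans model structures this interface is described in the paper itself.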