Measuring Progress in Fine-grained Vision-and-Language Understanding

Emanuele Bugliarello, Laurent Sartran, Aishwarya Agrawal, Lisa Anne Hendricks, Aida Nematzadeh

Main Track: Language Grounding to Vision, Robotics, and Beyond (Poster Paper)

Poster Session 1: Language Grounding to Vision, Robotics, and Beyond (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Poster Session 1 (15:00-16:30 UTC)
Keywords: cross-modal pretraining
TLDR: While pretraining on large-scale image–text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack "fine-grained" understanding, such as the ability to recognise relationships, verbs, and numbers in images. ...
Abstract: While pretraining on large-scale image–text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack "fine-grained" understanding, such as the ability to recognise relationships, verbs, and numbers in images. This has resulted in increased interest in the community in developing either new benchmarks or new models for such capabilities. To better understand and quantify progress in this direction, we investigate four competitive V&L models on four fine-grained benchmarks. Through our analysis, we find that X-VLM (Zeng et al., 2022) consistently outperforms the other baselines, and that modelling innovations can impact performance more than scaling Web data, which sometimes even degrades performance. Through a deeper investigation of X-VLM, we highlight the importance of both novel losses and rich data sources for learning fine-grained skills. Finally, we inspect training dynamics and discover that for some tasks, performance peaks early in training or fluctuates significantly, never converging.