LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding
Yi Tu, Ya Guo, Huan Chen, Jinyang Tang
Main: Information Extraction Main-poster Paper
Session 1: Information Extraction (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Keywords:
document-level extraction
Abstract:
Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years.
Models with transformer-based backbones, pre-trained on large collections of document images, have led to significant performance gains in this field.
The major challenge is how to fuse the different modalities (text, layout, and image) of documents in a unified model with different pre-training tasks.
This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask.
LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning.
LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks.
Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.
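To make the two pre-training objectives more concrete, below is a minimal sketch of how local 1D positions and the two masking steps could be prepared as model inputs. This is not the paper's implementation: the function names, the uniform masking probabilities, and the use of PyTorch are illustrative assumptions, since the abstract does not spell out the exact masking strategies.

```python
import torch


def local_1d_positions(segment_ids: torch.Tensor) -> torch.Tensor:
    """Assign 1D positions that restart at 0 within each text segment
    (e.g., an OCR line), instead of one global index over the whole page."""
    positions = torch.zeros_like(segment_ids)
    for seg in segment_ids.unique():
        mask = segment_ids == seg
        positions[mask] = torch.arange(int(mask.sum()))
    return positions


def mask_for_pretraining(token_ids, bboxes, mask_token_id, masked_box,
                         mlm_prob=0.15, mpm_prob=0.15):
    """Mask tokens (Masked Language Modeling) and, on disjoint positions,
    mask 2D bounding boxes (Masked Position Modeling). Uniform random
    masking is used here as a stand-in for the paper's strategies."""
    token_ids, bboxes = token_ids.clone(), bboxes.clone()
    mlm_mask = torch.rand(token_ids.shape) < mlm_prob
    token_ids[mlm_mask] = mask_token_id           # model must predict these tokens
    mpm_mask = (torch.rand(token_ids.shape) < mpm_prob) & ~mlm_mask
    bboxes[mpm_mask] = masked_box                 # model must predict these 2D positions
    return token_ids, bboxes, mlm_mask, mpm_mask


# Usage sketch: 6 tokens spread over 2 segments, each with a (x0, y0, x1, y1) box.
segment_ids = torch.tensor([0, 0, 0, 1, 1, 1])
token_ids = torch.tensor([101, 2023, 2003, 1037, 3231, 102])
bboxes = torch.randint(0, 1000, (6, 4))
pos_1d = local_1d_positions(segment_ids)          # -> [0, 1, 2, 0, 1, 2]
masked = mask_for_pretraining(token_ids, bboxes,
                              mask_token_id=103,
                              masked_box=torch.zeros(4, dtype=bboxes.dtype))
```

The key design choice illustrated here is that the 1D position carries only within-segment reading order, so the model must rely on the 2D layout (and learn it via the masked-position objective) to relate segments to each other.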