Denoising Bottleneck with Mutual Information Maximization for Video Multimodal Fusion
Shaoxiang Wu, Damai Dai, Ziwei Qin, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui
Main: Speech and Multimodality (Main-Poster Paper)
Poster Session 7: Speech and Multimodality (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Poster Session 7 (15:00-16:30 UTC)
Keywords:
multimodality
TLDR:
Video multimodal fusion aims to integrate multimodal signals in videos, such as visual, audio, and text, to make complementary predictions from the contents of multiple modalities.
However, unlike other image-text multimodal tasks, videos have longer multimodal sequences with more redundancy and noise in both the visual and audio modalities.
Abstract:
Video multimodal fusion aims to integrate multimodal signals in videos, such as visual, audio, and text, to make complementary predictions from the contents of multiple modalities.
However, unlike other image-text multimodal tasks, videos have longer multimodal sequences with more redundancy and noise in both the visual and audio modalities.
Prior denoising methods, such as the forget gate, perform noise filtering at a coarse granularity.
They often suppress redundant and noisy information at the risk of also losing critical information.
Therefore, we propose a denoising bottleneck fusion (DBF) model for fine-grained video multimodal fusion.
On the one hand, we employ a bottleneck mechanism to filter out noise and redundancy with a restrained receptive field (a minimal code sketch of such a mechanism follows the abstract).
On the other hand, we use a mutual information maximization module to regularize the filter-out module so that it preserves key information within different modalities (an InfoNCE-style sketch appears at the end of this page).
Our DBF model achieves significant improvements over current state-of-the-art baselines on multiple benchmarks covering multimodal sentiment analysis and multimodal summarization tasks.
This demonstrates that our model can effectively capture salient features from noisy and redundant video, audio, and text inputs.
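To make the bottleneck idea concrete, here is a minimal PyTorch sketch of a fusion layer in which cross-modal information can flow only through a handful of learnable bottleneck tokens. The class name `BottleneckFusionLayer`, the tensor shapes, and the two-step attention routing are illustrative assumptions, not the authors' exact DBF architecture (see the repository linked below for the real implementation).

```python
# Minimal sketch of a bottleneck fusion layer, assuming PyTorch.
# Names, shapes, and the two-step attention routing are illustrative
# assumptions rather than the authors' exact architecture.
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """Fuse modality sequences through a few learnable bottleneck tokens.

    Cross-modal information can only flow through `n_bottleneck` tokens,
    which restrains the receptive field and squeezes out redundancy/noise.
    """

    def __init__(self, dim: int, n_bottleneck: int = 4, n_heads: int = 4):
        super().__init__()
        self.bottleneck = nn.Parameter(torch.randn(n_bottleneck, dim) * 0.02)
        self.collect = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, modalities: list[torch.Tensor]) -> list[torch.Tensor]:
        # modalities: list of (batch, seq_len_m, dim) tensors, e.g. video/audio/text.
        batch = modalities[0].size(0)
        tokens = self.bottleneck.unsqueeze(0).expand(batch, -1, -1)
        # Step 1: bottleneck tokens attend over the concatenated modalities,
        # compressing all sequences into a small fixed token budget.
        all_tokens = torch.cat(modalities, dim=1)
        tokens, _ = self.collect(tokens, all_tokens, all_tokens)
        # Step 2: each modality reads back only through the bottleneck, so
        # noisy tokens in one modality never directly attend to another.
        return [x + self.broadcast(x, tokens, tokens)[0] for x in modalities]

# Example usage with dummy features:
layer = BottleneckFusionLayer(dim=256)
video = torch.randn(2, 64, 256)   # long, redundant frame features
audio = torch.randn(2, 96, 256)
text = torch.randn(2, 24, 256)
video, audio, text = layer([video, audio, text])
```

Stacking several such layers would let the model iteratively refine what information is allowed through the bottleneck.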
The code for this paper will be publicly available at https://github.com/WSXRHFG/DBF
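For the mutual information maximization module, one common realization is an InfoNCE-style lower bound between the fused representation and each unimodal representation. The sketch below assumes this formulation with pooled (batch, dim) features and in-batch negatives, which may differ from the authors' exact objective.

```python
# Minimal sketch of a mutual-information-maximization regularizer via the
# InfoNCE lower bound; the loss form and pooled features are assumptions.
import torch
import torch.nn.functional as F

def infonce_loss(fused: torch.Tensor, unimodal: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """Minimizing this cross-entropy maximizes a lower bound on
    I(fused; unimodal), pushing the bottleneck to keep key information.

    fused, unimodal: (batch, dim) pooled representations; the other
    samples in the batch serve as negatives.
    """
    f = F.normalize(fused, dim=-1)
    u = F.normalize(unimodal, dim=-1)
    logits = f @ u.t() / temperature              # (batch, batch) similarities
    labels = torch.arange(f.size(0), device=f.device)
    # Matching (fused, unimodal) pairs lie on the diagonal.
    return F.cross_entropy(logits, labels)

# The regularizer would then be summed over modalities, e.g.:
# mi_loss = sum(infonce_loss(fused_repr, m)
#               for m in (video_repr, audio_repr, text_repr))
```

Adding this term to the task loss counteracts the bottleneck's compression pressure, so the filter-out module discards noise and redundancy rather than modality-specific key information.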