Adaptive Fusion Techniques for Effective Multimodal Deep Learning

Date

2020-08-28

Authors

Sahu, Gaurav

Advisor

Vechtomova, Olga

Publisher

University of Waterloo

Abstract

Effective fusion of data from multiple modalities, such as video, speech, and text, is challenging due to the heterogeneous nature of multimodal data. In this work, we propose adaptive fusion techniques that aim to model context from different modalities effectively. Instead of defining a deterministic fusion operation, such as concatenation, for the network, we let the network decide “how” to combine a given set of multimodal features more effectively. We propose two networks: 1) Auto-Fusion, which learns to compress information from different modalities while preserving the context, and 2) GAN-Fusion, which regularizes the learned latent space given context from complementing modalities. A quantitative evaluation on the tasks of multimodal machine translation and emotion recognition suggests that our adaptive networks can model context from other modalities better than existing methods, many of which employ massive transformer-based networks.
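
A minimal PyTorch sketch of the Auto-Fusion idea described above, assuming toy feature sizes and layer shapes (the thesis's exact architecture may differ): concatenated multimodal features are compressed into a smaller latent vector, and a reconstruction loss encourages that latent code to preserve the original multimodal context.

```python
import torch
import torch.nn as nn

class AutoFusion(nn.Module):
    """Compress concatenated multimodal features while preserving context."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        # Compress the concatenated features into a fused latent code.
        self.encode = nn.Sequential(nn.Linear(input_dim, latent_dim), nn.Tanh())
        # Reconstruct the concatenation from the latent code; a low
        # reconstruction error means little multimodal context was lost.
        self.decode = nn.Linear(latent_dim, input_dim)
        self.criterion = nn.MSELoss()

    def forward(self, text_feat, speech_feat, video_feat):
        concat = torch.cat([text_feat, speech_feat, video_feat], dim=-1)
        z = self.encode(concat)  # the fused representation
        recon_loss = self.criterion(self.decode(z), concat)
        return z, recon_loss

# Hypothetical feature sizes: 300-d text, 128-d speech, 256-d video.
fusion = AutoFusion(input_dim=300 + 128 + 256, latent_dim=256)
z, loss = fusion(torch.randn(4, 300), torch.randn(4, 128), torch.randn(4, 256))
```

The reconstruction loss is added to the downstream task loss, so the network itself learns “how” to combine the modalities rather than relying on a fixed operation such as plain concatenation. GAN-Fusion would swap this reconstruction objective for an adversarial one, using a discriminator to regularize the fused latent space given context from the complementing modalities; that variant is omitted here for brevity.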

Keywords

multimodal deep learning, multimodal fusion, generative adversarial networks, multimodal machine translation, speech emotion recognition
