Adaptive Fusion Techniques for Effective Multimodal Deep Learning


Date

2020-08-28

Authors

Sahu, Gaurav

Publisher

University of Waterloo

Abstract

Effective fusion of data from multiple modalities, such as video, speech, and text, is a challenging task due to the heterogeneous nature of multimodal data. In this work, we propose fusion techniques that model context from different modalities effectively. Instead of prescribing a deterministic fusion operation, such as concatenation, we let the network decide "how" to combine given multimodal features most effectively. We propose two networks: 1) the Auto-Fusion network, which compresses information from different modalities while preserving the context, and 2) GAN-Fusion, which regularizes the learned latent space given context from complementary modalities. A quantitative evaluation on the tasks of multimodal machine translation and emotion recognition suggests that our adaptive networks model context from other modalities better than existing methods, many of which employ massive transformer-based networks.
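The Auto-Fusion idea described above can be illustrated with a minimal sketch: concatenate per-modality feature vectors, compress the concatenation to a shared latent code, and reconstruct the input, so that minimizing reconstruction loss forces the latent code to preserve multimodal context. All dimensions, weights, and function names below are hypothetical placeholders, not the thesis's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature sizes (illustrative only).
d_text, d_speech, d_latent = 8, 6, 4

# Random placeholder weights; in training these would be learned by
# minimizing the reconstruction loss computed below.
W_enc = rng.standard_normal((d_text + d_speech, d_latent)) * 0.1
W_dec = rng.standard_normal((d_latent, d_text + d_speech)) * 0.1

def auto_fusion(text_feat, speech_feat):
    """Compress concatenated modality features and score reconstruction."""
    z = np.concatenate([text_feat, speech_feat])  # joint multimodal vector
    latent = np.tanh(z @ W_enc)                   # compressed fused code
    recon = latent @ W_dec                        # attempt to rebuild input
    loss = np.mean((recon - z) ** 2)              # reconstruction (MSE) loss
    return latent, loss

latent, loss = auto_fusion(rng.standard_normal(d_text),
                           rng.standard_normal(d_speech))
print(latent.shape, loss >= 0.0)
```

Because the fused code must reconstruct every modality's features, the network itself decides how to weight each modality rather than relying on a fixed fusion rule such as plain concatenation.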

Keywords

multimodal deep learning, multimodal fusion, generative adversarial networks, multimodal machine translation, speech emotion recognition
