Adaptive Fusion Techniques for Effective Multimodal Deep Learning
Master Thesis

Author: Sahu, Gaurav
Dates: 2020-08-28; 2020-08-18
URI: http://hdl.handle.net/10012/16194
Language: en
Keywords: multimodal deep learning; multimodal fusion; generative adversarial networks; multimodal machine translation; speech emotion recognition

Abstract: Effective fusion of data from multiple modalities, such as video, speech, and text, is a challenging task due to the heterogeneous nature of multimodal data. In this work, we propose fusion techniques that aim to model context from different modalities effectively. Instead of defining a deterministic fusion operation, such as concatenation, for the network, we let the network decide “how” to combine given multimodal features more effectively. We propose two networks: 1) Auto-Fusion, which aims to compress information from different modalities while preserving the context, and 2) GAN-Fusion, which regularizes the learned latent space given context from complementary modalities. A quantitative evaluation on the tasks of multimodal machine translation and emotion recognition suggests that our adaptive networks can model context from other modalities better than existing methods, many of which employ massive transformer-based networks.
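To make the Auto-Fusion idea concrete, below is a minimal PyTorch sketch of a fusion layer that compresses concatenated modality features into a smaller fused vector and adds an auxiliary reconstruction objective so the compression is encouraged to retain multimodal context. The module name, layer sizes, activation, and the MSE reconstruction loss are illustrative assumptions, not the thesis' exact configuration.

```python
import torch
import torch.nn as nn


class AutoFusion(nn.Module):
    """Sketch of an Auto-Fusion-style layer (illustrative, not the thesis' exact design).

    Concatenated modality features are compressed into a fused vector; a decoder
    tries to reconstruct the original concatenation, and the reconstruction error
    is returned as an auxiliary loss to be added to the main task loss.
    """

    def __init__(self, concat_dim: int, fused_dim: int):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(concat_dim, fused_dim), nn.Tanh())
        self.decode = nn.Linear(fused_dim, concat_dim)
        self.criterion = nn.MSELoss()

    def forward(self, text_feat, speech_feat, video_feat):
        concat = torch.cat([text_feat, speech_feat, video_feat], dim=-1)
        fused = self.encode(concat)                 # compressed multimodal representation
        recon = self.decode(fused)                  # attempt to recover the concatenated input
        recon_loss = self.criterion(recon, concat)  # auxiliary loss encouraging context preservation
        return fused, recon_loss


if __name__ == "__main__":
    # Toy usage with random features; feature dimensions are hypothetical.
    text = torch.randn(8, 300)
    speech = torch.randn(8, 128)
    video = torch.randn(8, 256)
    layer = AutoFusion(concat_dim=300 + 128 + 256, fused_dim=256)
    fused, aux_loss = layer(text, speech, video)
    print(fused.shape, aux_loss.item())
```

The fused vector would then feed the downstream translation or emotion-recognition model; GAN-Fusion, by contrast, replaces the reconstruction objective with an adversarial one and is not sketched here.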