Entropy-based aggregate posterior alignment techniques for deterministic autoencoders and implications for adversarial examples
Date
2020-08-27
Authors
Ghose, Amur
Advisor
Poupart, Pascal
Publisher
University of Waterloo
Abstract
We present results obtained in the context of generative neural models, specifically autoencoders, using standard results from coding theory. The methods are fairly elementary in principle, yet, combined with the ubiquitous practice of batch normalization in these models, they yield excellent results in comparison with rival autoencoding architectures. In particular, we resolve a split that arises when comparing two types of autoencoding models: variational autoencoders (VAEs) versus regularized deterministic autoencoders, often simply called RAEs (Regularized Autoencoders). The latter offer superior performance but lose guarantees on their latent space. Moreover, RAEs achieve their performance through a wide variety of regularizers, ranging from L2 regularization to spectral normalization. We show that a simple entropy-like term suffices to kill two birds with one stone: it offers good performance while keeping a well-behaved latent space.
The primary thrust of the thesis is a paper presented at UAI 2020 on these matters, titled "Batch norm with entropic regularization turns deterministic autoencoders into generative models". This was joint work with Abdullah Rashwan, who was at the time a postdoctoral associate with us at Waterloo and is now at Google, and my supervisor, Pascal Poupart. It constitutes Chapter 2. Extensions relating to batch norm's interplay with adversarial examples are covered in Chapter 3. Chapter 1 presents an overall overview and serves as the introduction.
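To make the core idea concrete, the sketch below is a hedged illustration, not the thesis' exact implementation: a deterministic autoencoder in PyTorch whose latent code passes through batch normalization, trained with a reconstruction loss plus an entropy-like term on each batch of codes. The architecture, hyperparameters, and the nearest-neighbour entropy proxy are all assumptions chosen for brevity. Because batch normalization fixes the per-dimension mean and variance of the codes, encouraging higher entropy pushes the aggregate code distribution toward a standard Gaussian, so new data can be generated by decoding z ~ N(0, I).

# Minimal sketch (assumed setup, not the thesis' exact implementation):
# batch-normalized latent codes + an entropy-like regularizer.
import torch
import torch.nn as nn

class EntropicBNAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
            nn.BatchNorm1d(latent_dim),  # fixes per-dimension mean/variance of the codes
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def entropy_estimate(z, eps=1e-8):
    # Nearest-neighbour entropy proxy for a batch of codes (illustrative stand-in):
    # larger average log nearest-neighbour distance corresponds to higher entropy.
    dists = torch.cdist(z, z)
    dists = dists + torch.eye(z.size(0), device=z.device) * 1e12  # mask self-distances
    nn_dist, _ = dists.min(dim=1)
    return torch.log(nn_dist + eps).mean()

def training_step(model, x, optimizer, reg_weight=0.05):
    # Reconstruction loss minus a weighted entropy term (i.e. entropy is maximized).
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x) - reg_weight * entropy_estimate(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Generation after training: sample codes from N(0, I) and decode them, e.g.
#   model.eval()
#   samples = model.decoder(torch.randn(64, 16))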
Keywords
autoencoders, machine learning, generative models, adversarial examples, entropy