Show simple item record

dc.contributor.author: Ghose, Amur
dc.date.accessioned: 2020-08-27 15:46:14 (GMT)
dc.date.available: 2020-08-27 15:46:14 (GMT)
dc.date.issued: 2020-08-27
dc.date.submitted: 2020-08-20
dc.identifier.uri: http://hdl.handle.net/10012/16169
dc.description.abstract: We present results obtained in the context of generative neural models, specifically autoencoders, using standard results from coding theory. The methods are elementary in principle, yet, combined with the ubiquitous practice of batch normalization in these models, they yield excellent results when compared with rival autoencoding architectures. In particular, we resolve a split that arises when comparing two types of autoencoding models: variational autoencoders (VAEs) versus regularized deterministic autoencoders, often simply called RAEs (regularized autoencoders). The latter offer superior performance but lose guarantees on their latent space, and they depend on a wide variety of regularizers, ranging from L2 regularization to spectral normalization, to achieve it. We show, on the other hand, that a simple entropy-like term suffices to kill two birds with one stone: it offers good performance while keeping a well-behaved latent space. The primary thrust of the thesis is a paper presented at UAI 2020 on these matters, titled “Batch norm with entropic regularization turns deterministic autoencoders into generative models”. This was joint work with Abdullah Rashwan, who was at the time a postdoctoral associate with us at Waterloo and is now at Google, and my supervisor, Pascal Poupart; it constitutes chapter 2. Extensions concerning batch norm’s interplay with adversarial examples appear in chapter 3. Chapter 1 presents an overall overview and serves as the introduction. (A minimal illustrative sketch of the regularizer follows this record.)
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: autoencoders
dc.subject: machine learning
dc.subject: generative models
dc.subject: adversarial examples
dc.subject: entropy
dc.title: Entropy-based aggregate posterior alignment techniques for deterministic autoencoders and implications for adversarial examples
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Mathematics
uws.contributor.advisor: Poupart, Pascal
uws.contributor.affiliation1: Faculty of Mathematics
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
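
Note: the sketch referenced in the abstract above. This is a minimal, hypothetical PyTorch illustration of the general idea described there: a deterministic autoencoder whose encoder ends in batch normalization, trained with a reconstruction loss minus an entropy bonus on each batch of latent codes. The names (EntropicAE, batch_entropy, beta) and the nearest-neighbor entropy proxy are assumptions made for illustration, not the construction used in the thesis or the UAI 2020 paper.

    import torch
    import torch.nn as nn

    class EntropicAE(nn.Module):
        """Hypothetical deterministic autoencoder; the final BatchNorm1d pins
        the per-dimension mean and variance of the codes across each batch."""
        def __init__(self, dim_x=784, dim_z=16):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Linear(dim_x, 256), nn.ReLU(),
                nn.Linear(256, dim_z),
                nn.BatchNorm1d(dim_z),
            )
            self.dec = nn.Sequential(
                nn.Linear(dim_z, 256), nn.ReLU(),
                nn.Linear(256, dim_x),
            )

        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), z

    def batch_entropy(z, eps=1e-9):
        # Nearest-neighbor entropy proxy (Kozachenko-Leonenko flavored; an
        # assumption here): a larger average log-distance to the nearest
        # neighbor indicates more spread-out, higher-entropy codes.
        d = torch.cdist(z, z)
        d = d + torch.eye(z.shape[0]) * 1e6  # mask self-distances without in-place edits
        nn_dist = d.min(dim=1).values
        return z.shape[1] * torch.log(nn_dist + eps).mean()

    model = EntropicAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    beta = 0.1                        # assumed weight of the entropy term
    x = torch.rand(128, 784)          # stand-in data batch
    x_hat, z = model(x)
    # Reconstruction loss minus an entropy bonus: maximizing entropy while
    # batch norm fixes the mean and variance of each latent dimension pushes
    # the aggregate latent distribution toward a Gaussian, since the Gaussian
    # is the maximum-entropy distribution under those constraints.
    loss = nn.functional.mse_loss(x_hat, x) - beta * batch_entropy(z)
    opt.zero_grad()
    loss.backward()
    opt.step()

The design intuition, per the abstract: batch norm alone constrains the first two moments of the latent codes, and the entropy term fills in the rest, so the trained encoder's aggregate posterior stays close to a fixed prior one can sample from.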

