Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness

dc.contributor.author: Jeddi, Ahmadreza
dc.date.accessioned: 2020-08-19T15:16:10Z
dc.date.available: 2020-08-19T15:16:10Z
dc.date.issued: 2020-08-19
dc.date.submitted: 2020-08-06
dc.description.abstract: Deep neural networks have been achieving state-of-the-art performance across a wide variety of applications, and owing to this performance they are increasingly being deployed in safety- and security-critical systems. However, in recent years deep neural networks have been shown to be very vulnerable to optimally crafted input samples called adversarial examples. Although these adversarial perturbations are imperceptible to humans, especially in the domain of computer vision, they have been very successful in fooling strong deep models. This vulnerability limits the widespread deployment of deep models in safety-critical applications, and as a result adversarial attack and defense algorithms have drawn great attention in the literature. Many defense algorithms have been proposed to counter the threat of adversarial attacks, many of them based on adversarial training (adding perturbations during the training stage). In particular, there has been recent interest in improving adversarial robustness by introducing perturbations during the training process; however, such methods rely on fixed, pre-defined perturbations and require significant hyper-parameter tuning, which makes them very difficult to apply in a general fashion. In this work, we introduce Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks. More specifically, we introduce novel perturbation-injection modules that are incorporated at each layer to perturb the feature space and increase uncertainty in the network. This feature perturbation is performed at both the training and the inference stages. Furthermore, inspired by the Expectation-Maximization approach, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed Learn2Perturb method yields deep neural networks that are 4–7 percent more robust against L∞ FGSM and PGD adversarial attacks, and that significantly outperform the state of the art against the L2 C&W attack and a wide range of well-known black-box attacks.
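To make the mechanism described in the abstract concrete, the following is a minimal, illustrative Python/PyTorch sketch of its two ingredients: a per-layer perturbation-injection module and the EM-inspired alternating update. It is not the thesis's reference implementation; the names PerturbInjection, alpha, net_opt, noise_opt, and alternating_step, as well as the initialization and learning rates, are assumptions made for illustration.

import torch
import torch.nn as nn

class PerturbInjection(nn.Module):
    # Hypothetical perturbation-injection module: adds learnable,
    # per-channel Gaussian noise to a layer's feature maps.
    def __init__(self, channels):
        super().__init__()
        # One trainable noise scale per channel (initial value is an assumption).
        self.alpha = nn.Parameter(torch.full((1, channels, 1, 1), 0.1))

    def forward(self, x):
        # No self.training gate: the abstract states the perturbation
        # is applied at both the training and the inference stages.
        return x + self.alpha * torch.randn_like(x)

def alternating_step(model, x, y, loss_fn, net_opt, noise_opt, update_noise):
    # One EM-inspired alternating update: gradients are computed for all
    # parameters, but only one parameter group (network weights or noise
    # scales) is stepped, so the two are trained consecutively.
    model.zero_grad(set_to_none=True)
    loss = loss_fn(model(x), y)
    loss.backward()
    (noise_opt if update_noise else net_opt).step()
    return loss.item()

# Usage sketch: give the noise scales and the remaining weights
# separate optimizers, then alternate between them across steps.
# noise_params = [p for n, p in model.named_parameters() if n.endswith("alpha")]
# net_params   = [p for n, p in model.named_parameters() if not n.endswith("alpha")]
# net_opt   = torch.optim.SGD(net_params, lr=0.1, momentum=0.9)
# noise_opt = torch.optim.SGD(noise_params, lr=0.01)

One design note on this sketch: because zero_grad clears all gradients at the start of every step, the group that is not updated simply discards its gradients, which keeps the two alternating updates cleanly decoupled.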
dc.identifier.uri: http://hdl.handle.net/10012/16132
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.relation.uri: CIFAR-10
dc.relation.uri: CIFAR-100
dc.subject: computer vision
dc.subject: machine learning
dc.subject: adversarial robustness
dc.subject: trustable AI
dc.title: Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws.contributor.advisor: Wong, Alexander
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text

Files

Original bundle
Name: Jeddi_Ahmadreza.pdf
Size: 1.46 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission