Show simple item record

dc.contributor.author  Li, Xinda
dc.description.abstract  Federated Learning (FL) allows multiple participants to collaboratively train a deep learning model without sharing their private training data. However, due to its distributed nature, FL is vulnerable to various poisoning attacks: an adversary can submit malicious model updates that aim to degrade the joint model's utility. In this thesis, we formulate the adversary's goal as an optimization problem and present an effective model poisoning attack using projected gradient descent. Our empirical results show that our attack has a larger impact on the global model's accuracy than previous attacks. Motivated by this, we design a robust defense algorithm that mitigates existing poisoning attacks. Our defense leverages constrained k-means clustering and uses a small validation dataset on the server to select optimal updates in each FL round. We conduct experiments on three non-iid image classification datasets and demonstrate the robustness of our defense algorithm under various FL settings.  en
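The abstract formulates the attack as an optimization solved with projected gradient descent. A minimal NumPy sketch of that idea, assuming an L2-ball constraint around a benign update to keep the malicious contribution inconspicuous (the radius, step size, and all names here are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

def project_l2(w, center, radius):
    # Project w onto the L2 ball of the given radius around center.
    diff = w - center
    norm = np.linalg.norm(diff)
    if norm > radius:
        diff *= radius / norm
    return center + diff

def poisoned_update(benign_update, adversarial_grad, radius=1.0, lr=0.1, steps=10):
    # PGD-style model poisoning (sketch): repeatedly step along the
    # adversary's gradient (degrading the joint model's utility), then
    # project back near the benign update so the poisoned update stays
    # within the assumed detection constraint.
    w = benign_update.astype(float).copy()
    for _ in range(steps):
        w = w + lr * adversarial_grad   # ascend the adversary's objective
        w = project_l2(w, benign_update, radius)
    return w
```

The projection step is what distinguishes this from naive poisoning: the resulting update is the most harmful one the adversary can reach without leaving the constraint set.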
dc.publisher  University of Waterloo  en
dc.title  Improved Model Poisoning Attacks and Defenses in Federated Learning with Clustering  en
dc.type  Master Thesis  en
dc.pending  false
uws.contributor.affiliation2  David R. Cheriton School of Computer Science  en
uws-etd.degree  Master of Mathematics  en
uws.contributor.advisor  Kerschbaum, Florian
uws.contributor.affiliation1  Faculty of Mathematics  en
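The defense summarized in the abstract clusters client updates and uses a small server-side validation set to decide which cluster to aggregate. A rough NumPy sketch of that selection loop, assuming k = 2 and substituting plain k-means for the thesis's size-constrained variant (all names and the scoring function are illustrative assumptions):

```python
import numpy as np

def kmeans(points, k=2, iters=20):
    # Minimal k-means with deterministic farthest-point initialization.
    # The thesis uses a size-constrained variant; this plain version is
    # only a sketch of the clustering step.
    centers = [points[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)].astype(float))
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def select_updates(updates, validation_loss, k=2):
    # Cluster the round's client updates, then keep the cluster whose
    # mean update scores best (lowest loss) on the server's validation set.
    labels = kmeans(updates, k)
    best, best_loss = None, np.inf
    for j in range(k):
        members = updates[labels == j]
        if len(members) == 0:
            continue
        candidate = members.mean(axis=0)
        loss = validation_loss(candidate)
        if loss < best_loss:
            best, best_loss = candidate, loss
    return best
```

The intuition is that poisoned updates form their own cluster, and validating each cluster's representative lets the server discard that cluster regardless of how many clients the adversary controls within it.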




All items in UWSpace are protected by copyright, with all rights reserved.
