
dc.contributor.author: Meyer, Robbie
dc.date.accessioned: 2023-09-01 13:45:06 (GMT)
dc.date.available: 2023-09-01 13:45:06 (GMT)
dc.date.issued: 2023-09-01
dc.date.submitted: 2023-08-26
dc.identifier.uri: http://hdl.handle.net/10012/19828
dc.description.abstract: Model pruning is a simple and effective method for compressing neural networks. By identifying and removing the least influential parameters of a model, pruning can transform networks into smaller, faster networks with minimal impact on overall performance. However, recent research has shown that while overall performance may not change significantly, model pruning can exacerbate existing fairness issues. Subgroups that are underrepresented or complex may experience a greater-than-average impact from pruning. Machine learning systems that use compressed neural networks may consequently exhibit significant biases that limit their effectiveness in many real-world situations. To address this issue, we analyze the effect of pruning on the fairness of a variety of image classification models and propose a novel method for improving the fairness of existing pruning methods. By analyzing the fairness impact of pruning in a variety of situations, we further our understanding of how the theoretical fairness impact of pruning could manifest in real-world conditions. By developing a method for improving the fairness of pruning methods, we demonstrate that the fairness impact of pruning can be influenced, and we enable machine learning practitioners to improve the post-pruning fairness of their models. Our analysis revealed that the fairness impact of pruning can be observed in many, but not all, image classification systems that use deep learning and pruning. The dataset used to train each model appears to influence how pruning affects the fairness of each model: models trained and pruned on the CelebA dataset saw a negative impact on fairness, while models trained and pruned on the Fitzpatrick17k dataset did not. Manipulating the CelebA and CIFAR-10 datasets to remove or introduce potential sources of bias does affect the fairness impact of pruning. The effect does not appear to be limited to a single pruning method, but different pruning methods do not experience it equally. The fairness impact of data-driven pruning can be improved through a simple tweak to the cross-entropy loss. The performance-weighted loss function assigns weights to samples based on the performance of the unpruned model and uses the corrected output of the unpruned model as classification targets. These small changes improve the fairness of existing pruning methods with some models. The performance-weighted loss function does not appear to be universally beneficial, but it is a useful tool for machine learning practitioners who seek to compress models in fairness-sensitive contexts.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: convolutional neural networks
dc.subject: model compression
dc.subject: neural network pruning
dc.subject: fairness
dc.title: Fair Compression of Machine Learning Vision Systems
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Systems Design Engineering
uws-etd.degree.discipline: Systems Design Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws-etd.embargo.terms: 0
uws.contributor.advisor: Wong, Alexander
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
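
The abstract above describes the performance-weighted loss only at a high level: weight each training sample by how the unpruned model performed on it, and train the pruned model against the unpruned model's corrected output rather than one-hot labels. A minimal PyTorch sketch of that idea follows; the function name performance_weighted_loss, the 2.0 up-weighting factor for misclassified samples, and the one-hot "correction" of wrong reference predictions are illustrative assumptions, not the thesis's exact formulation.

import torch
import torch.nn.functional as F

def performance_weighted_loss(pruned_logits, unpruned_logits, targets, wrong_weight=2.0):
    """Soft-target cross-entropy weighted by the unpruned model's performance.

    A sketch of the idea described in the abstract; the weighting scheme and
    the correction rule below are assumptions, not the thesis's exact method.
    """
    with torch.no_grad():
        ref_probs = F.softmax(unpruned_logits, dim=1)      # unpruned model's output
        correct = ref_probs.argmax(dim=1).eq(targets)      # which samples it got right
        # Assumed scheme: up-weight samples the unpruned model misclassified.
        weights = torch.where(
            correct,
            torch.ones_like(targets, dtype=ref_probs.dtype),
            torch.full_like(targets, wrong_weight, dtype=ref_probs.dtype),
        )
        # "Corrected output": keep the unpruned model's distribution where it
        # was right, replace it with the one-hot ground truth where it was wrong.
        soft_targets = ref_probs.clone()
        soft_targets[~correct] = F.one_hot(
            targets[~correct], num_classes=ref_probs.size(1)
        ).to(ref_probs.dtype)
    log_probs = F.log_softmax(pruned_logits, dim=1)
    per_sample = -(soft_targets * log_probs).sum(dim=1)    # soft-target cross-entropy
    return (weights * per_sample).mean()

In use, both models see the same batch, e.g. loss = performance_weighted_loss(pruned_model(x), unpruned_model(x), y), with the unpruned model held fixed in eval mode so it acts purely as a frozen reference.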





