 

Fair Compression of Machine Learning Vision Systems

Date

2023-09-01

Authors

Meyer, Robbie

Publisher

University of Waterloo

Abstract

Model pruning is a simple and effective method for compressing neural networks. By identifying and removing the least influential parameters of a model, pruning can transform networks into smaller, faster networks with minimal impact on overall performance. However, recent research has shown that while overall performance may not change significantly, model pruning can exacerbate existing fairness issues: subgroups that are underrepresented or complex may experience a greater-than-average impact from pruning. Machine learning systems that use compressed neural networks may consequently exhibit significant biases that limit their effectiveness in many real-world situations. To address this issue, we analyze the effect of pruning on the fairness of a variety of image classification models and propose a novel method for improving the fairness of existing pruning methods. By analyzing the fairness impact of pruning in a variety of situations, we further our understanding of how the theoretical fairness impact of pruning could manifest in real-world conditions. By developing a method for improving the fairness of pruning methods, we demonstrate that the fairness impact of pruning can be influenced, enabling machine learning practitioners to improve the post-pruning fairness of their models.

Our analysis revealed that the fairness impact of pruning can be observed in many, but not all, image classification systems that use deep learning and pruning. The dataset used to train each model appears to influence how pruning affects the model's fairness: models trained and pruned on the CelebA dataset saw a negative impact on fairness, while models trained and pruned on the Fitzpatrick17k dataset did not. Manipulating the CelebA and CIFAR-10 datasets to remove or introduce potential sources of bias does affect the fairness impact of pruning. The effect does not appear to be limited to a single pruning method, but different pruning methods do not experience it equally.

The fairness impact of data-driven pruning can be improved through a simple tweak to the cross-entropy loss. The performance-weighted loss function assigns weights to samples based on the performance of the unpruned model and uses the corrected output of the unpruned model as classification targets. These small changes improve the fairness of existing pruning methods with some models. The performance-weighted loss function does not appear to be universally beneficial, but it is a useful tool for machine learning practitioners who seek to compress models in fairness-sensitive contexts.
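As a concrete illustration of the kind of compression the abstract describes, the following is a minimal sketch of unstructured L1 magnitude pruning using PyTorch's torch.nn.utils.prune utilities. It is not the specific pruning method studied in the thesis; the toy architecture and the 50% sparsity level are arbitrary choices made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy image classifier; the thesis studies larger convolutional models.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# Zero out the 50% of weights with the smallest L1 magnitude in each
# prunable layer, i.e. remove the "least influential" parameters.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# The pruned model is used like any other.
out = model(torch.randn(1, 3, 32, 32))
```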
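The performance-weighted loss is only characterized at a high level in the abstract. The sketch below shows one plausible reading in PyTorch: per-sample weights derived from the unpruned (teacher) model's loss on each sample, and the teacher's softmax outputs, corrected to the one-hot label wherever the teacher misclassifies, used as soft classification targets. The function name, the weight normalization, and the correction rule are all illustrative assumptions, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def performance_weighted_loss(student_logits, teacher_logits, labels):
    """Hypothetical sketch of a performance-weighted cross-entropy loss.

    student_logits: logits of the pruned model being fine-tuned.
    teacher_logits: logits of the original, unpruned model.
    labels: ground-truth class indices.
    """
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits, dim=1)
        # Weight samples that the unpruned model handled poorly more heavily.
        teacher_loss = F.cross_entropy(teacher_logits, labels, reduction="none")
        weights = teacher_loss / teacher_loss.sum()
        # "Correct" the teacher outputs: where the teacher misclassifies,
        # fall back to the one-hot ground-truth label as the target.
        one_hot = F.one_hot(labels, teacher_probs.size(1)).float()
        wrong = teacher_probs.argmax(dim=1) != labels
        targets = torch.where(wrong.unsqueeze(1), one_hot, teacher_probs)
    # Soft cross-entropy between student predictions and corrected targets.
    log_probs = F.log_softmax(student_logits, dim=1)
    per_sample = -(targets * log_probs).sum(dim=1)
    return (weights * per_sample).sum()
```

During fine-tuning of a pruned model, a loss of this shape would replace the standard cross-entropy between the pruned model's logits and the hard labels.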

Keywords

convolutional neural networks, model compression, neural network pruning, fairness
