Does AI Remember? Neural Networks and the Right to be Forgotten

Date

2020-04-14

Authors

Graves, Laura
Nagisetty, Vineel
Ganesh, Vijay

Abstract

The Right to be Forgotten is part of the recently enacted General Data Protection Regulation (GDPR), which applies to any data holder with data on European Union residents. It gives EU residents the right to request deletion of their personal data, including training records used to train any machine learning model the data holder owns. Deep neural network models in particular are vulnerable to model inversion attacks, which extract class information from a trained model. If a malicious party can mount such an attack and learn private information that was meant to be forgotten, then the model owner has not adequately protected their users' rights and may not be compliant with the GDPR. We present a general threat model to show that simply removing training data is insufficient to protect users. We further propose and evaluate three defense mechanisms (termed neuron removal, scattered unlearning, and class unlearning) that can help model owners protect themselves against such attacks while remaining compliant with regulations. We show that these defenses enable deep neural networks to forget sensitive data from trained models while maintaining model efficacy. A copy of our code, which can be used to replicate our results, can be found at http://tiny.cc/forgetfulnet.
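To make the threat concrete, here is a minimal sketch of a model inversion attack in PyTorch. It is illustrative only: the model (SimpleNet), the attack function (invert_class), and all hyperparameters are assumptions for exposition, not the authors' implementation (which is available at http://tiny.cc/forgetfulnet). The attack performs gradient ascent on an input to maximize a target class's logit, recovering a class-representative reconstruction from the trained model alone.

```python
# Hedged sketch of a model inversion attack; names and settings are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    """Stand-in classifier; any trained model exposing class logits works."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, x):
        return self.net(x)

def invert_class(model, target_class, in_dim=784, steps=500, lr=0.1):
    """Gradient-ascent inversion: optimize an input to maximize the
    target class's probability, recovering what the model encodes
    about that class."""
    model.eval()
    x = torch.zeros(1, in_dim, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Minimizing cross-entropy of the target class pushes x toward
        # the model's internal representation of that class.
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep the reconstruction in pixel range
    return x.detach()

if __name__ == "__main__":
    model = SimpleNet()  # in practice, a model trained on sensitive data
    reconstruction = invert_class(model, target_class=3)
    print(reconstruction.shape)  # torch.Size([1, 784])
```

A defense along the lines the abstract describes would modify the trained model so that inversion no longer recovers the sensitive class. The sketch below illustrates one plausible reading of such unlearning: relabel the sensitive records with randomly chosen incorrect labels and fine-tune, overwriting what the model encodes about them. This is an assumption about the general idea, not the authors' specific neuron removal, scattered unlearning, or class unlearning procedures.

```python
def unlearn_by_relabeling(model, data, labels, sensitive_class,
                          n_classes=10, epochs=5, lr=1e-3):
    """Hedged sketch: fine-tune on the sensitive class's records with
    random incorrect labels so the model's class information degrades.
    Illustrative only; not the paper's exact defense mechanisms."""
    mask = labels == sensitive_class
    x_sens = data[mask]
    # Shift by a random nonzero offset so every new label is incorrect.
    offset = torch.randint(1, n_classes, (x_sens.shape[0],))
    y_wrong = (sensitive_class + offset) % n_classes
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_sens), y_wrong)
        loss.backward()
        opt.step()
    return model
```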

Keywords

Machine Learning, AI Security, Standards Compliance
