Does AI Remember? Neural Networks and the Right to be Forgotten
The Right to be Forgotten is part of the recently enacted General Data Protection Regulation (GDPR), which applies to any data holder with data on European Union residents. It gives EU residents the right to request deletion of their data, including training records used to train any machine learning model the data holder owns. Deep neural network models in particular are vulnerable to model inversion attacks, which extract class information from a trained model. If a malicious party can mount such an attack and learn private information that was meant to be forgotten, then the model owner has failed to protect their users' rights and may not be compliant with the GDPR. We present a general threat model to show that simply removing training data is insufficient to protect users. We further propose and evaluate three defense mechanisms (termed neuron removal, scattered unlearning, and class unlearning) that can help model owners protect themselves against such attacks while remaining compliant with regulations. We show that these defense mechanisms enable deep neural networks to forget sensitive data while maintaining model efficacy. A copy of our code, which can be used to replicate our results, is available at http://tiny.cc/forgetfulnet.
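To illustrate the flavor of one of the named defenses, the following is a minimal sketch of a neuron-removal-style step on a toy fully connected layer. It assumes the intuition that hidden neurons most strongly activated by the sensitive class encode information about it, and simply zeroes those neurons out; the layer shapes, the ReLU activation, and the function `neuron_removal` are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained hidden layer: 8 neurons over 4 input features,
# plus a batch of samples from the class that must be forgotten.
W = rng.normal(size=(8, 4))          # hidden-layer weights (hypothetical)
forget_x = rng.normal(size=(16, 4))  # samples of the class to forget

def neuron_removal(W, forget_x, k=2):
    """Zero out the k hidden neurons most activated by the forgotten class.

    Sketch only: neurons with the highest mean ReLU activation on the
    sensitive samples are assumed to carry that class's information.
    """
    acts = np.maximum(forget_x @ W.T, 0.0)  # ReLU activations, shape (16, 8)
    mean_act = acts.mean(axis=0)            # per-neuron mean activation
    top = np.argsort(mean_act)[-k:]         # indices of the k most responsive
    W_clean = W.copy()
    W_clean[top, :] = 0.0                   # remove their contribution
    return W_clean, top

W_clean, removed = neuron_removal(W, forget_x, k=2)
```

After this step, the removed neurons output zero for every input, so a subsequent model inversion attack can no longer extract class information through them; the remaining neurons are untouched, which is what preserves overall model efficacy.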
Cite this version of the work
Laura Graves, Vineel Nagisetty, Vijay Ganesh (2020). Does AI Remember? Neural Networks and the Right to be Forgotten. UWSpace. http://hdl.handle.net/10012/15754