Differentially Private Continual Learning

02/18/2019
by Sebastian Farquhar et al.

Catastrophic forgetting can be a significant problem for institutions that must delete historical data for privacy reasons. For example, hospitals might not be able to retain patient data permanently, yet neural networks trained on recent data alone will tend to forget lessons learned on old data. We present a differentially private continual learning framework based on variational inference. We estimate the likelihood of past data under the current model using differentially private generative models trained on the old datasets.
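The core idea can be illustrated with a toy sketch. This is our own minimal illustration, not the paper's implementation: the paper applies variational inference to neural networks, whereas here a one-parameter Gaussian stands in for the generative model, and the `dp_fit_gaussian` helper (a hypothetical name) uses DP-SGD-style clipped, noised gradients with the privacy accounting omitted. Raw old data is deleted after the generative model is fit, and synthetic samples stand in for the past-data likelihood term during the continual step.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_fit_gaussian(data, clip=1.0, noise_mult=1.0, steps=200, lr=0.1):
    """Fit the mean of a Gaussian generative model with DP-SGD-style
    per-example gradient clipping plus Gaussian noise.
    Illustrative only: epsilon/delta accounting is omitted."""
    mu = 0.0
    n = len(data)
    for _ in range(steps):
        # Per-example gradient of the squared loss 0.5*(mu - x)^2, clipped.
        grads = np.clip(mu - data, -clip, clip)
        noisy_grad = grads.sum() / n + rng.normal(0.0, noise_mult * clip / n)
        mu -= lr * noisy_grad
    return mu

# "Old" task data that must later be deleted, and "new" task data.
old_data = rng.normal(2.0, 0.5, 500)
new_data = rng.normal(-1.0, 0.5, 500)

# Fit the DP generative model, then discard the raw old data.
mu_old = dp_fit_gaussian(old_data)
del old_data
synthetic_old = rng.normal(mu_old, 0.5, 500)  # rehearsal samples

# Continual step: fit on new data plus synthetic old data, a crude
# stand-in for the variational likelihood term over past data.
mu_continual = np.concatenate([new_data, synthetic_old]).mean()

# Baseline that forgets: fit on the new data alone.
mu_forgetful = new_data.mean()

print(f"DP model of old task:  {mu_old:.2f}")
print(f"continual estimate:    {mu_continual:.2f}")  # between the two task means
print(f"forgetful baseline:    {mu_forgetful:.2f}")  # drifts to the new task
```

The forgetful baseline collapses onto the new task's statistics, while the rehearsal-based estimate retains information about the deleted old task, at the cost of whatever distortion the DP noise and the generative model's approximation introduce.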


