DeltaGrad: Rapid retraining of machine learning models

06/26/2020
by Yinjun Wu, et al.

Machine learning models are not static and may need to be retrained on slightly changed datasets, for instance, after the addition or deletion of a set of data points. This has many applications, including privacy, robustness, bias reduction, and uncertainty quantification. However, retraining models from scratch is expensive. To address this problem, we propose the DeltaGrad algorithm for rapidly retraining machine learning models based on information cached during the training phase. We provide both theoretical and empirical support for the effectiveness of DeltaGrad, and show that it compares favorably to the state of the art.
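The abstract's core idea, caching per-iteration information during training and replaying it with cheap corrections after data deletion, can be illustrated with a heavily simplified sketch. This is not the actual DeltaGrad algorithm (which interleaves exact gradient steps with L-BFGS quasi-Hessian approximations to correct for parameter drift); here we merely replay cached full-data gradients for logistic regression and subtract the removed points' contribution, which is cheap when few points are deleted. All function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Mean gradient of the logistic loss over (X, y).
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def train(X, y, lr=0.1, steps=200):
    # Full training by gradient descent; cache the per-step
    # parameters and full-data gradients for later reuse.
    w = np.zeros(X.shape[1])
    cache = []
    for _ in range(steps):
        g = grad(w, X, y)
        cache.append((w.copy(), g))
        w = w - lr * g
    return w, cache

def retrain_after_deletion(X, y, removed, cache, lr=0.1):
    # Replay training on the reduced dataset using cached full-data
    # gradients: at each step, subtract the removed points' gradient
    # contribution (computed only on the small removed set) instead of
    # recomputing the full gradient. This ignores the drift between the
    # replayed and original trajectories, which the real DeltaGrad
    # corrects with L-BFGS quasi-Hessian updates.
    n, m = len(y), len(removed)
    Xr, yr = X[removed], y[removed]
    w = np.zeros(X.shape[1])
    for w_t, g_t in cache:
        g_removed = grad(w_t, Xr, yr)        # cheap: only m points
        g_corr = (n * g_t - m * g_removed) / (n - m)
        w = w - lr * g_corr
    return w
```

The per-step cost of the replay depends on the number of deleted points rather than the full dataset size, which is where the speedup over from-scratch retraining comes from.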


