Differentially Private Domain Adaptation with Theoretical Guarantees

06/15/2023
by Raef Bassily et al.

In many applications, the labeled data at the learner's disposal is subject to privacy constraints and is relatively limited. To derive a more accurate predictor for the target domain, it is often beneficial to leverage publicly available labeled data from an alternative domain, somewhat close to the target domain. This is the modern problem of supervised domain adaptation from a public source to a private target domain. We present two (ϵ, δ)-differentially private algorithms for supervised adaptation, both building on a general optimization formulation recently shown to benefit from favorable theoretical learning guarantees. Our first algorithm is designed for regression with linear predictors and is shown to solve a convex optimization problem. Our second algorithm is a more general solution for loss functions that may be non-convex but are Lipschitz and smooth. While our main objective is a theoretical analysis, we also report the results of several experiments, first demonstrating that the non-private versions of our algorithms outperform adaptation baselines, and then showing that, for larger values of the target sample size or ϵ, the performance of our private algorithms remains close to that of the non-private formulation.
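
To make the setting concrete, the sketch below illustrates one simple way to privatize gradient-based training on a mix of public source and private target data: gradients on the public source examples are used as-is, while per-example gradients on the private target examples are clipped and perturbed with Gaussian noise. This is only an illustrative sketch under simplified assumptions (squared loss with a linear predictor, a fixed mixing weight lam, and a crude sqrt(steps) noise calibration); it is not the algorithm analyzed in the paper.

import numpy as np


def dp_adaptation_sketch(X_src, y_src, X_tgt, y_tgt,
                         lam=0.5, epsilon=1.0, delta=1e-5,
                         clip_norm=1.0, steps=200, lr=0.1, seed=0):
    # Hypothetical illustration: noisy gradient descent on a weighted
    # combination of the public source loss and the private target loss.
    # Only the target-data gradients are clipped and noised, since the
    # source data is public.
    rng = np.random.default_rng(seed)
    n_tgt, d = X_tgt.shape
    w = np.zeros(d)

    # Simplified Gaussian noise scale using a sqrt(steps) composition
    # heuristic; a proper privacy accountant would be used in practice.
    sigma = clip_norm * np.sqrt(2.0 * steps * np.log(1.25 / delta)) / (epsilon * n_tgt)

    for _ in range(steps):
        # Public source gradient of the mean squared loss (used without noise).
        g_src = X_src.T @ (X_src @ w - y_src) / len(y_src)

        # Private target gradient: clip per-example gradients, average, add noise.
        per_ex = (X_tgt @ w - y_tgt)[:, None] * X_tgt
        norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
        per_ex = per_ex / np.maximum(1.0, norms / clip_norm)
        g_tgt = per_ex.mean(axis=0) + rng.normal(0.0, sigma, size=d)

        w -= lr * ((1.0 - lam) * g_src + lam * g_tgt)

    return w

In the non-private limit (no clipping, zero noise) this reduces to plain gradient descent on the weighted objective, consistent with the observation above that, as ϵ or the target sample size grows, private performance approaches that of the non-private formulation.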

