Training Private Models That Know What They Don't Know

05/28/2023
by Stephan Rabanser, et al.

Training reliable deep learning models that avoid making overconfident but incorrect predictions is a longstanding challenge. This challenge is further exacerbated when learning has to be differentially private: the protection provided to sensitive data comes at the price of injecting additional randomness into the learning process. In this work, we conduct a thorough empirical investigation of selective classifiers, which can abstain when they are unsure, under a differential privacy (DP) constraint. We find that several popular selective prediction approaches are ineffective in a differentially private setting because they increase the risk of privacy leakage. At the same time, we identify that a recent approach that only uses checkpoints produced by an off-the-shelf private learning algorithm stands out as particularly suitable under DP. Further, we show that differential privacy does not just harm utility but also degrades selective classification performance. To analyze this effect across privacy levels, we propose a novel evaluation mechanism that isolates selective prediction performance across model utility levels. Our experimental results show that recovering the performance level attainable by non-private models is possible, but it comes at a considerable coverage cost as the privacy budget decreases.
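To make the checkpoint-based idea concrete, below is a minimal PyTorch sketch of selective prediction from training checkpoints, together with a simple coverage-at-accuracy sweep in the spirit of the proposed evaluation mechanism. The function names, the agreement heuristic, and the 0.8 threshold are illustrative assumptions rather than the paper's exact method; the key property is that the scores are obtained purely by post-processing models released by a private training run, so (by the post-processing property of differential privacy) they consume no additional privacy budget.

import torch

def checkpoint_agreement_scores(checkpoints, x):
    # Confidence score: the fraction of intermediate checkpoints whose
    # prediction agrees with the final model's prediction on each input.
    # (Illustrative heuristic, not necessarily the paper's exact score.)
    with torch.no_grad():
        final_pred = checkpoints[-1](x).argmax(dim=1)
        agree = torch.zeros(x.shape[0])
        for model in checkpoints:
            agree += (model(x).argmax(dim=1) == final_pred).float()
    return agree / len(checkpoints)  # in [0, 1]; higher = more confident

def selective_predict(checkpoints, x, threshold=0.8):
    # Predict with the final checkpoint; abstain (label -1) whenever
    # the agreement score falls below the threshold.
    scores = checkpoint_agreement_scores(checkpoints, x)
    with torch.no_grad():
        preds = checkpoints[-1](x).argmax(dim=1)
    preds[scores < threshold] = -1
    return preds, scores

def coverage_at_target_accuracy(scores, preds, labels, target_acc):
    # Largest coverage at which the accepted subset reaches the target
    # accuracy, found by sweeping the abstention threshold. This makes
    # the "coverage cost" of matching non-private accuracy measurable.
    order = torch.argsort(scores, descending=True)
    correct = (preds[order] == labels[order]).float()
    acc = torch.cumsum(correct, dim=0) / torch.arange(1, len(correct) + 1)
    reached = (acc >= target_acc).nonzero()
    if len(reached) == 0:
        return 0.0
    return float(reached.max() + 1) / len(correct)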

Related research

Generalization Techniques Empirically Outperform Differential Privacy against Membership Inference (10/11/2021)
Differentially private training algorithms provide protection against on...

Equivariant Differentially Private Deep Learning (01/30/2023)
The formal privacy guarantee provided by Differential Privacy (DP) bound...

Feature Space Hijacking Attacks against Differentially Private Split Learning (01/11/2022)
Split learning and differential privacy are technologies with growing po...

Just Fine-tune Twice: Selective Differential Privacy for Large Language Models (04/15/2022)
With the increasing adoption of NLP models in real-world products, it be...

VideoDP: A Universal Platform for Video Analytics with Differential Privacy (09/18/2019)
Massive amounts of video data are ubiquitously generated in personal dev...

Differentially Private Data Publication with Multi-level Data Utility (12/13/2021)
Conventional private data publication mechanisms aim to retain as much d...

Differentially Private Sharpness-Aware Training (06/09/2023)
Training deep learning models with differential privacy (DP) results in ...
