Accuracy, Interpretability, and Differential Privacy via Explainable Boosting

06/17/2021
by Harsha Nori, et al.

We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even under strong differential privacy guarantees. In addition to high accuracy, applying DP to EBMs offers two further benefits: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) models can be edited after training, without loss of privacy, to correct errors that DP noise may have introduced.
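Because differential privacy is immune to post-processing, a trained DP-EBM can be inspected, plotted, or edited freely once released. The following is a minimal sketch of training and explaining such a model, assuming the InterpretML package's DPExplainableBoostingClassifier from interpret.privacy with epsilon/delta constructor arguments; the synthetic data and budget values are illustrative, not taken from the paper.

import numpy as np
from interpret.privacy import DPExplainableBoostingClassifier

# Illustrative synthetic tabular data (not from the paper)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Train with an (epsilon, delta)-DP guarantee on the released model;
# epsilon and delta are assumed constructor arguments with illustrative values.
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-5)
dp_ebm.fit(X, y)

# Post-processing is free under DP: auditing or manually editing the
# fitted per-feature shape functions spends no additional privacy budget.
global_explanation = dp_ebm.explain_global()

Any function of the released model parameters, including a manual correction to a noisy shape function, carries the same (epsilon, delta) guarantee as the model itself.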

Related research

Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising (07/22/2020)
Deep learning models leak significant amounts of information about their...

Differential Privacy Has Disparate Impact on Model Accuracy (05/28/2019)
Differential privacy (DP) is a popular mechanism for training machine le...

Differentially Private Estimation of Heterogeneous Causal Effects (02/22/2022)
Estimating heterogeneous treatment effects in domains such as healthcare...

Age-Dependent Differential Privacy (09/03/2022)
The proliferation of real-time applications has motivated extensive rese...

Financial Vision Based Differential Privacy Applications (12/28/2021)
The importance of deep learning data privacy has gained significant atte...

DPSyn: Experiences in the NIST Differential Privacy Data Synthesis Challenges (06/24/2021)
We summarize the experience of participating in two differential privacy...

Differential Privacy for Power Grid Obfuscation (01/21/2019)
The availability of high-fidelity energy networks brings significant val...