Accuracy, Interpretability, and Differential Privacy via Explainable Boosting

by Harsha Nori et al.

We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.
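The core idea behind DP-EBMs can be illustrated with a minimal sketch: an EBM boosting step aggregates residuals per feature bin, so bounding (clipping) each residual and adding Gaussian noise to the per-bin sums and counts yields a differentially private update, and because DP is closed under post-processing, the released bin scores can be edited afterwards at no extra privacy cost. The code below is an illustrative toy, not the authors' exact algorithm; `n_bins`, `clip`, and `sigma` are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_bin_scores(x, residuals, n_bins=8, clip=1.0, sigma=0.5):
    """Toy sketch of one DP-EBM-style step: clip residuals to bound
    sensitivity, sum them per feature bin, add Gaussian noise to the
    sums and counts, then return noisy per-bin averages."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)                      # bin index per sample
    r = np.clip(residuals, -clip, clip)               # sensitivity <= clip
    sums = np.bincount(bins, weights=r, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins).astype(float)
    # Gaussian mechanism: noise scaled to each aggregate's sensitivity.
    sums += rng.normal(0, sigma * clip, n_bins)
    counts += rng.normal(0, sigma, n_bins)
    return sums / np.maximum(counts, 1.0)

scores = dp_bin_scores(rng.normal(size=500), rng.normal(size=500))

# Post-processing: editing released scores costs no additional privacy,
# so a bin whose noisy score is implausible can be zeroed after training.
edited = scores.copy()
edited[np.abs(edited) > 2.0] = 0.0
```

Since the noisy aggregates are the only data-dependent quantities released, any later transformation of `scores` (such as the zeroing above) keeps the same privacy guarantee.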



