
Complex-valued deep learning with differential privacy

by Alexander Ziller et al.

We present ζ-DP, an extension of differential privacy (DP) to complex-valued functions. After introducing the complex Gaussian mechanism, whose properties we characterise in terms of (ε, δ)-DP and Rényi-DP, we present ζ-DP stochastic gradient descent (ζ-DP-SGD), a variant of DP-SGD for training complex-valued neural networks. We experimentally evaluate ζ-DP-SGD on three complex-valued tasks: electrocardiogram classification, speech classification, and magnetic resonance imaging (MRI) reconstruction. Moreover, we provide ζ-DP-SGD benchmarks for a large variety of complex-valued activation functions and on a complex-valued variant of the MNIST dataset. Our experiments demonstrate that DP training of complex-valued neural networks is possible with rigorous privacy guarantees and excellent utility.
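To make the two building blocks concrete, the sketch below illustrates (a) a complex Gaussian mechanism that adds circularly-symmetric complex noise to a complex-valued query, and (b) per-sample gradient clipping by the complex L2 norm, the two per-step operations a DP-SGD-style training loop would combine. This is a hedged illustration, not the paper's implementation: the function names, the noise split (variance σ² divided equally between real and imaginary parts), and the clipping rule are assumptions made for clarity.

```python
import numpy as np

def complex_gaussian_mechanism(value, sensitivity, sigma, rng=None):
    """Add circularly-symmetric complex Gaussian noise to a complex query result.

    Each of the real and imaginary parts receives i.i.d. Gaussian noise with
    standard deviation sensitivity * sigma / sqrt(2), so the total per-entry
    noise variance is (sensitivity * sigma)**2. (Illustrative sketch only.)
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity * sigma / np.sqrt(2.0)
    shape = np.shape(value)
    noise = rng.normal(0.0, scale, shape) + 1j * rng.normal(0.0, scale, shape)
    return np.asarray(value, dtype=complex) + noise

def clip_complex_gradient(grad, max_norm):
    """Clip a per-sample complex gradient to bound its L2 norm.

    np.linalg.norm on a complex array computes sqrt(sum(|g_i|^2)), i.e. the
    norm of the stacked real and imaginary parts, which bounds sensitivity
    exactly as real-valued DP-SGD clipping does.
    """
    norm = np.linalg.norm(grad)
    factor = min(1.0, max_norm / (norm + 1e-12))
    return grad * factor
```

In a DP-SGD-style loop, each per-sample gradient would be clipped with `clip_complex_gradient`, the clipped gradients summed, and `complex_gaussian_mechanism` applied to the sum (with `sensitivity = max_norm`) before the averaged, noised gradient updates the complex-valued weights.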


