Photonic Differential Privacy with Direct Feedback Alignment

06/07/2021
by Ruben Ohana, et al.

Optical Processing Units (OPUs), low-power photonic chips dedicated to large-scale random projections, have been used in previous work to train deep neural networks with Direct Feedback Alignment (DFA), an effective alternative to backpropagation. Here, we show how to leverage the intrinsic noise of optical random projections to build a differentially private DFA mechanism, making OPUs a solution of choice for private-by-design training. We provide a theoretical analysis of our adaptive privacy mechanism, carefully measuring how the noise of optical random projections propagates through the learning process and gives rise to provable differential privacy guarantees. Finally, we conduct experiments demonstrating that our learning procedure achieves solid end-task performance.
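To make the mechanism concrete, below is a minimal NumPy sketch of one DFA update in which the feedback projection of the error carries additive Gaussian noise, standing in for the intrinsic noise of the optical random projection. The network dimensions, noise scale sigma, and learning rate are illustrative assumptions, not values from the paper, and the sketch is not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from the paper)
d_in, d_hid, d_out = 20, 64, 10

# Trainable weights of a two-layer network
W1 = rng.normal(0.0, 0.1, (d_hid, d_in))
W2 = rng.normal(0.0, 0.1, (d_out, d_hid))

# Fixed random feedback matrix: the role played by the OPU's
# optical random projection in the feedback path
B1 = rng.normal(0.0, 1.0 / np.sqrt(d_out), (d_hid, d_out))

def dfa_step(x, y, lr=0.05, sigma=0.01):
    """One DFA update with a noisy feedback projection.

    sigma models the intrinsic noise of the optical projection;
    in the paper, this noise is what yields differential privacy.
    """
    global W1, W2

    # Forward pass
    a1 = W1 @ x
    h1 = np.tanh(a1)
    y_hat = W2 @ h1            # linear output layer
    e = y_hat - y              # output error (squared-loss gradient)

    # DFA feedback: project the *global* error through a fixed random
    # matrix (instead of backprop's transposed weights), with additive
    # Gaussian noise standing in for the photonic hardware noise.
    feedback = B1 @ e + rng.normal(0.0, sigma, size=d_hid)
    delta1 = feedback * (1.0 - h1 ** 2)   # tanh derivative

    # Parameter updates
    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(delta1, x)

# Example: one update on a random sample
x = rng.normal(size=d_in)
y = rng.normal(size=d_out)
dfa_step(x, y)

Note that the feedback matrix is fixed and random rather than the transpose of the forward weights; the paper's contribution is to treat the noise that the optical hardware adds to this projection as the privacy-granting randomness of a differentially private mechanism.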


Related research

10/08/2020  Differentially Private Deep Learning with Direct Feedback Alignment
06/02/2019  Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness
03/04/2020  Privacy-preserving Learning via Deep Net Pruning
09/18/2017  Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
06/02/2020  Light-in-the-loop: using a photonics co-processor for scalable training of neural networks
05/08/2023  Differentially Private Attention Computation
06/04/2023  Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?
