Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning

07/26/2021
by Karthik Garimella, et al.

Privacy concerns in client-server machine learning have given rise to private inference (PI), in which neural inference runs directly on encrypted inputs. PI protects both the client's personal data and the server's intellectual property. A common practice in PI is to compute nonlinear functions, chiefly ReLUs, privately using garbled circuits. However, garbled circuits incur high storage, bandwidth, and latency costs. To mitigate these costs, PI-friendly polynomial activation functions have been proposed as ReLU replacements. In this work, we ask: is it feasible to substitute all ReLUs with low-degree polynomial activation functions to build deep, privacy-friendly neural networks? We explore this question by analyzing the challenges of substituting ReLUs with polynomials, starting with simple drop-and-replace solutions and progressing to novel, more involved replace-and-retrain strategies. We examine the limitations of each method and comment on the use of polynomial activation functions for PI. We find that all evaluated solutions suffer from the escaping activation problem: forward activation values inevitably drift at an exponential rate away from the stable regions of the polynomials, leading to exploding values (NaNs) or poor approximations.
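The escaping activation problem the abstract describes can be seen in a few lines of code. The sketch below is our own illustration, not code from the paper: it iterates a degree-2 ReLU substitute with illustrative coefficients and shows how a pre-activation value just outside the polynomial's stable region blows up under repeated application, while ReLU itself stays bounded.

```python
# Illustrative degree-2 ReLU substitute; the coefficients below are an
# example quadratic fit, not taken from the paper.
def poly_relu(x):
    return 0.125 * x * x + 0.5 * x + 0.25

def relu(x):
    return max(0.0, x)

# Iterating an activation mimics stacking layers (weights folded away
# for simplicity). The fixed points of poly_relu solve
# 0.125*x**2 - 0.5*x + 0.25 = 0, i.e. x = 2 +/- sqrt(2), so any input
# above ~3.414 grows without bound under repeated application.
x_poly = x_relu = 5.0
for depth in range(20):
    x_poly = poly_relu(x_poly)
    x_relu = relu(x_relu)

print(x_relu)  # stays at 5.0: ReLU is stable for positive inputs
print(x_poly)  # escapes: the quadratic term dominates and the value
               # overflows to inf within roughly 14 "layers"
```

Once the quadratic term dominates, each "layer" roughly squares the activation, which is the exponential escape the paper observes; in a trained network this surfaces as NaNs or as inputs far outside the interval where the polynomial approximates ReLU well.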


Related research

- On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks (11/11/2020): Outsourcing neural network inference tasks to an untrusted cloud raises ...
- Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference (03/05/2022): Multiparty computation approaches to secure neural network inference tra...
- Fighting COVID-19 in the Dark: Methodology for Improved Inference Using Homomorphically Encrypted DNN (11/05/2021): Privacy-preserving deep neural network (DNN) inference is a necessity in...
- Fast and Private Inference of Deep Neural Networks by Co-designing Activation Functions (06/14/2023): Machine Learning as a Service (MLaaS) is an increasingly popular design ...
- nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data (08/12/2019): In previous work, Boemer et al. introduced nGraph-HE, an extension to th...
- Stabilizing Inputs to Approximated Nonlinear Functions for Inference with Homomorphic Encryption in Deep Neural Networks (02/05/2019): Leveled Homomorphic Encryption (LHE) offers a potential solution that co...
- DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation (05/22/2023): Formal certification of Neural Networks (NNs) is crucial for ensuring th...
