On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks

11/11/2020
by   Ramy E. Ali, et al.

Outsourcing neural network inference tasks to an untrusted cloud raises data privacy and integrity concerns. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed that replace non-polynomial activation functions, such as the rectified linear unit (ReLU), with polynomial activation functions. Such techniques usually require the polynomial coefficients to lie in a finite field. Motivated by this requirement, several works have proposed replacing the ReLU activation function with the square activation function. In this work, we empirically show that the square function is not the best second-degree polynomial for replacing the ReLU function in deep neural networks. We instead propose a second-degree polynomial activation function with a first-order term and empirically show that it leads to much better models. Our experiments on the CIFAR-10 dataset show that the proposed polynomial activation function significantly outperforms the square activation function.
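The abstract does not give the exact parameterization of the proposed activation, only that it is a second-degree polynomial with a non-zero first-order term, in contrast to the plain square function f(x) = x^2. Below is a minimal PyTorch sketch of that idea, f(x) = a x^2 + b x. The class name QuadraticActivation, the initial values a = 0.25 and b = 0.5, and the choice to make the coefficients learnable are illustrative assumptions for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn


class QuadraticActivation(nn.Module):
    """Second-degree polynomial activation f(x) = a*x^2 + b*x.

    Unlike the square activation f(x) = x^2, this keeps a first-order
    term. Initial coefficients and learnability are illustrative
    assumptions; the paper's abstract does not specify them.
    """

    def __init__(self, a: float = 0.25, b: float = 0.5, learnable: bool = True):
        super().__init__()
        a_t = torch.tensor(float(a))
        b_t = torch.tensor(float(b))
        if learnable:
            # Train the polynomial coefficients along with the network weights.
            self.a = nn.Parameter(a_t)
            self.b = nn.Parameter(b_t)
        else:
            # Keep fixed coefficients (e.g., to be encoded in a finite field).
            self.register_buffer("a", a_t)
            self.register_buffer("b", b_t)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.a * x * x + self.b * x


# Drop-in replacement for nn.ReLU() in, e.g., a small CNN block:
block = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), QuadraticActivation())
```

Because the activation is still a degree-two polynomial, its coefficients can be quantized to a finite field as the surveyed privacy-preserving and verifiable inference techniques require, while the first-order term lets it track the asymmetry of ReLU better than x^2 alone.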


