- Mish: A Self Regularized Non-Monotonic Neural Activation Function
- Variations on the Chebyshev-Lagrange Activation Function
- Highly Accurate CNN Inference Using Approximate Activation Functions over Homomorphic Encryption
- Privacy-Preserving Deep Learning for any Activation Function
- Reducing ReLU Count for Privacy-Preserving CNN Speedup
- Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions
- Norm-preserving Orthogonal Permutation Linear Unit Activation Functions (OPLU)

On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks
Outsourcing neural network inference tasks to an untrusted cloud raises data privacy and integrity concerns. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed, based on replacing non-polynomial activation functions such as the rectified linear unit (ReLU) with polynomial activation functions. Such techniques usually require the polynomial coefficients to lie in a finite field. Motivated by this requirement, several works proposed replacing the ReLU activation function with the square activation function. In this work, we empirically show that the square function is not the best second-degree polynomial that can replace the ReLU function in deep neural networks. We instead propose a second-degree polynomial activation function with a first-order term and empirically show that it leads to much better models. Our experiments on the CIFAR-10 dataset show that our proposed polynomial activation function significantly outperforms the square activation function.
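
As a rough illustration of the idea (a minimal sketch, not the authors' exact construction), the code below contrasts the plain square activation f(x) = x^2 with a second-degree polynomial activation that includes a first-order term, f(x) = a * x^2 + b * x. The coefficient initializations a_init and b_init are placeholder assumptions introduced here; the abstract does not specify how the coefficients are chosen, so they are implemented as learnable parameters.

```python
import torch
import torch.nn as nn


class SquareActivation(nn.Module):
    """Plain square activation f(x) = x^2, a common ReLU substitute
    in privacy-preserving inference."""

    def forward(self, x):
        return x * x


class QuadraticActivation(nn.Module):
    """Second-degree polynomial activation with a first-order term,
    f(x) = a * x^2 + b * x. The initial values of a and b are
    illustrative assumptions, not values taken from the paper."""

    def __init__(self, a_init: float = 0.1, b_init: float = 1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a_init))
        self.b = nn.Parameter(torch.tensor(b_init))

    def forward(self, x):
        return self.a * x * x + self.b * x


if __name__ == "__main__":
    x = torch.linspace(-2.0, 2.0, steps=5)
    print("square:   ", SquareActivation()(x))
    print("quadratic:", QuadraticActivation()(x))
```

In an actual privacy-preserving or verifiable inference pipeline, the trained coefficients would still have to be quantized or encoded into the finite field required by the underlying protocol; this sketch only shows the activation function itself.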