S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training
We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) on private data from multiple sources via secret-shared secure function evaluation. In short, consider a virtual third party to whom every data holder sends their inputs and which computes the neural network: in our case, this virtual third party is in fact a set of servers, each of which learns nothing individually, even in the presence of a malicious (but non-colluding) adversary. Previous work in this area has been limited to a single activation function, ReLU, rendering the approach impractical for many use cases. For the first time, we provide fast and verifiable protocols for all common activation functions and optimize them to run in a secret-shared manner. The ability to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid, and so on allows us to reuse previously written NNs without modification, vastly reducing developer effort and code complexity. In recent years, ReLU has been found to converge faster and be more computationally efficient than saturating non-linearities such as sigmoid or tanh. However, we argue that it would be remiss not to extend the mechanism to functions such as the logistic sigmoid, tanh, and softmax, which remain fundamental because they express outputs as probabilities and enjoy the universal approximation property. Their role in RNNs and several recent advances make them all the more relevant.
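
The "virtual third party" intuition can be illustrated with a minimal sketch of additive secret sharing over a finite ring. This is for exposition only and is not the S++ protocol itself: the function names, the 64-bit ring, and the three-server setting below are assumptions, and none of the paper's verifiable activation protocols, fixed-point encoding, or malicious-security machinery are shown.

import secrets

RING = 2 ** 64  # shares live in the ring Z_{2^64}

def share(secret: int, n_servers: int = 3):
    """Split `secret` into n additive shares; any n-1 of them are uniformly random."""
    shares = [secrets.randbelow(RING) for _ in range(n_servers - 1)]
    shares.append((secret - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Only the sum of all shares reveals the secret."""
    return sum(shares) % RING

def add_shares(x_shares, y_shares):
    """Linear operations (e.g. the additions inside an NN layer) can be
    computed by each server locally on its own shares, without communication."""
    return [(x + y) % RING for x, y in zip(x_shares, y_shares)]

if __name__ == "__main__":
    x, y = 42, 100
    xs, ys = share(x), share(y)
    assert reconstruct(xs) == x
    assert reconstruct(add_shares(xs, ys)) == (x + y) % RING

The hard part is precisely what the sketch omits: non-linear activations such as sigmoid and softmax cannot be evaluated share-wise like the addition above, which is why the paper devotes dedicated, verifiable protocols to them.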