SAU: Smooth activation function using convolution with approximate identities

09/27/2021
by Koushik Biswas, et al.

Well-known activation functions like ReLU or Leaky ReLU are non-differentiable at the origin. Over the years, many smooth approximations of ReLU have been proposed using various smoothing techniques. We propose new smooth approximations of a non-differentiable activation function by convolving it with approximate identities. In particular, we present smooth approximations of Leaky ReLU and show that they outperform several well-known activation functions on various datasets and models. We call this function the Smooth Activation Unit (SAU). Replacing ReLU by SAU, we get a 5.12% improvement with the ShuffleNet V2 (2.0x) model on the CIFAR100 dataset.
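The core idea, smoothing the kink of Leaky ReLU by convolving it with a Gaussian approximate identity, has a simple closed form: (LeakyReLU_a * G_sigma)(x) = a*x + (1-a)*[x*Phi(x/sigma) + sigma*phi(x/sigma)], where Phi and phi are the standard normal CDF and PDF. The PyTorch snippet below is a minimal sketch of that construction; the class name, parameter names, and default values (alpha, sigma, the trainable-sigma option) are illustrative assumptions and not necessarily the paper's exact SAU parametrization.

```python
import math
import torch
import torch.nn as nn


class SmoothLeakyReLU(nn.Module):
    """Sketch: Leaky ReLU smoothed by convolution with a Gaussian
    approximate identity of scale `sigma` (assumed parametrization)."""

    def __init__(self, alpha: float = 0.25, sigma: float = 1.0,
                 trainable_sigma: bool = True):
        super().__init__()
        self.alpha = alpha
        sigma_t = torch.tensor(float(sigma))
        # No positivity constraint is enforced on sigma in this sketch.
        self.sigma = nn.Parameter(sigma_t) if trainable_sigma else sigma_t

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Closed form of (LeakyReLU_alpha * Gaussian_sigma)(x):
        #   alpha*x + (1 - alpha) * [ x*Phi(x/sigma) + sigma*phi(x/sigma) ]
        z = x / self.sigma
        cdf = 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))      # Phi(z)
        pdf = torch.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
        return self.alpha * x + (1.0 - self.alpha) * (x * cdf + self.sigma * pdf)


if __name__ == "__main__":
    act = SmoothLeakyReLU()
    x = torch.linspace(-3.0, 3.0, 7)
    # Smooth everywhere; approaches LeakyReLU away from the origin.
    print(act(x))
```

As sigma shrinks, the Gaussian kernel approaches a Dirac delta and the output converges to Leaky ReLU itself, which is why keeping sigma trainable lets the network interpolate between a smooth and a nearly piecewise-linear activation.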

