Piecewise Linear Activation Functions For More Efficient Deep Networks

11/11/2015
by Cheng-Yang Fu, et al.

This submission has been withdrawn by arXiv administrators because it is intentionally incomplete, which is in violation of our policies.

Related research

06/17/2021 · Orthogonal-Padé Activation Functions: Trainable Activation Functions for Smooth and Faster Convergence in Deep Networks
We have proposed orthogonal-Padé activation functions, which are trainab...

11/27/2022 · Neural Network Verification as Piecewise Linear Optimization: Formulations for the Composition of Staircase Functions
We present a technique for neural network verification using mixed-integ...

06/11/2020 · On the Asymptotics of Wide Networks with Polynomial Activations
We consider an existing conjecture addressing the asymptotic behavior of...

07/07/2019 · Towards Robust, Locally Linear Deep Networks
Deep networks realize complex mappings that are often understood by thei...

03/07/2022 · Singular Value Perturbation and Deep Network Optimization
We develop new theoretical results on matrix perturbation to shed light ...

01/21/2023 · Limitations of Piecewise Linearity for Efficient Robustness Certification
Certified defenses against small-norm adversarial examples have received...

07/15/2019 · Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
The performance of deep network learning strongly depends on the choice ...
