R3Net: Random Weights, Rectifier Linear Units and Robustness for Artificial Neural Network

03/12/2018
by   Arun Venkitaraman, et al.

We consider a neural network architecture with randomized features: a sign-splitter followed by rectified linear units (ReLU). We prove that our architecture is robust to input perturbations: the output feature of the network is Lipschitz continuous with respect to the input perturbation. We further show that the network output exhibits a discrimination ability, in the sense that inputs that are not arbitrarily close generate output vectors whose mutual distance obeys a certain lower bound. This ensures that two different inputs remain discriminable even as the mapping contracts distances in the output feature space.
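A minimal NumPy sketch of one layer of the architecture described above, under stated assumptions: the sign-splitter is taken to stack the positive and negated random projections before the ReLU, and all names and dimensions are illustrative, not taken from the paper. The stacked map [Wx; -Wx] followed by ReLU gives the Lipschitz-style bound checked at the end, since stacking scales distances by √2, ReLU is 1-Lipschitz, and a linear map is bounded by its operator norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def r3net_layer(x, W):
    """Hypothetical R3Net-style layer: random projection -> sign-splitter -> ReLU."""
    z = W @ x                        # randomized features (W is fixed, not trained)
    split = np.concatenate([z, -z])  # sign-splitter: keep both signs of each feature
    return np.maximum(split, 0.0)    # rectified linear unit

d_in, d_out = 8, 16
W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # fixed random weights

x1 = rng.standard_normal(d_in)
x2 = x1 + 0.01 * rng.standard_normal(d_in)  # small input perturbation

y1, y2 = r3net_layer(x1, W), r3net_layer(x2, W)

# Lipschitz-style robustness check: the output distance is bounded by the
# input distance times sqrt(2) * ||W||_2 (operator norm of the stacked map).
L = np.sqrt(2) * np.linalg.norm(W, 2)
assert np.linalg.norm(y1 - y2) <= L * np.linalg.norm(x1 - x2) + 1e-12
```

Note that the sign-splitter loses no information: ReLU([z; -z]) keeps both the positive and negative parts of z, so the input to the nonlinearity remains recoverable, which is consistent with the discrimination property claimed in the abstract.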
