Random depthwise signed convolutional neural networks

06/15/2018
by Yunzhe Xue, et al.

Random weights in convolutional neural networks have shown promising results in previous studies, yet they remain below par compared to trained networks on image benchmarks. We explore depthwise convolutional neural networks with thousands of random filters in each layer, the sign activation function between layers, and training performed only at the last layer with a linear support vector machine. We show that our network attains higher accuracies than previous random networks and is comparable to trained large networks on large images from the STL10 and ImageNet benchmarks. Since our network lacks a gradient due to the sign activation, it is not possible to produce gradient-based adversarial examples targeting it. We also show that our network is less affected by gradient-based adversarial examples produced from state-of-the-art networks, which considerably hamper those networks' performance. As a possible explanation for our network's accuracy with random weights, we show that the margin of the linear support vector machine is larger on our final representation than on the original dataset and increases with the number of random filters. Our network is simple and fast to train and predict with, attains high classification accuracy particularly on large images, is hard to attack with adversarial examples, and is less affected by gradient-based adversarial examples than state-of-the-art networks.
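The abstract describes a three-part pipeline: fixed random depthwise convolutional filters, sign activations between layers, and a linear SVM trained only on the final representation. Below is a minimal sketch of such a pipeline (not the authors' code) using PyTorch and scikit-learn; the layer depth, depth multiplier, kernel size, and global average pooling are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a random depthwise signed network, assuming a
# depthwise conv with a channel multiplier and global average pooling.
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC


class RandomDepthwiseSignNet(nn.Module):
    """Fixed random depthwise conv layers with sign activations."""

    def __init__(self, in_channels=3, depth=2, multiplier=8, kernel_size=3):
        super().__init__()
        layers = []
        c = in_channels
        for _ in range(depth):
            # Depthwise conv (groups=c): each input channel gets its own
            # `multiplier` random filters, so the width grows multiplicatively.
            layers.append(nn.Conv2d(c, c * multiplier, kernel_size,
                                    padding=kernel_size // 2, groups=c,
                                    bias=False))
            c *= multiplier
        self.convs = nn.ModuleList(layers)
        # The random weights stay frozen: nothing here is ever trained.
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        for conv in self.convs:
            # sign() is piecewise constant, so the feature extractor has
            # no useful gradient for an attacker to follow.
            x = torch.sign(conv(x))
        # Global average pooling yields a fixed-length feature vector.
        return x.mean(dim=(2, 3))


# Hypothetical usage: extract features once, then train only the linear SVM.
# x_train/x_test are (N, 3, H, W) float tensors; y_train/y_test are labels.
net = RandomDepthwiseSignNet().eval()
with torch.no_grad():
    z_train = net(x_train).numpy()
    z_test = net(x_test).numpy()
clf = LinearSVC().fit(z_train, y_train)
print("test accuracy:", clf.score(z_test, y_test))
```

Because the sign activation has zero gradient almost everywhere, backpropagating through this feature extractor is uninformative, which is the property the abstract relies on to rule out gradient-based adversarial examples targeting the network.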

