Random depthwise signed convolutional neural networks

06/15/2018
by Yunzhe Xue, et al.

Random weights in convolutional neural networks have shown promising results in previous studies, yet such networks remain below par compared to trained networks on image benchmarks. We explore depthwise convolutional neural networks with thousands of random filters in each layer, the sign activation function between layers, and training performed only at the last layer with a linear support vector machine. We show that our network attains higher accuracies than previous random networks and is comparable to trained large networks on large images from the STL10 and ImageNet benchmarks. Since our network lacks a gradient due to the sign activation, it is not possible to produce gradient-based adversarial examples that target it directly. We also show that our network is less affected by gradient-based adversarial examples produced from state-of-the-art networks, even though such examples considerably hamper the performance of those networks. As a possible explanation for our network's accuracy with random weights, we show that the margin of the linear support vector machine is larger on our final representation than on the original dataset and increases with the number of random filters. Our network is simple and fast to train and predict with, attains high classification accuracy particularly on large images, is hard to attack with adversarial examples, and is less affected by gradient-based adversarial examples than state-of-the-art networks.
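To make the pipeline concrete, below is a minimal sketch of the architecture the abstract describes, written with PyTorch and scikit-learn. The choice of libraries, the layer count, filter multiplier, kernel size, and pooling steps are all illustrative assumptions rather than the authors' exact configuration; what the sketch preserves is the key structure: fixed random depthwise filters, a sign activation between layers, and a linear SVM trained only on the final representation.

```python
# Illustrative sketch, not the authors' implementation: random depthwise
# convolutions + sign activations as a fixed feature extractor, with a
# linear SVM as the only trained component.
import torch
import torch.nn.functional as F
from sklearn.svm import LinearSVC

torch.manual_seed(0)

def random_signed_features(x, n_layers=2, multiplier=8, kernel_size=3):
    """Fixed random depthwise convolutions with sign activations; untrained."""
    with torch.no_grad():
        for _ in range(n_layers):
            in_ch = x.shape[1]
            # Depthwise step: each input channel is convolved with
            # `multiplier` random filters, drawn once and never updated.
            w = torch.randn(in_ch * multiplier, 1, kernel_size, kernel_size)
            x = F.conv2d(x, w, padding=kernel_size // 2, groups=in_ch)
            x = torch.sign(x)       # sign activation: outputs +/-1 (0 at 0)
            x = F.max_pool2d(x, 2)  # downsample between layers
        return x.flatten(start_dim=1)

# Usage with random data standing in for STL10/ImageNet images.
images = torch.randn(64, 3, 32, 32)           # (batch, channels, H, W)
labels = torch.randint(0, 10, (64,)).numpy()  # 10 hypothetical classes

features = random_signed_features(images).numpy()
clf = LinearSVC().fit(features, labels)       # only the last layer is trained
print("train accuracy:", clf.score(features, labels))
```

Note the design consequence the abstract relies on: because the random filters are never updated and the sign activation has zero gradient almost everywhere, there is no useful gradient to backpropagate through the feature extractor, which is why gradient-based adversarial attacks cannot target such a network directly.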
