A Study on the Uncertainty of Convolutional Layers in Deep Neural Networks

11/27/2020
by Haojing Shen, et al.

This paper identifies a Min-Max property in the connection weights of the convolutional layers of a neural network, using LeNet as the study case. Specifically, the Min-Max property means that, during back-propagation training of LeNet, the weights of the convolutional layers move away from the centers of their intervals, decreasing toward their minimum or increasing toward their maximum. From the perspective of uncertainty, we show through a simplified formulation of convolution that the Min-Max property corresponds to minimizing the fuzziness of the model parameters. Experiments confirm that a model with the Min-Max property has stronger adversarial robustness, so the property can be incorporated into the design of the loss function. This paper thus points out a changing tendency of uncertainty in the convolutional layers of the LeNet structure and offers some insight into the interpretability of convolution.
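The link between the Min-Max property and fuzziness can be illustrated with a small sketch. The abstract does not spell out the paper's exact fuzziness definition, so the snippet below assumes a common choice, a De Luca–Termini-style fuzzy entropy: weights are mapped into [0, 1] over their interval, and the measure is maximal at the interval center and zero at either endpoint. Under that assumption, weights driven to their minimum or maximum (the Min-Max property) have lower fuzziness than weights clustered at the center. The function name `fuzziness` and the sample weight lists are illustrative, not taken from the paper.

```python
import numpy as np

def fuzziness(weights, lo, hi, eps=1e-12):
    """De Luca-Termini-style fuzzy entropy of weights on the interval [lo, hi].

    Each weight is normalized to a membership value mu in [0, 1]; the entropy
    term -(mu*log2(mu) + (1-mu)*log2(1-mu)) peaks at mu = 0.5 (interval
    center) and vanishes at mu = 0 or mu = 1 (interval endpoints).
    This is an assumed measure, not the paper's exact formulation.
    """
    mu = np.clip((np.asarray(weights, dtype=float) - lo) / (hi - lo), eps, 1.0 - eps)
    return float(-np.mean(mu * np.log2(mu) + (1.0 - mu) * np.log2(1.0 - mu)))

# Weights near the interval center are maximally fuzzy ...
center_heavy = [0.45, 0.50, 0.55]
# ... while weights pushed to the endpoints, as the Min-Max property
# describes, have fuzziness close to zero.
endpoint_heavy = [0.02, 0.98, 0.01]

assert fuzziness(center_heavy, 0.0, 1.0) > fuzziness(endpoint_heavy, 0.0, 1.0)
```

Under this reading, training that exhibits the Min-Max property implicitly drives the fuzziness of the convolutional weights downward, which is the uncertainty-minimization interpretation the abstract describes.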

