Deep Neural Network Ensembles

04/11/2019
by Sean Tao, et al.

Current deep neural networks suffer from two problems: first, they are hard to interpret, and second, they are prone to overfitting. There have been many attempts to define interpretability in neural networks, but these definitions typically lack causality or generality. A myriad of regularization techniques have been developed to prevent overfitting, and they have helped drive deep learning to its current prominence; however, while most regularization techniques are justified empirically and even intuitively, there is little underlying theory. This paper argues that to extract the features a neural network uses to make decisions, it is important to look at the paths between the clusters that exist in its hidden spaces. These features are of particular interest because they reflect the true decision-making process of the network. This analysis is then extended to yield an ensemble algorithm for arbitrary neural networks that comes with guarantees on test accuracy. Finally, a discussion of these guarantees is presented, along with their implications for neural networks, including an intuitive explanation for all current regularization methods. The ensemble algorithm has produced state-of-the-art results for Wide-ResNet on CIFAR-10 and has improved test accuracy for every model it has been applied to.
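The abstract does not spell out the ensemble procedure itself, but the general idea of ensembling neural networks can be illustrated with a minimal, hypothetical sketch: average the class-probability outputs of several independently trained models and predict the class with the highest averaged probability. This is a generic probability-averaging ensemble, not the paper's specific algorithm; the model callables below are toy stand-ins, not real networks.

```python
# Generic probability-averaging ensemble (a sketch, NOT the paper's algorithm).
# Each "model" is any callable mapping an input x to a list of class probabilities.

def ensemble_predict(models, x):
    """Average class probabilities across models; return (argmax class, averaged probs)."""
    num_classes = len(models[0](x))
    avg = [0.0] * num_classes
    for model in models:
        probs = model(x)
        for i, p in enumerate(probs):
            avg[i] += p / len(models)
    best = max(range(num_classes), key=lambda i: avg[i])
    return best, avg

# Three toy "models" that disagree on a fixed input:
m1 = lambda x: [0.6, 0.3, 0.1]
m2 = lambda x: [0.2, 0.5, 0.3]
m3 = lambda x: [0.3, 0.4, 0.3]

label, avg = ensemble_predict([m1, m2, m3], x=None)
# Averaging smooths out individual-model errors: here m1 alone picks class 0,
# but the ensemble average favors class 1.
```

Averaging probabilities (rather than hard votes) is one common design choice because it retains each model's confidence information; the paper's contribution lies in which networks enter the ensemble and the resulting test-accuracy guarantees, which this sketch does not capture.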

