Universal Approximation by a Slim Network with Sparse Shortcut Connections

11/22/2018
by   Fenglei Fan, et al.

Over recent years, deep learning has become a mainstream method in machine learning, and increasingly advanced networks are being developed to solve real-world problems in many important areas. Among the successful features of network architectures, shortcut connections, which feed the outputs of earlier layers as inputs to later layers, are well established and produce excellent results, as in ResNet and DenseNet. Despite the power of shortcuts, important questions remain about their underlying mechanism and associated functionality. For example, does adding shortcuts lead to a more compact structure? How should shortcuts be placed to optimize the efficiency and capacity of a network model? Along this direction, we demonstrate that, given only one neuron in each layer, shortcuts can be sparsely placed so that the slim network becomes a universal approximator. Our theoretically guaranteed sparse network model can potentially achieve learning performance comparable to that of densely connected networks on well-known benchmarks.
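To make the architecture concrete, here is a minimal sketch of a forward pass through such a slim network: one neuron per hidden layer, with a sparse set of shortcut connections that add the outputs of earlier layers to later layers' pre-activations. The function names, the additive shortcut wiring, and the particular shortcut placement below are illustrative assumptions for exposition, not the paper's exact construction.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def slim_forward(x, weights, biases, shortcuts):
    """Forward pass through a slim network with one neuron per layer.

    `shortcuts` maps a layer index to a list of earlier layer indices
    whose outputs are added to that layer's pre-activation. Leaving the
    map mostly empty makes the shortcut pattern sparse.
    """
    acts = [x]  # acts[0] holds the scalar input
    for layer, (w, b) in enumerate(zip(weights, biases), start=1):
        pre = w * acts[layer - 1] + b
        for src in shortcuts.get(layer, []):  # sparse shortcut inputs
            pre += acts[src]
        acts.append(relu(pre))
    return acts[-1]

# A 4-layer slim network in which layer 3 also receives layer 1's
# output via a shortcut (a hypothetical placement for illustration).
weights = [1.0, 0.5, 2.0, 1.0]
biases = [0.0, 0.0, -1.0, 0.0]
shortcuts = {3: [1]}
y = slim_forward(2.0, weights, biases, shortcuts)
```

The shortcut here is a plain identity addition, as in ResNet; a DenseNet-style variant would instead concatenate earlier activations, at the cost of widening later layers.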


Related research

06/28/2018
ResNet with one-neuron hidden layers is a Universal Approximator
We demonstrate that a very deep ResNet with stacked modules with one neu...

04/09/2021
CondenseNet V2: Sparse Feature Reactivation for Deep Networks
Reusing features in deep networks through dense connectivity is an effec...

11/17/2016
DelugeNets: Deep Networks with Efficient and Flexible Cross-layer Information Inflows
Deluge Networks (DelugeNets) are deep neural networks which efficiently ...

11/23/2017
Deep Expander Networks: Efficient Deep Networks from Graph Theory
Deep Neural Networks, while being unreasonably effective for several vis...

08/28/2021
ThresholdNet: Pruning Tool for Densely Connected Convolutional Networks
Deep neural networks have made significant progress in the field of comp...

04/15/2022
Universal approximation property of invertible neural networks
Invertible neural networks (INNs) are neural network architectures with ...

10/30/2021
Neural Network based on Automatic Differentiation Transformation of Numeric Iterate-to-Fixedpoint
This work proposes a Neural Network model that can control its depth usi...
