Truncating Wide Networks using Binary Tree Architectures

04/03/2017
by Yan Zhang, et al.

Recent studies show that a wide deep network can obtain accuracy comparable to that of a deeper but narrower network. Compared to narrower and deeper networks, wide networks employ relatively fewer layers and offer important benefits: they run faster on parallel computing devices and are less affected by the vanishing gradient problem. However, the parameter size of a wide network can be very large due to the large width of each layer. In order to keep the benefits of wide networks while improving their parameter size and accuracy trade-off, we propose a binary tree architecture that truncates the architecture of wide networks by reducing their width. More precisely, in the proposed architecture the width is continuously reduced from lower layers to higher layers in order to increase the expressive capacity of the network with a smaller increase in parameter size. Also, to ease the vanishing gradient problem, features obtained at different layers are concatenated to form the output of our architecture. By employing the proposed architecture on a baseline wide network, we can construct and train a new network with the same depth but a considerably smaller number of parameters. In our experimental analyses, we observe that the proposed architecture enables us to obtain a better parameter size and accuracy trade-off than baseline networks on various benchmark image classification datasets. The results show that our model can decrease the 20.43% classification error of the baseline on Cifar-100 using only 28% of its parameters. Our code is available at https://github.com/ZhangVision/bitnet.
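The core idea, a width that shrinks toward higher layers with per-level features concatenated at the output, can be sketched in a few lines. The following PyTorch snippet is only a minimal illustration under assumed settings (the module name BinaryTreeBlock, three levels, and width halved per level are our own choices); it is not the authors' released implementation, which is available at the repository linked above.

```python
import torch
import torch.nn as nn


class BinaryTreeBlock(nn.Module):
    """Illustrative block: the width is halved at every level and the
    features from all levels are concatenated to form the output."""

    def __init__(self, in_channels, base_width, depth=3):
        super().__init__()
        self.levels = nn.ModuleList()
        channels, width = in_channels, base_width
        for _ in range(depth):
            self.levels.append(nn.Sequential(
                nn.Conv2d(channels, width, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            ))
            # Width shrinks toward higher layers.
            channels, width = width, max(width // 2, 1)

    def forward(self, x):
        outputs = []
        for level in self.levels:
            x = level(x)
            outputs.append(x)
        # Concatenating per-level features gives lower layers a short
        # path to the output, which eases the vanishing gradient problem.
        return torch.cat(outputs, dim=1)


# Example: 16 input channels, first level of width 64, 3 levels
# -> output has 64 + 32 + 16 = 112 channels.
block = BinaryTreeBlock(in_channels=16, base_width=64, depth=3)
y = block(torch.randn(2, 16, 32, 32))
print(y.shape)  # torch.Size([2, 112, 32, 32])
```

Compared to a block that keeps a constant width of 64 at every level, halving the width per level adds depth at a much smaller parameter cost, which is the trade-off described in the abstract.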


Related research

Wide Residual Networks (05/23/2016): Deep residual networks were shown to be able to scale up to thousands of...
Densely Connected Convolutional Networks (08/25/2016): Recent work has shown that convolutional networks can be substantially d...
Width is Less Important than Depth in ReLU Neural Networks (02/08/2022): We solve an open question from Lu et al. (2017), by showing that any tar...
ProgressiveSpinalNet architecture for FC layers (03/21/2021): In deep learning models the FC (fully connected) layer has biggest import...
SplitNet: Divide and Co-training (11/30/2020): The width of a neural network matters since increasing the width will ne...
Training Thinner and Deeper Neural Networks: Jumpstart Regularization (01/30/2022): Neural networks are more expressive when they have multiple layers. In t...
Representation mitosis in wide neural networks (06/07/2021): Deep neural networks (DNNs) defy the classical bias-variance trade-off: ...
