
Building Sparse Deep Feedforward Networks using Tree Receptive Fields

03/14/2018
by Xiaopeng Li, et al.

Sparse connectivity is an important factor behind the success of convolutional and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the level below. We use the Chow-Liu algorithm to learn a tree-structured probabilistic model for the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce a new unit with a receptive field over each such subset. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
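The layer-construction step described in the abstract lends itself to a short sketch. The Python code below is not the authors' implementation; function names such as chow_liu_tree and tree_receptive_fields are illustrative, and grouping each unit with its tree neighbours is one plausible reading of "subsets of units that are strongly correlated". It estimates pairwise mutual information between units, builds the Chow-Liu tree as a maximum spanning tree over those scores, and reads a receptive field for each new unit off the tree.

```python
# A minimal sketch of Chow-Liu-based receptive-field construction,
# under the assumptions stated above (not the paper's code).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def pairwise_mutual_information(X, bins=8):
    """Histogram estimate of mutual information between all column pairs."""
    n, d = X.shape
    # Discretize each unit's activations so counting estimates apply.
    disc = np.stack(
        [np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins)[1:-1])
         for j in range(d)], axis=1)
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            pxy = np.histogram2d(disc[:, i], disc[:, j], bins)[0] / n
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            mi[i, j] = mi[j, i] = np.sum(
                pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    return mi


def chow_liu_tree(X):
    """Chow-Liu structure: maximum spanning tree over mutual information."""
    mi = pairwise_mutual_information(X)
    w = mi.max() + 1.0 - mi   # flip scores so the max-MI tree becomes the MST
    np.fill_diagonal(w, 0.0)  # zero entries are treated as absent edges
    mst = minimum_spanning_tree(w).toarray()
    adj = mst != 0
    return adj | adj.T        # symmetric adjacency matrix of the tree


def tree_receptive_fields(adj):
    """Receptive field of each new unit: a unit plus its tree neighbours."""
    d = adj.shape[0]
    return [np.flatnonzero(adj[i] | (np.arange(d) == i)) for i in range(d)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    X[:, 1] += 2 * X[:, 0]    # make units 0 and 1 strongly correlated
    fields = tree_receptive_fields(chow_liu_tree(X))
    print(fields)             # the field of unit 1 should include unit 0
```

Repeating this step on the activations of the newly introduced units would stack further sparse layers, as the abstract describes.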


Related research

03/16/2018 · Learning Sparse Deep Feedforward Networks via Tree Skeleton Expansion
Despite the popularity of deep learning, structure learning for deep mod...

01/14/2020 · Neural Arithmetic Units
Neural networks can approximate complex functions, but they struggle to ...

11/07/2013 · Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
In this paper we propose and investigate a novel nonlinear unit, called ...

05/09/2020 · GPU Acceleration of Sparse Neural Networks
In this paper, we use graphics processing units (GPU) to accelerate spars...

12/10/2019 · Removable and/or Repeated Units Emerge in Overparametrized Deep Neural Networks
Deep neural networks (DNNs) perform well on a variety of tasks despite t...

04/30/2019 · Minimal model of permutation symmetry in unsupervised learning
Permutation of any two hidden units yields invariant properties in typic...