
Building Sparse Deep Feedforward Networks using Tree Receptive Fields

by   Xiaopeng Li, et al.

Sparse connectivity is an important factor behind the success of convolutional neural networks and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the level below. We use the Chow-Liu algorithm to learn a tree-structured probabilistic model over the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce a new unit whose receptive field covers such a subset. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
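The core step of the abstract is the Chow-Liu algorithm: estimate the pairwise mutual information between units, then take the maximum-weight spanning tree over those weights. The following is a minimal sketch of that step for binarized unit activations; all function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def pairwise_mutual_information(X):
    """Estimate pairwise mutual information between binary columns of X
    (shape: n_samples x n_units) from empirical joint frequencies."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            m = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    p_ab = np.mean((X[:, i] == a) & (X[:, j] == b))
                    p_a = np.mean(X[:, i] == a)
                    p_b = np.mean(X[:, j] == b)
                    if p_ab > 0:
                        m += p_ab * np.log(p_ab / (p_a * p_b))
            mi[i, j] = mi[j, i] = m
    return mi

def chow_liu_tree(X):
    """Return the edges of the Chow-Liu tree: the maximum-weight spanning
    tree under pairwise mutual information (Kruskal with union-find)."""
    d = X.shape[1]
    mi = pairwise_mutual_information(X)
    edges = sorted(((mi[i, j], i, j)
                    for i in range(d) for j in range(i + 1, d)),
                   reverse=True)
    parent = list(range(d))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # adding (i, j) does not create a cycle
            parent[ri] = rj
            tree.append((i, j))
        if len(tree) == d - 1:  # spanning tree is complete
            break
    return tree
```

Subtrees of the resulting tree group strongly correlated units; in a TRF-net each such subtree would define the receptive field of one new hidden unit at the next layer.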



