Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups

05/20/2016 ∙ by Yani Ioannou, et al. ∙ 0

We propose a new method for creating computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. This allows a significant reduction in computational cost and number of parameters compared to state-of-the-art deep CNNs, without compromising accuracy, by exploiting the sparsity of inter-layer filter dependencies. We validate our approach by using it to train more efficient variants of state-of-the-art CNN architectures, evaluated on the CIFAR10 and ILSVRC datasets. Our results show similar or higher accuracy than the baseline architectures with much less computation, as measured by CPU and GPU timings. For example, for ResNet 50, our model has 40% fewer parameters, 45% fewer floating point operations, and is 31% (12%) faster on a CPU (GPU). For the deeper ResNet 200, our model has 25% fewer floating point operations and 44% fewer parameters, while maintaining state-of-the-art accuracy. For GoogLeNet, our model has 7% fewer parameters and is 21% (16%) faster on a CPU (GPU).
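The savings come from replacing dense convolutions with filter groups, so each filter connects to only a subset of the previous layer's channels. The sketch below is a hypothetical illustration (not the authors' code): it counts the weights of a standard 3x3 convolution against a grouped 3x3 convolution followed by a dense 1x1 convolution that mixes the groups, a structure in the spirit of the paper's root modules. The channel counts and group number are arbitrary example values.

```python
# Hypothetical sketch: parameter counts for a dense convolution vs. a
# grouped convolution followed by a 1x1 "mixing" convolution, illustrating
# why hierarchical filter groups shrink CNNs. Not the authors' implementation.

def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution with `groups` filter groups (no bias).
    Each output filter sees only c_in // groups input channels."""
    assert c_in % groups == 0, "channels must divide evenly into groups"
    return c_out * (c_in // groups) * k * k

c_in, c_out, k = 256, 256, 3  # example layer sizes, chosen for illustration

standard = conv_params(c_in, c_out, k)  # dense 3x3 conv: 589,824 weights
# Root-module-style block: grouped 3x3 conv, then a dense 1x1 conv.
grouped = conv_params(c_in, c_out, k, groups=8) + conv_params(c_out, c_out, 1)

print(standard, grouped)  # the grouped variant uses far fewer parameters
```

With 8 groups, the grouped block here needs 139,264 weights versus 589,824 for the dense layer, a reduction of roughly 76%, while the 1x1 convolution restores cross-group information flow.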


