Speeding up Convolutional Neural Networks with Low Rank Expansions

05/15/2014
by Max Jaderberg, et al.

The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and a 4.5x speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.
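The core idea behind the rank-1 spatial filters is that a k x k convolution whose kernel is an outer product of two vectors can be applied as a k x 1 convolution followed by a 1 x k convolution, cutting the per-pixel cost from k^2 to 2k multiplies. The sketch below illustrates this with a single-channel filter and an SVD-based rank-1 factorisation; it is a minimal toy illustration of separability, not the paper's actual scheme, which learns a shared low-rank basis across channels and filters (all function names here are invented for the example).

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Naive 'valid' 2D cross-correlation, written as loops for clarity.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def separable_factors(kernel):
    # Rank-1 approximation via SVD: kernel ~ outer(col, row).
    # For an exactly rank-1 kernel this factorisation is exact.
    u, s, vt = np.linalg.svd(kernel)
    col = u[:, 0] * np.sqrt(s[0])
    row = vt[0, :] * np.sqrt(s[0])
    return col, row

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))

# A Sobel kernel is exactly rank 1: outer([1, 2, 1], [-1, 0, 1]).
kernel = np.outer([1.0, 2.0, 1.0], [-1.0, 0.0, 1.0])
col, row = separable_factors(kernel)

full = conv2d_valid(image, kernel)                       # one 3x3 conv
sep = conv2d_valid(conv2d_valid(image, col[:, None]),    # 3x1 conv ...
                   row[None, :])                         # ... then 1x3 conv
print(np.allclose(full, sep))  # the two pipelines agree
```

For kernels that are not exactly rank 1, truncating the SVD to its leading term gives the best rank-1 approximation in the Frobenius norm, which is the sense in which a separable filter bank can approximate a full one.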

Related research:

- 06/11/2020: Convolutional neural networks compression with low rank and sparse tensor decompositions
- 04/02/2014: Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
- 10/31/2018: Low-Rank Embedding of Kernels in Convolutional Neural Networks under Random Shuffling
- 03/28/2017: Coordinating Filters for Faster Deep Neural Networks
- 12/17/2014: Flattened Convolutional Neural Networks for Feedforward Acceleration
- 02/07/2020: Revisiting Spatial Invariance with Low-Rank Local Connectivity
- 05/30/2023: Rank-adaptive spectral pruning of convolutional layers during training
