Re-Weighted Learning for Sparsifying Deep Neural Networks

02/05/2018
by   Igor Fedorov, et al.

This paper addresses the topic of sparsifying deep neural networks (DNNs). While DNNs are powerful models that achieve state-of-the-art performance on a large number of tasks, their large number of parameters poses serious storage and computational challenges. To combat these difficulties, a growing line of work focuses on pruning network weights without sacrificing performance. We propose a general affine scaling transformation (AST) algorithm to sparsify DNNs. Our approach follows in the footsteps of popular sparse recovery techniques, which have yet to be explored in the context of DNNs. We describe a principled framework for transforming densely connected DNNs into sparsely connected ones without sacrificing network performance. Unlike existing methods, our approach learns sparse connections at every layer simultaneously and achieves comparable pruning results on the architecture tested.
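The abstract does not spell out the paper's AST update rules, but the general idea it alludes to, re-weighted sparsity-promoting regularization applied to all layers at once, can be sketched roughly as below. This is a minimal illustration in the spirit of iterative re-weighted l1 minimization, not the authors' algorithm; the network, hyperparameters, and `train_round` helper are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical toy network; the paper's experiments use a different architecture.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

lam, eps = 1e-4, 1e-8  # assumed regularization strength and stability constant
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Per-weight penalties, refreshed between rounds from the current magnitudes,
# so weights that stay small are penalized more heavily and driven to zero.
reweights = {n: torch.ones_like(p) for n, p in model.named_parameters()}

def train_round(data_loader, num_epochs=1):
    for _ in range(num_epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # Weighted l1 penalty applied to every layer simultaneously.
            for n, p in model.named_parameters():
                loss = loss + lam * (reweights[n] * p.abs()).sum()
            loss.backward()
            opt.step()
    # Update the re-weights for the next round.
    with torch.no_grad():
        for n, p in model.named_parameters():
            reweights[n] = 1.0 / (p.abs() + eps)
```

After several such rounds, weights whose magnitudes remain near zero can be hard-pruned (masked out), yielding a sparsely connected network from the original dense one.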
