Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach

08/31/2020
by Malena Reiners, et al.

Overparameterization and overfitting are common concerns when designing and training deep neural networks, and they are often counteracted by pruning and regularization strategies. However, these strategies remain secondary to most learning approaches and involve time- and computation-intensive procedures. We suggest a multiobjective perspective on the training of neural networks by treating the prediction accuracy and the network complexity as two individual objective functions in a biobjective optimization problem. As a showcase example, we use the cross entropy as a measure of the prediction accuracy while adopting an l1-penalty function to assess the total cost (or complexity) of the network parameters. The latter is combined with an intra-training pruning approach that reinforces complexity reduction and requires only marginal extra computational cost. From the perspective of multiobjective optimization, this is a truly large-scale optimization problem. We compare two different optimization paradigms: On the one hand, we adopt a scalarization-based approach that transforms the biobjective problem into a series of weighted-sum scalarizations. On the other hand, we implement stochastic multi-gradient descent algorithms that generate a single Pareto optimal solution without requiring or using preference information. In the first case, favorable knee solutions are identified by repeated training runs with adaptively selected scalarization parameters. Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
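To illustrate the scalarization-based paradigm, the following is a minimal PyTorch sketch, not the authors' implementation: it combines the cross-entropy loss with an l1 penalty on the weights via a weighted sum, and zeroes near-zero weights after each update as a simple form of intra-training magnitude pruning. The toy model, the scalarization weight lam, and the threshold prune_threshold are illustrative assumptions rather than values from the paper.

import torch
import torch.nn as nn

# Toy classifier standing in for the convolutional networks studied in the paper.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

lam = 1e-3              # weighted-sum scalarization parameter (varied across runs; illustrative value)
prune_threshold = 1e-3  # magnitude below which weights are zeroed (illustrative value)

def train_step(x, y):
    optimizer.zero_grad()
    ce = criterion(model(x), y)                           # objective 1: prediction accuracy
    l1 = sum(p.abs().sum() for p in model.parameters())   # objective 2: network complexity
    loss = (1.0 - lam) * ce + lam * l1                    # weighted-sum scalarization
    loss.backward()
    optimizer.step()
    with torch.no_grad():                                  # intra-training pruning:
        for p in model.parameters():                       # zero near-zero weights after the update
            p.masked_fill_(p.abs() < prune_threshold, 0.0)
    return ce.item(), l1.item()

# One step on a random mini-batch (28x28 grayscale images, 10 classes).
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))

Repeating such training runs with different values of lam and selecting a knee of the resulting accuracy/sparsity trade-off corresponds to the scalarization-based approach described above; the stochastic multi-gradient paradigm instead combines the gradients of the two objectives into a common descent direction and produces a single Pareto optimal solution in one run.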

