Disentangling Redundancy for Multi-Task Pruning

05/23/2019
by Xiaoxi He, et al.

Can prior network pruning strategies eliminate redundancy in multiple correlated pre-trained deep neural networks? The answer appears to be yes if the networks are first combined and then pruned. However, we argue that an arbitrarily combined network may lead to sub-optimal pruning performance, because its intra- and inter-redundancy may not be minimised at the same time while retaining the inference accuracy in each task. In this paper, we define and analyse the redundancy in multi-task networks from an information-theoretic perspective, and identify challenges that prevent existing pruning methods from functioning effectively for multi-task pruning. We propose Redundancy-Disentangled Networks (RDNets), which decouple intra- and inter-redundancy such that all redundancy can be suppressed via previous network pruning schemes. A pruned RDNet also ensures minimal computation in any subset of tasks, a desirable feature for selective task execution. Moreover, a heuristic is devised to construct an RDNet from multiple pre-trained networks. Experiments on CelebA show that the same pruning method applied to an RDNet achieves at least 1.8x lower memory usage and 1.4x lower computation cost than when applied to a multi-task network constructed by the state-of-the-art network merging scheme.
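To make the combine-then-prune baseline questioned above concrete, the following is a minimal, hypothetical PyTorch sketch: two task heads share one backbone, and a standard single-network criterion (L1 magnitude pruning) is applied uniformly to every layer. All module and variable names are illustrative; this is not the paper's RDNet construction, which disentangles intra- and inter-redundancy before pruning is applied.

```python
# Hypothetical combine-then-prune baseline (not the paper's RDNet method).
# Two tasks share a backbone; magnitude (L1) pruning is applied uniformly,
# so intra- and inter-task redundancy cannot be targeted separately.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class MergedTwoTaskNet(nn.Module):
    def __init__(self, in_dim=64, hidden=128, classes_a=10, classes_b=5):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, classes_a)  # task A classifier
        self.head_b = nn.Linear(hidden, classes_b)  # task B classifier

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)

net = MergedTwoTaskNet()

# Remove 50% of the smallest-magnitude weights in every Linear layer.
for module in net.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# The pruning mask zeroes half of each weight tensor, regardless of
# whether the layer serves one task or both.
with torch.no_grad():
    out_a, out_b = net(torch.randn(8, 64))
for name, module in net.named_modules():
    if isinstance(module, nn.Linear):
        sparsity = (module.weight == 0).float().mean().item()
        print(f"{name}: {sparsity:.0%} of weights pruned")
```

Per-task accuracy would then be checked separately; the abstract's argument is that such a uniform criterion on an arbitrarily merged network cannot minimise intra- and inter-redundancy at the same time while preserving every task's accuracy.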

Related research

Multi-Task Zipping via Layer-wise Neuron Sharing (05/24/2018)
Future mobile devices are anticipated to perceive, understand and react ...

Multi-Task Pruning for Semantic Segmentation Networks (07/16/2020)
This paper focuses on channel pruning for semantic segmentation networks...

Structured Pruning for Multi-Task Deep Neural Networks (04/13/2023)
Although multi-task deep neural network (DNN) models have computation an...

CGaP: Continuous Growth and Pruning for Efficient Deep Learning (05/27/2019)
Today a canonical approach to reduce the computation cost of Deep Neural...

MIME: Adapting a Single Neural Network for Multi-task Inference with Memory-efficient Dynamic Pruning (04/11/2022)
Recent years have seen a paradigm shift towards multi-task learning. Thi...

An EMO Joint Pruning with Multiple Sub-networks: Fast and Effect (03/28/2023)
The network pruning algorithm based on evolutionary multi-objective (EMO...

AntiDote: Attention-based Dynamic Optimization for Neural Network Runtime Efficiency (08/14/2020)
Convolutional Neural Networks (CNNs) achieved great cognitive performanc...
