Auto-Compressing Subset Pruning for Semantic Image Segmentation

01/26/2022
by   Konstantin Ditschuneit, et al.

State-of-the-art semantic segmentation models are characterized by high parameter counts and slow inference times, making them unsuitable for deployment in resource-constrained environments. To address this challenge, we propose Auto-Compressing Subset Pruning (ACoSP) as a new online compression method. The core of ACoSP consists of learning a channel-selection mechanism for the individual channels of each convolution in the segmentation model, based on an effective temperature annealing schedule. We show a crucial interplay between providing a high-capacity model at the beginning of training and the compression pressure forcing the model to compress concepts into the retained channels. We apply ACoSP to SegNet and PSPNet architectures and show its success when trained on the CamVid, Cityscapes, Pascal VOC2012, and Ade20k datasets. The results are competitive with existing baselines for compression of segmentation models at low compression ratios and outperform them significantly at high compression ratios, yielding acceptable results even when removing more than 93% of the parameters. In addition, ACoSP is conceptually simple, easy to implement, and can readily be generalized to other data modalities, tasks, and architectures. Our code is available at <https://github.com/merantix/acosp>.
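The channel-selection idea can be illustrated with a minimal sketch: each channel gets a learnable logit, a sigmoid gate with temperature scaling decides how strongly the channel contributes, and annealing the temperature toward zero pushes the gates to binary on/off decisions, effectively pruning the "off" channels. This is a hypothetical simplification for intuition, not the authors' exact formulation; the function names (`channel_gates`, `anneal`) and schedule parameters are illustrative assumptions.

```python
import math


def channel_gates(logits, temperature):
    """Soft channel-selection gates: sigmoid(logit / temperature).

    High temperature -> gates cluster near 0.5, so all channels contribute
    (high-capacity model early in training). Low temperature -> gates
    saturate toward 0 or 1, selecting a hard subset of channels.
    Hypothetical sketch, not the paper's exact gating function.
    """
    return [1.0 / (1.0 + math.exp(-l / temperature)) for l in logits]


def anneal(t_start, t_end, step, total_steps):
    """Exponential temperature schedule decaying from t_start to t_end."""
    frac = step / total_steps
    return t_start * (t_end / t_start) ** frac


# Example: one favored channel (positive logit), one disfavored.
logits = [2.0, -2.0]
early = channel_gates(logits, anneal(10.0, 0.1, 0, 100))    # soft gates
late = channel_gates(logits, anneal(10.0, 0.1, 100, 100))   # near-binary
```

Early in training both gates stay close to 0.5, so gradients still flow through every channel; by the end of the schedule the gates are effectively binary, and channels gated to zero can be removed from the network entirely.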


