End-to-End Supermask Pruning: Learning to Prune Image Captioning Models

10/07/2021
by Jia Huei Tan et al.

With the advancement of deep models, research on image captioning has achieved remarkable gains in raw performance over the last decade, along with increasing model complexity and computational cost. However, work on compressing deep networks for the image captioning task has surprisingly received little to no attention. For the first time in image captioning research, we provide an extensive comparison of various unstructured weight pruning methods on three popular image captioning architectures, namely Soft-Attention, Up-Down and Object Relation Transformer. Following this, we propose a novel end-to-end weight pruning method that performs gradual sparsification based on weight sensitivity to the training loss. The pruning schemes are then extended with encoder pruning, where we show that conducting both decoder pruning and training simultaneously prior to encoder pruning provides good overall performance. Empirically, we show that an 80% to 95% sparse network (up to 75% reduction in model size) can either match or outperform its dense counterpart. The code and pre-trained models for Up-Down and Object Relation Transformer that are capable of achieving CIDEr scores >120 on the MS-COCO dataset, but with only 8.7 MB and 14.5 MB in model size (96% and 94% size reduction respectively against dense versions), are publicly available at https://github.com/jiahuei/sparse-image-captioning.
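To make the gradual, sensitivity-driven masking concrete, below is a minimal PyTorch sketch of supermask-style pruning. It is an illustrative approximation, not the authors' released implementation (see the linked repository for that): it assumes a top-k mask parameterisation with a straight-through estimator, in which each weight carries a learnable score trained jointly with the weights, and sparsity is annealed gradually. The names BinarizeTopK, MaskedLinear and sparsity_schedule are our own.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class BinarizeTopK(torch.autograd.Function):
        """Keep the top-(1 - sparsity) fraction of scores; straight-through gradient."""

        @staticmethod
        def forward(ctx, scores, sparsity):
            k = int((1.0 - sparsity) * scores.numel())
            mask = torch.zeros_like(scores)
            if k > 0:
                idx = torch.topk(scores.flatten(), k).indices
                mask.view(-1)[idx] = 1.0
            return mask

        @staticmethod
        def backward(ctx, grad_output):
            # Straight-through estimator: pass the gradient to the scores unchanged,
            # so scores learn how sensitive the training loss is to each weight.
            return grad_output, None


    class MaskedLinear(nn.Module):
        """Linear layer whose weights are gated by a learnable binary supermask."""

        def __init__(self, in_features, out_features):
            super().__init__()
            self.weight = nn.Parameter(torch.empty(out_features, in_features))
            self.bias = nn.Parameter(torch.zeros(out_features))
            nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
            # One learnable score per weight, trained jointly with the weights.
            self.scores = nn.Parameter(torch.randn_like(self.weight) * 0.01)
            self.sparsity = 0.0  # annealed toward the target during training

        def forward(self, x):
            mask = BinarizeTopK.apply(self.scores, self.sparsity)
            return F.linear(x, self.weight * mask, self.bias)


    def sparsity_schedule(step, total_steps, target=0.8):
        """Gradual (cubic) sparsification, in the spirit of Zhu & Gupta (2017)."""
        t = min(step / total_steps, 1.0)
        return target * (1.0 - (1.0 - t) ** 3)

In use, one would set layer.sparsity = sparsity_schedule(step, total_steps) before each forward pass, so that weights and mask scores are optimised jointly end-to-end; once training finishes, weights with a zero mask can be removed, yielding the sparse model.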

