InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning

03/08/2023
by Ziheng Qin, et al.

Data pruning aims to achieve performance lossless to training on the original data while reducing the overall cost. A common approach is simply to filter out samples that contribute less to training, but this introduces a gradient expectation bias between the pruned and the original data. To solve this problem, we propose InfoBatch, a novel framework that achieves lossless training acceleration through unbiased dynamic data pruning. Specifically, InfoBatch randomly prunes a portion of the less informative samples based on the loss distribution and rescales the gradients of the remaining samples to keep the gradient expectation approximately unbiased. During the last few epochs we train on the full data, which improves performance and further reduces the bias of the total update. As a plug-and-play and architecture-agnostic framework, InfoBatch consistently obtains lossless training results on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K, saving 40%, 33%, 30%, and 26% of the overall cost, respectively. We extend InfoBatch to the semantic segmentation task and also achieve lossless mIoU on ADE20K with a 20% saving in overall cost. Last but not least, since InfoBatch accelerates along the data dimension, it further speeds up large-batch training methods (e.g., LARS and LAMB) by 1.3 times without extra cost or performance drop. The code will be made public.
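The abstract outlines the core mechanism: prune well-learned (low-loss) samples at random, rescale the surviving low-loss samples so the expected gradient stays close to that of full-data training, and anneal back to the full dataset in the final epochs. Below is a minimal Python sketch of that idea; the class name, parameters (prune_ratio, anneal_from), and the running-loss bookkeeping are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

class InfoBatchStyleSampler:
    """Sketch of unbiased dynamic data pruning (hypothetical API, not the
    authors' released code). Samples whose recorded loss is below the mean
    are treated as "well learned" and pruned with probability `prune_ratio`;
    the surviving low-loss samples get their losses rescaled by
    1 / (1 - prune_ratio) so the expected gradient stays unbiased."""

    def __init__(self, dataset_size, prune_ratio=0.5, anneal_from=0.875, num_epochs=100):
        self.scores = np.full(dataset_size, np.inf)        # per-sample loss history
        self.prune_ratio = prune_ratio
        self.anneal_epoch = int(anneal_from * num_epochs)  # final epochs use the full data

    def epoch_indices(self, epoch):
        """Return (indices to train on, per-sample loss weights) for one epoch."""
        n = len(self.scores)
        if epoch >= self.anneal_epoch:                     # annealing: no pruning near the end
            return np.arange(n), np.ones(n)
        seen = np.isfinite(self.scores)
        mean = self.scores[seen].mean() if seen.any() else np.inf
        low = self.scores < mean                           # pruning candidates (low loss)
        keep = ~low | (np.random.rand(n) >= self.prune_ratio)
        weights = np.ones(n)
        weights[low] = 1.0 / (1.0 - self.prune_ratio)      # rescale kept low-loss samples
        idx = np.flatnonzero(keep)
        return idx, weights[idx]

    def record(self, indices, losses):
        self.scores[indices] = losses                      # update history after each forward pass
```

In a training loop, the per-sample losses would be computed with an unreduced criterion (e.g., reduction='none' in PyTorch), multiplied by the returned weights before averaging for the backward pass, and fed back via record so the next epoch's pruning decision reflects the current learning progress. On the first epoch nothing is pruned, since no losses have been recorded yet.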

