A Multigrid Method for Efficiently Training Video Models

12/02/2019
by Chao-Yuan Wu, et al.

Training competitive deep video models is an order of magnitude slower than training their counterpart image models. Slow training causes long research cycles, which hinders progress in video understanding research. Following standard practice for training image models, video model training assumes a fixed mini-batch shape: a specific number of clips, frames, and spatial size. However, what is the optimal shape? High resolution models perform well, but train slowly. Low resolution models train faster, but they are inaccurate. Inspired by multigrid methods in numerical optimization, we propose to use variable mini-batch shapes with different spatial-temporal resolutions that are varied according to a schedule. The different shapes arise from resampling the training data on multiple sampling grids. Training is accelerated by scaling up the mini-batch size and learning rate when shrinking the other dimensions. We empirically demonstrate a general and robust grid schedule that yields a significant out-of-the-box training speedup without a loss in accuracy for different models (I3D, non-local, SlowFast), datasets (Kinetics, Something-Something, Charades), and training settings (with and without pre-training, 128 GPUs or 1 GPU). As an illustrative example, the proposed multigrid method trains a ResNet-50 SlowFast network 4.5x faster (wall-clock time, same hardware) while also improving accuracy (+0.8% on Kinetics-400) compared to the baseline training method.
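To make the scaling rule concrete, below is a minimal, hypothetical Python sketch of a coarse-to-fine grid schedule. The function name, the grid shapes, and the base hyperparameters are illustrative assumptions, not the paper's actual long/short-cycle schedule: each coarser grid (fewer frames T, smaller spatial size S) enlarges the mini-batch and learning rate so that every iteration keeps a roughly constant compute budget.

```python
def multigrid_schedule(base_T=16, base_S=224, base_batch=8, base_lr=0.1,
                       grids=((4, 112), (8, 160), (16, 224))):
    """Yield (T, S, batch_size, lr) for one coarse-to-fine cycle.

    Hypothetical sketch: the scale factor is the per-clip cost ratio
    relative to the base shape (T * S * S), and the learning rate is
    scaled linearly with the mini-batch size.
    """
    base_cost = base_T * base_S * base_S
    for T, S in grids:
        cost = T * S * S
        scale = base_cost // cost          # how many times cheaper one clip is
        yield T, S, base_batch * scale, base_lr * scale


if __name__ == "__main__":
    # Example output: the coarsest grid (4 frames, 112x112) runs with a
    # 16x larger batch and learning rate than the full-resolution grid.
    for T, S, batch, lr in multigrid_schedule():
        print(f"T={T:>2} S={S:>3} batch={batch:>3} lr={lr:.2f}")
```

In this sketch the design choice is the standard linear learning-rate scaling rule: because the coarse grids fit more clips per mini-batch at the same memory and FLOP budget, the learning rate grows proportionally, which is what lets the schedule trade resolution for throughput without changing the effective optimization dynamics too drastically.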


