Fast, Better Training Trick -- Random Gradient

08/13/2018
by Jiakai Wei, et al.

In this paper, we present a simple method to accelerate training and improve performance, called random gradient (RG). The method can be applied to the training of any model without extra computational cost: the central idea is to multiply the loss by a random number, which randomly scales down the back-propagated gradient. We use image classification, semantic segmentation, and GANs to confirm that this method speeds up the training of computer vision models, and it produces better results on the Pascal VOC, CIFAR, and Cityscapes datasets.
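A minimal sketch of the random-gradient idea as described in the abstract, written in PyTorch: the loss is multiplied by a random scalar before back-propagation, so the gradient for that step is reduced by the same factor. The uniform(0, 1) sampling range, per-step sampling, and the helper name `random_gradient_step` are assumptions for illustration; the paper may use a different distribution or schedule.

```python
import torch

def random_gradient_step(model, criterion, optimizer, inputs, targets):
    """One training step with the random-gradient (RG) trick (sketch)."""
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    # Scale the loss by a random factor in (0, 1); because gradients are
    # linear in the loss, this randomly reduces the back-propagated gradient.
    rg_factor = torch.rand(1, device=loss.device)
    (loss * rg_factor).backward()
    optimizer.step()
    # Return the unscaled loss for logging.
    return loss.item()
```

Because the scaling happens only on the loss, the step adds no extra forward or backward computation, which matches the "no extra calculation cost" claim.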
