Network Augmentation for Tiny Deep Learning

10/17/2021
by Han Cai, et al.

We introduce Network Augmentation (NetAug), a new training method for improving the performance of tiny neural networks. Existing regularization techniques (e.g., data augmentation, dropout) have shown much success on large neural networks (e.g., ResNet50) by adding noise to overcome over-fitting. However, we found these techniques hurt the performance of tiny neural networks. We argue that training tiny models is different from training large models: rather than augmenting the data, we should augment the model, since tiny models tend to suffer from under-fitting rather than over-fitting due to their limited capacity. To alleviate this issue, NetAug augments the network (reverse dropout) instead of inserting noise into the dataset or the network. It puts the tiny model into larger models and encourages it to work as a sub-model of the larger models to get extra supervision, in addition to functioning as an independent model. At test time, only the tiny model is used for inference, incurring zero inference overhead. We demonstrate the effectiveness of NetAug on image classification and object detection. NetAug consistently improves the performance of tiny models, achieving up to a 2.1% improvement on ImageNet, and a 4.3% improvement with the same computational cost.
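The core training recipe described here, supervising the tiny model both as a standalone network and as a weight-shared slice of a larger network, can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the two-layer MLP, the single augmented width, and the auxiliary-loss weight `alpha` are all assumptions made for the sketch.

```python
# A minimal sketch of the NetAug training idea (assumed details, not the
# paper's code): parameters are allocated at the augmented width, and the
# tiny model always reads the leading slice, so the two share weights.
import torch
import torch.nn.functional as F

class NetAugMLP(torch.nn.Module):
    def __init__(self, in_dim=784, hid_tiny=16, hid_aug=64, n_classes=10):
        super().__init__()
        self.hid_tiny = hid_tiny
        self.w1 = torch.nn.Parameter(torch.randn(hid_aug, in_dim) * 0.01)
        self.b1 = torch.nn.Parameter(torch.zeros(hid_aug))
        self.w2 = torch.nn.Parameter(torch.randn(n_classes, hid_aug) * 0.01)
        self.b2 = torch.nn.Parameter(torch.zeros(n_classes))

    def forward(self, x, width=None):
        # Default width runs the tiny model; a larger width runs the
        # augmented model that contains the tiny model as a sub-model.
        h = width or self.hid_tiny
        z = F.relu(F.linear(x, self.w1[:h], self.b1[:h]))
        return F.linear(z, self.w2[:, :h], self.b2)

model = NetAugMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
alpha = 0.5  # weight of the auxiliary loss; an assumed value

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss_tiny = F.cross_entropy(model(x), y)           # tiny model on its own
loss_aug = F.cross_entropy(model(x, width=64), y)  # tiny model as sub-model
(loss_tiny + alpha * loss_aug).backward()          # extra supervision signal
opt.step()
```

At test time only the default call `model(x)` is executed, so the augmented width never runs during inference; this is what makes the extra supervision free at deployment.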

