A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks

05/29/2022
by   Binyan Hu, et al.

Deep neural networks (DNNs) often rely on massive labelled data for training, which is inaccessible in many applications. Data augmentation (DA) tackles data scarcity by creating new labelled data from the available data. Different DA methods have different mechanisms, so the labelled data they generate may improve a DNN's generalisation to different degrees. Combining multiple DA methods (multi-DA) for DNN training provides a way to boost generalisation. Among existing multi-DA based DNN training methods, those relying on knowledge distillation (KD) have received great attention. They leverage knowledge transfer to utilise the labelled data sets created by multiple DA methods instead of directly combining them for training DNNs. However, existing KD-based methods can only utilise certain types of DA methods and cannot exploit the advantages of arbitrary DA methods. We propose a general multi-DA based DNN training framework that can use arbitrary DA methods. To train a DNN, our framework replicates a portion of the latter part of the DNN into multiple copies, yielding multiple DNNs that share blocks in their former parts and have independent blocks in their latter parts. Each of these DNNs is associated with a unique DA method and a newly devised loss that enables comprehensive learning from the data generated by all DA methods and from the outputs of all DNNs in an online and adaptive way. The overall loss, i.e., the sum of each DNN's loss, is used for training. Eventually, the DNN with the best validation performance is chosen for inference. We implement the proposed framework with three distinct DA methods and apply it to training representative DNNs. Experiments on popular image classification benchmarks demonstrate the superiority of our method over several existing single-DA and multi-DA based training methods.
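The shared-former/replicated-latter structure described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the three DA functions, the layer sizes, and the plain sum of per-branch cross-entropy losses are hypothetical placeholders, and the paper's actual loss additionally lets each branch learn from the other branches' outputs in an online, adaptive way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three toy DA methods (hypothetical stand-ins for real augmentations
# such as flipping, noise injection, or cutout).
def da_flip(x):  return x[:, ::-1]
def da_noise(x): return x + 0.1 * rng.standard_normal(x.shape)
def da_scale(x): return 1.1 * x

augmentations = [da_flip, da_noise, da_scale]

D, H, C = 8, 16, 4                               # input dim, hidden dim, classes
W_shared = 0.1 * rng.standard_normal((D, H))     # shared "former" block
W_heads = [0.1 * rng.standard_normal((H, C))     # one replicated "latter" block per DA
           for _ in augmentations]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def total_loss(x, y):
    """Sum of per-branch losses: each branch sees its own augmented batch
    but reuses the shared former block."""
    losses = []
    for da, W_head in zip(augmentations, W_heads):
        h = np.maximum(da(x) @ W_shared, 0.0)    # shared trunk (ReLU)
        p = softmax(h @ W_head)                  # branch-specific head
        losses.append(cross_entropy(p, y))
    return sum(losses)

x = rng.standard_normal((32, D))
y = rng.integers(0, C, size=32)
print(total_loss(x, y))
```

At inference time only one trunk-plus-head pair (the one with the best validation performance) is kept, so the replicated branches add training cost but no deployment cost.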

Related research

03/17/2022  When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation
Data Augmentation (DA) is known to improve the generalizability of deep ...

07/14/2022  Universal Adaptive Data Augmentation
Existing automatic data augmentation (DA) methods either ignore updating...

12/05/2020  Knowledge Distillation Thrives on Data Augmentation
Knowledge distillation (KD) is a general deep neural network training fr...

07/02/2020  A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization
Conventional DNN training paradigms typically rely on one training set a...

03/12/2017  A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation
Recently, DNN model compression based on network architecture design, e....

03/20/2021  Patch AutoAugment
Data augmentation (DA) plays a critical role in training deep neural net...

02/23/2023  Practical Knowledge Distillation: Using DNNs to Beat DNNs
For tabular data sets, we explore data and model distillation, as well a...
