A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization

07/02/2020
by Boyu Zhang, et al.

Conventional DNN training paradigms typically rely on one training set and one validation set, obtained by partitioning an annotated dataset used for training, namely the gross training set, in a certain way. The training set is used to train the model, while the validation set is used to estimate the generalization performance of the model as training proceeds, so as to avoid over-fitting. There are two major issues with this paradigm. First, the validation set can hardly guarantee an unbiased estimate of generalization performance due to potential mismatch with the test data. Second, training a DNN corresponds to solving a complex optimization problem, which is prone to getting trapped in inferior local optima and thus leads to undesired training results. To address these issues, we propose a novel DNN training framework. It generates multiple pairs of training and validation sets from the gross training set via random splitting, trains a DNN model of a pre-specified structure on each pair while allowing useful knowledge (e.g., promising network parameters) obtained from one model training process to be transferred to the other training processes via multi-task optimization, and outputs the model with the overall best performance across the validation sets of all pairs. The knowledge transfer mechanism featured in this new framework can not only enhance training effectiveness by helping a model training process escape from local optima, but also improve generalization performance via the implicit regularization that one training process imposes on the others. We implement the proposed framework, parallelize the implementation on a GPU cluster, and apply it to train several widely used DNN models. Experimental results demonstrate the superiority of the proposed framework over the conventional training paradigm.
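To make the workflow concrete, the following is a minimal sketch of the multi-pair training loop described above, using synthetic data and a toy MLP. The knowledge-transfer step shown here (the model with the lowest validation loss periodically overwrites the parameters of the worst-performing one) is only an illustrative stand-in; the paper's actual multi-task optimization mechanism is not detailed in this abstract, and model names, sizes, and schedules below are assumptions for illustration.

# Sketch of the multi-pair training framework (illustrative, not the paper's exact algorithm).
import copy
import torch
import torch.nn as nn

def make_model():
    # Toy MLP of a pre-specified structure, shared by all training processes.
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Synthetic "gross training set" for illustration.
X = torch.randn(1000, 20)
y = (X[:, 0] > 0).long()

K = 4                # number of training/validation pairs
val_fraction = 0.2
pairs, models, optims = [], [], []
for _ in range(K):
    perm = torch.randperm(len(X))          # random split of the gross training set
    n_val = int(val_fraction * len(X))
    pairs.append((perm[n_val:], perm[:n_val]))   # (train indices, validation indices)
    m = make_model()
    models.append(m)
    optims.append(torch.optim.Adam(m.parameters(), lr=1e-3))

loss_fn = nn.CrossEntropyLoss()

def val_loss(model, val_idx):
    model.eval()
    with torch.no_grad():
        return loss_fn(model(X[val_idx]), y[val_idx]).item()

for epoch in range(50):
    # Train each model on its own training set (full-batch for brevity).
    for m, opt, (tr_idx, _) in zip(models, optims, pairs):
        m.train()
        opt.zero_grad()
        loss_fn(m(X[tr_idx]), y[tr_idx]).backward()
        opt.step()

    # Illustrative knowledge transfer every 10 epochs: the currently best model
    # shares its parameters with the worst one, which may help the latter
    # escape a poor local optimum.
    if epoch % 10 == 9:
        losses = [val_loss(m, v) for m, (_, v) in zip(models, pairs)]
        best, worst = losses.index(min(losses)), losses.index(max(losses))
        if best != worst:
            models[worst].load_state_dict(copy.deepcopy(models[best].state_dict()))

# Output the model with the overall best performance across all validation sets.
avg = [sum(val_loss(m, v) for _, v in pairs) / K for m in models]
print("selected model index:", avg.index(min(avg)))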
