DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks

02/28/2023
by Samyak Jain, et al.

Generalization of neural networks is crucial for deploying them safely in the real world. Common training strategies to improve generalization involve the use of data augmentations, ensembling, and model averaging. In this work, we first establish a surprisingly simple but strong baseline for generalization that uses diverse augmentations within a training minibatch, and show that this can learn a more balanced distribution of features. We then propose the Diversify-Aggregate-Repeat Training (DART) strategy, which first trains Diverse models using different augmentations (or domains) to explore the loss basin, and then Aggregates their weights to combine their expertise and obtain improved generalization. We find that Repeating the aggregation step throughout training improves the overall optimization trajectory and also ensures that the individual models maintain a sufficiently low loss barrier between them, so that combining them yields improved generalization. We shed light on our approach by casting it in the framework proposed by Shen et al. and theoretically show that it indeed generalizes better. In addition to improvements in in-domain generalization, we demonstrate state-of-the-art performance on the Domain Generalization benchmarks of the popular DomainBed framework. Our method is generic and can easily be integrated with several base training algorithms to achieve performance gains.
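Since the abstract describes the algorithm only at a high level, the following is a minimal sketch of how the Diversify-Aggregate-Repeat loop could look in PyTorch. All names (train_phase, aggregate, dart) are hypothetical illustrations, not the authors' released code, and the exact interleaving schedule, optimizer settings, and treatment of batch-norm statistics are assumptions rather than the paper's implementation.

```python
# Minimal sketch of Diversify-Aggregate-Repeat training (assumptions noted above).
import copy
import torch

def train_phase(model, augment, loader, epochs, lr=0.1):
    """One standard SGD training phase on batches transformed by `augment`."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(augment(x)), y)
            loss.backward()
            opt.step()

def aggregate(models):
    """Average the weights (and buffers) of the diverse branches into one model."""
    merged = copy.deepcopy(models[0])
    state = merged.state_dict()
    for key in state:
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        state[key] = stacked.mean(dim=0)
    merged.load_state_dict(state)
    return merged

def dart(model, augmentations, loader, epochs_per_round, num_rounds):
    """Diversify -> Aggregate -> Repeat, starting from a common initialization."""
    for _ in range(num_rounds):
        # Diversify: branch one copy per augmentation (or domain) and train
        # each branch independently so they explore the same loss basin.
        branches = [copy.deepcopy(model) for _ in augmentations]
        for branch, augment in zip(branches, augmentations):
            train_phase(branch, augment, loader, epochs_per_round)
        # Aggregate: average the branch weights to combine their expertise.
        model = aggregate(branches)
    return model
```

Repeating the aggregation frequently is what keeps the branches in the same loss basin: since every round restarts all branches from the freshly averaged weights, their solutions stay close enough for weight averaging to remain meaningful, in contrast to averaging fully independent runs.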

