DHA: End-to-End Joint Optimization of Data Augmentation Policy, Hyper-parameter and Architecture

09/13/2021 · by Kaichen Zhou, et al.

Automated machine learning (AutoML) usually involves several crucial components, such as Data Augmentation (DA) policy, Hyper-Parameter Optimization (HPO), and Neural Architecture Search (NAS). Although many strategies have been developed for automating these components in isolation, jointly optimizing them remains challenging due to the greatly enlarged search space and the differing input types of each component. Meanwhile, running these components in sequence often requires careful coordination by human experts and may lead to sub-optimal results. In parallel, the common NAS practice of searching for the optimal architecture first and then retraining it before deployment often suffers from low performance correlation between the search and retraining stages. An end-to-end solution that integrates the AutoML components and returns a ready-to-use model at the end of the search is therefore desirable. In view of this, we propose DHA, which achieves joint optimization of Data augmentation policy, Hyper-parameter and Architecture. Specifically, end-to-end NAS is achieved in a differentiable manner by optimizing a compressed lower-dimensional feature space, while the DA policy and hyper-parameters are updated dynamically at the same time. Experiments show that DHA achieves state-of-the-art (SOTA) results on various datasets, notably 77.4% accuracy on ImageNet with a cell-based search space, 0.5% higher than the current SOTA. To the best of our knowledge, we are the first to efficiently and jointly optimize DA policy, NAS, and HPO in an end-to-end manner without retraining.
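The joint, differentiable scheme the abstract describes can be illustrated on a toy problem: model weights descend the (augmented) training loss, while architecture logits, a DA magnitude, and a hyper-parameter descend the validation loss in the same loop. Everything below is an illustrative sketch, not the paper's implementation: the two candidate ops, the Gaussian-noise "augmentation policy", the one-step look-ahead for the hyper-gradient, and the finite-difference gradients are all assumptions chosen to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = 2x, so a plain linear op should win the search.
x_tr = rng.normal(size=128); y_tr = 2.0 * x_tr
x_va = rng.normal(size=128); y_va = 2.0 * x_va

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, w, alpha):
    # DARTS-style relaxation: two candidate ops (linear, tanh) mixed by softmax(alpha).
    p = softmax(alpha)
    return p[0] * w[0] * x + p[1] * w[1] * np.tanh(x)

def mse(x, y, w, alpha):
    return float(np.mean((forward(x, w, alpha) - y) ** 2))

def fd_grad(f, v, eps=1e-4):
    # Central finite differences keep the sketch free of autodiff libraries.
    v = np.asarray(v, dtype=float)
    g = np.zeros_like(v)
    for i in range(v.size):
        d = np.zeros_like(v); d[i] = eps
        g[i] = (f(v + d) - f(v - d)) / (2 * eps)
    return g

# All four variable groups are optimized jointly in one loop.
w = np.array([0.1, 0.1])   # network weights
alpha = np.zeros(2)        # architecture logits
aug_std = 0.5              # DA policy: Gaussian-noise magnitude
lr = 0.05                  # hyper-parameter: learning rate

for step in range(300):
    noise = rng.normal(size=x_tr.shape)  # fixed per step so gradients are well-defined

    # 1) Weight step on the augmented training loss.
    gw = fd_grad(lambda wv: mse(x_tr + aug_std * noise, y_tr, wv, alpha), w)
    w_next = w - lr * gw

    # 2) Architecture step on the validation loss.
    alpha = alpha - 0.1 * fd_grad(lambda av: mse(x_va, y_va, w_next, av), alpha)

    # 3) DA-policy and hyper-parameter step via a one-step look-ahead:
    #    how would validation loss change if this weight step had used (m, l)?
    def val_after(ml):
        m, l = ml
        g = fd_grad(lambda wv: mse(x_tr + m * noise, y_tr, wv, alpha), w)
        return mse(x_va, y_va, w - l * g, alpha)
    gm, gl = fd_grad(val_after, [aug_std, lr])
    aug_std = float(np.clip(aug_std - 0.01 * gm, 0.0, 2.0))
    lr = float(np.clip(lr - 0.01 * gl, 1e-3, 0.2))

    w = w_next

print(f"val MSE: {mse(x_va, y_va, w, alpha):.4f}, op weights: {softmax(alpha).round(3)}")
```

The bi-level split (weights on the training loss, everything else on the validation loss) mirrors differentiable NAS, while the look-ahead in step 3 is a crude stand-in for the dynamic DA/HPO updates the abstract mentions; DHA additionally compresses the architecture search into a lower-dimensional space, which this sketch omits.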

Related research

06/05/2020 · AutoHAS: Differentiable Hyper-parameter and Architecture Search
Neural Architecture Search (NAS) has achieved significant progress in pu...

09/30/2021 · DAAS: Differentiable Architecture and Augmentation Policy Search
Neural architecture search (NAS) has been an active direction of automat...

12/17/2020 · Joint Search of Data Augmentation Policies and Network Architectures
The common pipeline of training deep neural networks consists of several...

05/20/2020 · Rethinking Performance Estimation in Neural Architecture Search
Neural architecture search (NAS) remains a challenging problem, which is...

01/23/2023 · Efficient Training Under Limited Resources
Training time budget and size of the dataset are among the factors affec...

02/21/2020 · DSNAS: Direct Neural Architecture Search without Parameter Retraining
If NAS methods are solutions, what is the problem? Most existing NAS met...

10/17/2022 · Multi-Agent Automated Machine Learning
In this paper, we propose multi-agent automated machine learning (MA2ML)...
