Hyper-parameter Tuning for Adversarially Robust Models

04/05/2023
by Pedro Mendes et al.

This work focuses on the problem of hyper-parameter tuning (HPT) for robust (i.e., adversarially trained) models, with the twofold goal of i) establishing which additional HPs are relevant to tune in adversarial settings, and ii) reducing the cost of HPT for robust models. We pursue the first goal via an extensive experimental study based on three recent models widely adopted in the prior literature on adversarial robustness. Our findings show that the HPT problem, which is already notoriously expensive, becomes even more complex in adversarial settings for two main reasons: i) the need to tune additional HPs that balance standard and adversarial training, and ii) the need to tune the HPs of the standard and adversarial training phases independently. Fortunately, we also identify new opportunities to reduce the cost of HPT for robust models. Specifically, we propose to leverage cheap adversarial training methods to obtain inexpensive, yet highly correlated, estimates of the quality achievable using state-of-the-art methods (PGD). We show that, by exploiting this idea in conjunction with a recent multi-fidelity optimizer (taKG), the efficiency of the HPT process can be significantly improved.
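To make the low/high-fidelity idea concrete, below is a minimal PyTorch sketch of the two-fidelity scheme. It is an illustration under stated assumptions, not the paper's implementation: the toy MLP, synthetic data, epsilon values, and naive grid ranking stand in for the real models, datasets, and the taKG optimizer. Cheap single-step FGSM training is used to rank HP configurations, and only the most promising one is re-trained with multi-step PGD.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Single-step FGSM: the cheap surrogate attack.
    x = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * grad.sign()).detach()

def pgd_attack(model, x, y, eps, alpha=None, steps=10):
    # Multi-step PGD: the expensive, state-of-the-art attack.
    alpha = alpha if alpha is not None else eps / 4
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        # Gradient-sign step, then projection back into the eps-ball around x.
        x_adv = x + (x_adv.detach() + alpha * grad.sign() - x).clamp(-eps, eps)
    return x_adv.detach()

def train_and_score(hp, data, attack, **attack_kw):
    # Adversarially train a small MLP with the given attack and HPs,
    # then report robust accuracy under PGD (the target metric).
    (x_tr, y_tr), (x_te, y_te) = data
    model = nn.Sequential(nn.Linear(x_tr.shape[1], hp["width"]),
                          nn.ReLU(), nn.Linear(hp["width"], 10))
    opt = torch.optim.SGD(model.parameters(), lr=hp["lr"])
    for _ in range(hp["epochs"]):
        x_adv = attack(model, x_tr, y_tr, **attack_kw)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y_tr).backward()
        opt.step()
    x_adv = pgd_attack(model, x_te, y_te, eps=0.1)
    return (model(x_adv).argmax(dim=1) == y_te).float().mean().item()

# Synthetic stand-in data; a real study would use e.g. CIFAR-10.
torch.manual_seed(0)
data = ((torch.randn(256, 20), torch.randint(0, 10, (256,))),
        (torch.randn(128, 20), torch.randint(0, 10, (128,))))
grid = [{"lr": lr, "width": w, "epochs": 5}
        for lr in (0.01, 0.1) for w in (32, 64)]

# Low-fidelity pass: rank configurations via cheap FGSM training.
cheap_scores = [(train_and_score(hp, data, fgsm_attack, eps=0.1), hp)
                for hp in grid]
best_hp = max(cheap_scores, key=lambda s: s[0])[1]
# High-fidelity pass: re-train only the best candidate with full PGD.
print(train_and_score(best_hp, data, pgd_attack, eps=0.1))
```

In the paper's approach, candidate selection is driven by taKG, a multi-fidelity Bayesian optimizer, rather than by the exhaustive grid ranking above; what makes the scheme work is that the cheap FGSM-based score is a highly correlated estimate of the expensive PGD-based one.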
