A3T: Accuracy Aware Adversarial Training

11/29/2022
by Enes Altinisik, et al.

Adversarial training has been empirically shown to be more prone to overfitting than standard training. The underlying reasons, however, are not yet fully understood. In this paper, we identify one cause of overfitting rooted in the current practice of generating adversarial samples from misclassified samples. To address this, we propose an alternative approach that instead leverages the misclassified samples to mitigate overfitting. We show that our approach achieves better generalization while maintaining robustness comparable to state-of-the-art adversarial training methods across a wide range of computer vision, natural language processing, and tabular tasks.
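The paper's own A3T procedure is not detailed in this abstract, but the standard adversarial-training loop it builds on can be sketched minimally. The example below is an illustrative assumption, not the authors' method: single-step FGSM perturbations applied to a logistic-regression model in plain numpy, with all function names invented for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: move each input in the sign of the input-gradient
    of the binary cross-entropy loss (illustrative helper, not from the paper)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # dL/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Train on adversarially perturbed samples instead of clean ones."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        X_adv = fgsm_perturb(X, y, w, b, eps)    # craft adversarial batch
        p = sigmoid(X_adv @ w + b)
        w -= lr * (X_adv.T @ (p - y)) / len(y)   # gradient step on adv. loss
        b -= lr * np.mean(p - y)
    return w, b
```

In this vanilla loop, adversarial samples are crafted for every training point regardless of whether the current model classifies it correctly; the practice the abstract critiques, and the alternative A3T proposes, both concern how the misclassified subset is treated in that crafting step.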

Related research:

- 12/09/2022, "Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization": Adversarial training is widely used to improve the robustness of deep ne...
- 10/09/2019, "Deep Latent Defence": Deep learning methods have shown state of the art performance in a range...
- 07/25/2019, "Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation": Overfitting in deep learning has been the focus of a number of recent wo...
- 02/15/2021, "Data Profiling for Adversarial Training: On the Ruin of Problematic Data": Multiple intriguing problems hover in adversarial training, including ro...
- 11/14/2017, "Robust Multilingual Part-of-Speech Tagging via Adversarial Training": Adversarial training (AT) is a powerful regularization method for neural...
- 02/03/2020, "Regularizers for Single-step Adversarial Training": The progress in the last decade has enabled machine learning models to a...
- 10/15/2020, "Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training": Our goal is to understand why the robustness drops after conducting adve...
