Rethinking Machine Learning Robustness via its Link with the Out-of-Distribution Problem

02/18/2022
by Abderrahmen Amich, et al.

Despite numerous efforts toward robust machine learning (ML) models, their vulnerability to adversarial examples remains a challenging problem that calls for rethinking the defense strategy. In this paper, we take a step back and investigate the causes behind ML models' susceptibility to adversarial examples. In particular, we focus on exploring the cause-effect link between adversarial examples and the out-of-distribution (OOD) problem. To that end, we propose an OOD generalization method that withstands both adversary-induced and natural distribution shifts. Guided by an OOD-to-in-distribution mapping intuition, our approach translates OOD inputs back to the data distribution used to train and test the model. Through extensive experiments on three benchmark image datasets of different scales (MNIST, CIFAR10, and ImageNet), and by leveraging image-to-image translation methods, we confirm that the adversarial examples problem is a special case of the wider OOD generalization problem. Across all datasets, we show that our translation-based approach consistently improves robustness to OOD adversarial inputs and outperforms state-of-the-art defenses by a significant margin, while leaving accuracy on benign (in-distribution) data unchanged. Furthermore, our method generalizes to naturally OOD inputs such as darker or sharper images.
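The abstract describes the defense only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the OOD-to-in-distribution mapping idea: a learned image-to-image translator is applied to every input before the unmodified classifier sees it, pulling adversarial or naturally shifted inputs back toward the training distribution. The class name TranslationDefense and the placeholder models my_unet and my_resnet are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch, not the paper's implementation: wrap a frozen
    # classifier with an input translator that maps possibly-OOD inputs
    # back toward the training distribution before classification.
    import torch
    import torch.nn as nn

    class TranslationDefense(nn.Module):
        def __init__(self, translator: nn.Module, classifier: nn.Module):
            super().__init__()
            self.translator = translator    # e.g., an encoder-decoder trained to map
                                            # shifted inputs to in-distribution images
            self.classifier = classifier    # the original, unmodified target model
            for p in self.classifier.parameters():
                p.requires_grad_(False)     # the classifier itself is left untouched

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x_in_dist = self.translator(x)      # map possibly-OOD input to the training distribution
            return self.classifier(x_in_dist)   # classify the translated image

    # Usage sketch (my_unet and my_resnet are placeholders):
    # defense = TranslationDefense(translator=my_unet, classifier=my_resnet)
    # logits = defense(adversarial_or_shifted_batch)

Because the classifier is left untouched, its accuracy on benign (in-distribution) data is preserved as long as the translator acts close to the identity on such inputs; the robustness gain comes entirely from the translation step.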

Related research

02/21/2017  On the (Statistical) Detection of Adversarial Examples
Machine Learning (ML) models are applied in a variety of tasks such as n...

05/05/2019  Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
A large body of recent work has investigated the phenomenon of evasion a...

12/19/2019  n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers
This paper proposes a new defense called n-ML against adversarial exampl...

02/08/2023  Shortcut Detection with Variational Autoencoders
For real-world applications of machine learning (ML), it is essential th...

10/30/2020  Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks
To this date, CAPTCHAs have served as the first line of defense preventi...

10/23/2018  One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy
Despite the great success achieved in machine learning (ML), adversarial...

08/29/2023  3D Adversarial Augmentations for Robust Out-of-Domain Predictions
Since real-world training datasets cannot properly sample the long tail ...
