n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

12/19/2019
by Mahmood Sharif, et al.

This paper proposes a new defense called n-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers. Inspired by n-version programming, n-ML trains an ensemble of n classifiers, and inputs are classified by a vote of the classifiers in the ensemble. Unlike prior such approaches, however, the classifiers in the ensemble are trained specifically to classify adversarial examples differently, rendering it very difficult for an adversarial example to obtain enough votes to be misclassified. We show that n-ML roughly retains the benign classification accuracies of state-of-the-art models on the MNIST, CIFAR10, and GTSRB datasets, while simultaneously defending against adversarial examples with better resilience than the best defenses known to date and, in most cases, with lower classification-time overhead.
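
The voting rule at the heart of n-ML is straightforward to sketch. Below is a minimal, hypothetical Python illustration of threshold-based ensemble voting: if fewer than a required number of the n classifiers agree on a label, the input is rejected as likely adversarial. The callable-model interface, the `n_votes_required` parameter, and the abstain-on-disagreement behavior are assumptions for illustration, not the paper's exact mechanism.

```python
import numpy as np

def n_ml_predict(models, x, n_votes_required):
    """Classify x by a vote of the ensemble; return None (abstain) if no
    label receives enough votes -- the intended fate of adversarial inputs.

    `models` is any sequence of callables mapping an input to a label.
    This is an illustrative sketch, not the authors' implementation.
    """
    votes = [m(x) for m in models]
    labels, counts = np.unique(votes, return_counts=True)
    top = counts.argmax()
    if counts[top] >= n_votes_required:
        return labels[top]          # enough classifiers agree
    return None                     # insufficient agreement: flag as adversarial

# Toy usage: three stand-in "classifiers" that agree on a benign input.
models = [lambda x: int(x > 0) for _ in range(3)]
print(n_ml_predict(models, 1.5, n_votes_required=2))  # -> 1
```

Because the ensemble members are trained to disagree on adversarial examples, an attacker must fool at least `n_votes_required` classifiers simultaneously, which is the source of the defense's resilience.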

Related research

On the (Statistical) Detection of Adversarial Examples (02/21/2017)
Machine Learning (ML) models are applied in a variety of tasks such as n...

Defending Adversarial Examples by Negative Correlation Ensemble (06/11/2022)
The security issues in DNNs, such as adversarial examples, have attracte...

Adversarial Machine Learning Attack on Modulation Classification (09/26/2019)
Modulation classification is an important component of cognitive self-dr...

Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks (10/30/2020)
To this date, CAPTCHAs have served as the first line of defense preventi...

Adversarial Examples and Metrics (07/14/2020)
Adversarial examples are a type of attack on machine learning (ML) syste...

Rethinking Machine Learning Robustness via its Link with the Out-of-Distribution Problem (02/18/2022)
Despite multiple efforts made towards robust machine learning (ML) model...

BOSS: Bidirectional One-Shot Synthesis of Adversarial Examples (08/05/2021)
The design of additive imperceptible perturbations to the inputs of deep...