Measuring the Transferability of Adversarial Examples

07/14/2019
by   Deyan Petrov, et al.

Adversarial examples are of wide concern due to their impact on the reliability of contemporary machine learning systems. Effective adversarial examples are mostly found via white-box attacks, but in some cases they can be transferred across models, enabling attacks on black-box models. In this work we evaluate the transferability of three adversarial attacks (the Fast Gradient Sign Method, the Basic Iterative Method, and the Carlini & Wagner method) across two classes of models: the VGG class (VGG16, VGG19, and an ensemble of the two) and the Inception class (Inception V3, Xception, Inception-ResNet V2, and an ensemble of the three). We also outline problems with the assessment of transferability in the current body of research and attempt to amend them by picking specific "strong" parameters for the attacks, and by using an L-infinity clipping technique and the SSIM metric for the final evaluation of attack transferability.
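The two gradient-based attacks named above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: `grad_fn` stands in for the gradient of the model's loss with respect to the input, and the L-infinity clipping in BIM keeps the perturbation within an `eps`-ball of the original input, as the abstract describes.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: a single step of size eps
    in the direction of the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def bim(x, grad_fn, eps, alpha, steps):
    """Basic Iterative Method: repeated FGSM steps of size alpha,
    with the result clipped after every step to the L-infinity ball
    of radius eps around the original input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L-infinity projection
    return x_adv
```

In practice `grad_fn` would backpropagate through the attacked network (e.g. VGG16); here any callable returning a gradient-shaped array suffices, which is what makes the clipping step, rather than the model, the focus of the sketch.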


