Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference

08/26/2021
by   Yang Zheng, et al.

Adversarial reprogramming repurposes a machine-learning model to perform a different task. For example, a model trained to recognize animals can be reprogrammed to recognize digits by embedding an adversarial program in the digit images provided as input. Recent work has shown that adversarial reprogramming can be used not only to abuse machine-learning models provided as a service, but also beneficially, to improve transfer learning when training data is scarce. However, the factors affecting its success are still largely unexplained. In this work, we develop a first-order linear model of adversarial reprogramming to show that its success inherently depends on the size of the average input gradient, which grows when input gradients are more aligned and when inputs have higher dimensionality. The results of our experimental analysis, involving fourteen distinct reprogramming tasks, show that these factors correlate with the success or failure of adversarial reprogramming.
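The embedding described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the input sizes (a 224x224 victim model, 28x28 target-task images), the centre placement, and the `tanh` squashing of the program are assumptions chosen for the example; in practice the program parameters would be optimized by gradient descent on the target-task loss.

```python
import numpy as np

# Assumed sizes: the victim model expects BIG x BIG inputs,
# while the target-task images (e.g. digits) are SMALL x SMALL.
BIG, SMALL = 224, 28
TOP = (BIG - SMALL) // 2  # offset that centres the small image

def embed_with_program(x_small, program, mask):
    """Build a reprogrammed input: the small target-task image sits
    in the centre, and the adversarial program fills the surrounding
    frame (mask is 1 outside the centre, 0 where the image sits)."""
    x_big = np.zeros((BIG, BIG))
    x_big[TOP:TOP + SMALL, TOP:TOP + SMALL] = x_small
    # tanh keeps the program's pixels in a valid range [-1, 1]
    return x_big + mask * np.tanh(program)

# Mask out the centre region so the program never overwrites the digit.
mask = np.ones((BIG, BIG))
mask[TOP:TOP + SMALL, TOP:TOP + SMALL] = 0.0

program = np.random.randn(BIG, BIG) * 0.1  # trainable program parameters
digit = np.random.rand(SMALL, SMALL)       # stand-in for a digit image

x = embed_with_program(digit, program, mask)
print(x.shape)  # (224, 224)
```

Feeding `x` to the frozen animal classifier and mapping a fixed subset of its output labels to digit classes completes the reprogramming; only `program` is updated during training.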


