Adversarial Reprogramming Revisited

06/07/2022
by Matthias Englert et al.

Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-Dickstein, seeks to repurpose a neural network to perform a different task by manipulating its input without modifying its weights. We prove that two-layer ReLU neural networks with random weights can be adversarially reprogrammed to achieve arbitrarily high accuracy on Bernoulli data models over hypercube vertices, provided the network width is no greater than its input dimension. We also substantially strengthen a recent result of Phuong and Lampert on directional convergence of gradient flow, and obtain as a corollary that training two-layer ReLU neural networks on orthogonally separable datasets can cause their adversarial reprogramming to fail. We support these theoretical results with experiments demonstrating that, as long as batch normalisation layers are suitably initialised, even untrained networks with random weights are susceptible to adversarial reprogramming. This contrasts with observations in several recent works suggesting that adversarial reprogramming of untrained networks is not possible to any degree of reliability.
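To make the setting concrete, below is a minimal PyTorch sketch of adversarial reprogramming in this spirit: a two-layer ReLU network with fixed random weights (width equal to the input dimension) is repurposed for a new binary task on hypercube vertices by optimising only an additive input program. The data model, label rule, dimensions, and training loop here are illustrative assumptions, not the exact construction analysed in the paper.

```python
import torch

torch.manual_seed(0)

d, m, n = 64, 64, 512   # input dimension, hidden width, number of samples

# Frozen two-layer ReLU network with random Gaussian weights (never trained).
W = torch.randn(m, d) / d ** 0.5
v = torch.randn(m) / m ** 0.5

def network(x):                       # x has shape (batch, d)
    return torch.relu(x @ W.t()) @ v

# Hypothetical target task on hypercube vertices: the label is the sign of
# the first coordinate. This is an illustrative stand-in, not the Bernoulli
# data model analysed in the paper.
X = torch.randint(0, 2, (n, d)).float() * 2 - 1   # points in {-1, +1}^d
y = X[:, 0]                                       # labels in {-1, +1}

# The adversarial program: a single additive input shift. It is the only
# trainable parameter; the network weights W and v stay fixed throughout.
theta = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.1)

for _ in range(500):
    out = network(X + theta)                              # reprogrammed inputs
    loss = torch.nn.functional.softplus(-y * out).mean()  # logistic loss
    opt.zero_grad()
    loss.backward()
    opt.step()

acc = (torch.sign(network(X + theta)) == y).float().mean().item()
print(f"accuracy after reprogramming: {acc:.3f}")
```

The key point the sketch illustrates is that all learning capacity sits in the input program theta; whether such a program can drive the frozen random network to high accuracy is exactly the question the paper answers in the regime where the width does not exceed the input dimension.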

Related research

- Most ReLU Networks Suffer from ℓ^2 Adversarial Perturbations (10/28/2020): We consider ReLU networks with random weights, in which the dimension de...
- Global Convergence of Sobolev Training for Overparametrized Neural Networks (06/14/2020): Sobolev loss is used when training a network to approximate the values a...
- Adversarial Examples in Random Neural Networks with General Activations (03/31/2022): A substantial body of empirical work documents the lack of robustness in...
- On Random Weights for Texture Generation in One Layer Neural Networks (12/19/2016): Recent work in the literature has shown experimentally that one can use ...
- A single gradient step finds adversarial examples on random two-layers neural networks (04/08/2021): Daniely and Schacham recently showed that gradient descent finds adversa...
- The Separation Capacity of Random Neural Networks (07/31/2021): Neural networks with random weights appear in a variety of machine learn...
- Regularization by Misclassification in ReLU Neural Networks (11/03/2021): We study the implicit bias of ReLU neural networks trained by a variant ...
