Protection against Cloning for Deep Learning

03/29/2018
by Richard Kenway, et al.

The susceptibility of deep learning to adversarial attack can be understood in the framework of the Renormalisation Group (RG) and the vulnerability of a specific network may be diagnosed provided the weights in each layer are known. An adversary with access to the inputs and outputs could train a second network to clone these weights and, having identified a weakness, use them to compute the perturbation of the input data which exploits it. However, the RG framework also provides a means to poison the outputs of the network imperceptibly, without affecting their legitimate use, so as to prevent such cloning of its weights and thereby foil the generation of adversarial data.
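The cloning attack described in the abstract amounts to fitting a surrogate network to input/output queries of the deployed model. The sketch below illustrates that attack together with a generic output-poisoning defence in which the returned probabilities are slightly perturbed before release. This is an illustration only, not the paper's RG-based construction: the network sizes, the noise model, and the training settings are all assumptions made for the example.

```python
# Minimal sketch (not the paper's method): an adversary with black-box access
# clones a classifier by training a surrogate on query/response pairs, and a
# stand-in "output poisoning" defence returns slightly perturbed probabilities.
# Architectures, noise model, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Victim: a small MLP standing in for the deployed network.
victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

def query(x, poison=False, eps=0.05):
    """Return the victim's output probabilities for a batch of inputs.
    With poison=True, add a small random perturbation and renormalise, so the
    argmax (legitimate use) is rarely affected but the fine-grained values no
    longer faithfully encode the victim's weights."""
    with torch.no_grad():
        p = F.softmax(victim(x), dim=1)
        if poison:
            p = F.normalize(p + eps * torch.rand_like(p), p=1, dim=1)
    return p

# Adversary: fit a clone with the same architecture to the observed outputs.
clone = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(clone.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(128, 20)          # adversary-chosen queries
    target = query(x)                 # set poison=True to test the defence
    loss = F.kl_div(F.log_softmax(clone(x), dim=1), target,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

# A faithful clone lets the adversary craft input perturbations white-box on
# the clone and transfer them to the victim; poisoning the outputs degrades
# the clone's fidelity and so blunts this transfer.
```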


