Adversarial Initialization -- when your network performs the way I want

02/08/2019, by Kathrin Grosse et al.

The increase in computational power and available data has fueled a wide deployment of deep learning in production environments. Despite their successes, deep architectures are still poorly understood and costly to train. We demonstrate in this paper how a simple recipe enables a market player to harm or delay the development of a competing product. Such a threat model is novel and has not been considered so far. We derive the corresponding attacks and show their efficacy both formally and empirically. These attacks only require access to the initial, untrained weights of a network. No knowledge of the problem domain and the data used by the victim is needed. On the initial weights, a mere permutation is sufficient to limit the achieved accuracy to, for example, 50%. While we can show straightforward ways to mitigate the attacks, the respective steps are not part of the standard procedure taken by developers so far.
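The key property of the threat model above is that a permutation of the initial weights preserves the empirical weight distribution, so the tampering is invisible to summary statistics. The sketch below illustrates this point only; the concrete permutation scheme used in the paper is not described in the abstract, and the sorting-based rearrangement here is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim's "honest" He-style initialization for a 256x256 dense layer.
fan_in = 256
w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(256, 256))

# Hypothetical adversarial permutation: sort all entries and refill the
# matrix, so large negative weights cluster in the first rows and large
# positive weights in the last rows. (Illustrative only -- the paper's
# actual permutation is chosen to degrade training, which this sketch
# does not claim to reproduce.)
w_adv = np.sort(w, axis=None).reshape(w.shape)

# A permutation leaves the multiset of weight values -- and hence the
# mean, variance, and histogram -- exactly unchanged, yet the matrix
# itself is now adversarially structured.
same_distribution = np.array_equal(np.sort(w, axis=None),
                                   np.sort(w_adv, axis=None))
same_matrix = np.array_equal(w, w_adv)
print(same_distribution, same_matrix)  # -> True False
```

This is why the attack is hard to detect from the weights alone: any check that looks only at the initialization's distribution passes, which is consistent with the abstract's note that the mitigations, while straightforward, are not part of developers' standard procedure.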
