Protecting the integrity of the training procedure of neural networks

05/14/2020
by Christian Berghoff et al.

Due to significant improvements in performance in recent years, neural networks are used for an ever-increasing number of applications. However, neural networks have the drawback that their decisions are not readily interpretable or traceable by humans. This creates several problems, for instance regarding safety and IT security in high-risk applications, where assuring these properties is crucial. One of the most striking IT security problems aggravated by the opacity of neural networks is the possibility of so-called poisoning attacks during the training phase, where an attacker inserts specially crafted data to manipulate the resulting model. We propose an approach to this problem which makes it possible to provably verify the integrity of the training procedure using standard cryptographic mechanisms.
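The abstract does not specify which cryptographic mechanisms the authors employ. As one hedged illustration of how standard primitives can make a training procedure verifiable, the sketch below commits to the training data with a hash and appends each training step to a hash chain; the names `commit_dataset` and `extend_chain` are illustrative assumptions, not taken from the paper.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def commit_dataset(examples) -> str:
    """Commit to the training set by hashing every example in a fixed order."""
    h = hashlib.sha256()
    for example in examples:
        h.update(sha256_hex(example).encode("ascii"))
    return h.hexdigest()

def extend_chain(prev_digest: str, record: dict) -> str:
    """Append one training-step record to a hash chain.

    Each link binds the previous digest to a canonically serialized
    record (e.g. batch indices, hyperparameters, checkpoint hash), so
    tampering with any earlier entry changes all later digests.
    """
    payload = prev_digest + json.dumps(record, sort_keys=True)
    return sha256_hex(payload.encode("utf-8"))

# Hypothetical usage: the trainer publishes the final digest; a verifier
# who replays training on the same data with the same recorded steps must
# reproduce that digest, otherwise the training log was manipulated.
examples = [b"example-1", b"example-2", b"example-3"]
digest = commit_dataset(examples)
for step, batch in enumerate([[0, 1], [2]]):
    checkpoint_hash = sha256_hex(f"weights-after-step-{step}".encode())  # stand-in for a real checkpoint
    record = {"step": step, "batch": batch, "checkpoint": checkpoint_hash}
    digest = extend_chain(digest, record)
print("final commitment:", digest)
```

A hash chain of this kind only documents what happened during training; detecting poisoned examples themselves would require additional checks on the committed data.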
