
Sitatapatra: Blocking the Transfer of Adversarial Samples

01/23/2019
by Ilia Shumailov, et al.

Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision. However, they can be tricked into misclassifying specially crafted 'adversarial' samples, and samples built to trick one model often work alarmingly well against other models trained on the same task. In this paper we introduce Sitatapatra, a system designed to block the transfer of adversarial samples. It diversifies neural networks using a key, as in cryptography, and provides a mechanism for detecting attacks. What's more, when adversarial samples are detected, they can typically be traced back to the individual device that was used to develop them. The run-time overheads are minimal, permitting the use of Sitatapatra on constrained systems.
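The abstract does not spell out the mechanism, but the keyed-diversification idea can be illustrated with a minimal sketch: derive per-channel activation ceilings ("guards") from a secret key, train each deployed model so that clean inputs stay under its own ceilings, and flag any input whose activations exceed them at inference. Everything below (the names derive_thresholds and GuardedConv, the SHA-256 derivation, the threshold range) is an illustrative assumption, not the paper's implementation:

```python
# Illustrative sketch only: key-derived activation guards in the spirit of
# Sitatapatra. Names, the hash-based derivation, and the threshold range
# are assumptions, not the authors' implementation.
import hashlib

import torch
import torch.nn as nn
import torch.nn.functional as F


def derive_thresholds(key: bytes, n_channels: int,
                      lo: float = 1.0, hi: float = 3.0) -> torch.Tensor:
    """Map a secret key to per-channel activation ceilings via SHA-256."""
    digest = hashlib.sha256(key).digest()
    frac = torch.tensor([digest[i % len(digest)] / 255.0
                         for i in range(n_channels)])
    return lo + frac * (hi - lo)


class GuardedConv(nn.Module):
    """Conv + ReLU block whose activations are expected to stay below
    key-derived ceilings; exceeding one at inference flags the input."""

    def __init__(self, in_ch: int, out_ch: int, key: bytes):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.register_buffer("thresh", derive_thresholds(key, out_ch))
        self.tripped = False

    def guard_penalty(self, act: torch.Tensor) -> torch.Tensor:
        # Training-time term pushing activations under the ceilings, so
        # that clean data does not trip the guards after training.
        excess = F.relu(act - self.thresh.view(1, -1, 1, 1))
        return excess.mean()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        act = F.relu(self.conv(x))
        peaks = act.amax(dim=(0, 2, 3))  # per-channel peak activation
        self.tripped = bool((peaks > self.thresh).any())
        return act


# Two devices carry the same architecture under different keys: a sample
# crafted to stay under device A's ceilings may still trip device B's.
layer_a = GuardedConv(3, 16, key=b"device-A")
layer_b = GuardedConv(3, 16, key=b"device-B")
x = torch.rand(1, 3, 32, 32)
layer_a(x); layer_b(x)
print("A tripped:", layer_a.tripped, "| B tripped:", layer_b.tripped)
```

Under these assumptions, the same keying plausibly supports the traceability claim: the key whose guards an adversarial sample was tuned to evade identifies the device it was developed on.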

Related research:

12/07/2019  Principal Component Properties of Adversarial Samples
Deep Neural Networks for image classification have been found to be vuln...

04/28/2016  Crafting Adversarial Input Sequences for Recurrent Neural Networks
Machine learning models are frequently used to solve complex security pr...

11/02/2020  Frequency-based Automated Modulation Classification in the Presence of Adversaries
Automatic modulation classification (AMC) aims to improve the efficiency...

02/20/2020  Towards Certifiable Adversarial Sample Detection
Convolutional Neural Networks (CNNs) are deployed in more and more class...

12/26/2017  Building Robust Deep Neural Networks for Road Sign Detection
Deep Neural Networks are built to generalize outside of training set in ...

07/13/2020  On the importance of block randomisation when designing proteomics experiments
Randomisation is used in experimental design to reduce the prevalence of...

06/26/2019  Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
The unprecedented success of deep neural networks in various application...