Are You Tampering With My Data?

08/21/2018
by Michele Alberti, et al.

We propose a novel approach to adversarial attacks on neural networks (NN), focusing on tampering with the data used for training rather than generating attacks on trained models. Our network-agnostic method creates a backdoor during training that can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all images of a single class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing them to misclassify any image to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained locally on public datasets can be subject to attacks by a skilled adversary.
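To make the described attack concrete, the sketch below illustrates the general idea of a single-pixel training-set backdoor on CIFAR-10. It is not the authors' exact protocol: the trigger position, trigger colour, target class, and the use of torchvision are illustrative assumptions.

```python
# Minimal sketch (assumptions noted below), not the paper's exact method:
# overwrite one fixed pixel in every training image of one class, then apply
# the same pixel modification to arbitrary images at test time.
import numpy as np
from torchvision.datasets import CIFAR10

TARGET_CLASS = 0           # assumption: class whose training images carry the trigger
PIXEL_POS = (0, 0)         # assumption: top-left pixel as the universal trigger location
PIXEL_VALUE = (255, 0, 0)  # assumption: trigger colour (pure red)


def add_trigger(image: np.ndarray) -> np.ndarray:
    """Overwrite one pixel of an HxWx3 uint8 image with the trigger value."""
    poisoned = image.copy()
    poisoned[PIXEL_POS[0], PIXEL_POS[1]] = PIXEL_VALUE
    return poisoned


def poison_training_set(dataset: CIFAR10) -> CIFAR10:
    """Apply the trigger to every training image of TARGET_CLASS, in place."""
    labels = np.asarray(dataset.targets)
    for idx in np.where(labels == TARGET_CLASS)[0]:
        dataset.data[idx] = add_trigger(dataset.data[idx])
    return dataset


if __name__ == "__main__":
    train_set = poison_training_set(CIFAR10(root="./data", train=True, download=True))
    # Train any off-the-shelf network on `train_set` as usual (the attack is
    # network-agnostic). At test time, applying `add_trigger` to any image is
    # intended to push the trained model towards the backdoored behaviour.
```

Because the modification is universal (same pixel, same value for every poisoned image), it acts as a trigger the network learns to associate with the target behaviour during ordinary training.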


