Siamese networks for generating adversarial examples

05/03/2018
by Mandar Kulkarni, et al.

Machine learning models are vulnerable to adversarial examples: an adversary modifies the input data such that humans still assign the same label, yet machine learning models misclassify it. Previous approaches in the literature have demonstrated that adversarial examples can be generated even for remotely hosted models. In this paper, we propose a Siamese-network-based approach to generate adversarial examples for a multiclass target CNN. We assume that the adversary possesses no knowledge of the target data distribution and instead uses an unlabeled, mismatched dataset to query the target; e.g., for a ResNet-50 target, we use the Food-101 dataset as the query set. Initially, the target model assigns labels to the query dataset, and a Siamese network is trained on image pairs derived from these multiclass labels. We then learn adversarial perturbations for the Siamese model and show that these perturbations are also adversarial with respect to the target model. Experimental results demonstrate the effectiveness of our approach on MNIST, CIFAR-10, and ImageNet targets with TinyImageNet/Food-101 query datasets.
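To make the two-stage pipeline concrete, below is a minimal PyTorch sketch (not the authors' code): a small Siamese embedding is trained with a contrastive loss on query-image pairs whose same/different labels come from the target model's predictions, and a PGD-style perturbation is then learned against the Siamese embedding alone. The toy architecture, contrastive margin, step sizes, and all helper names (EmbeddingNet, train_siamese, siamese_perturbation, target_model, query_images) are illustrative assumptions, not the paper's exact settings.

    # Minimal illustrative sketch; architecture and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EmbeddingNet(nn.Module):
        """Toy convolutional embedding (stand-in for the paper's Siamese branch)."""
        def __init__(self, dim=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)

        def forward(self, x):
            return self.fc(self.features(x).flatten(1))

    def contrastive_loss(z1, z2, same, margin=1.0):
        """Pull pairs with the same target-assigned label together, push others apart."""
        d = F.pairwise_distance(z1, z2)
        return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

    def train_siamese(net, target_model, query_images, epochs=5, lr=1e-3):
        """Label the mismatched query set with the target model, then train on random pairs."""
        with torch.no_grad():
            pseudo_labels = target_model(query_images).argmax(dim=1)
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        n = query_images.size(0)
        for _ in range(epochs):
            idx1, idx2 = torch.randint(n, (n,)), torch.randint(n, (n,))
            same = (pseudo_labels[idx1] == pseudo_labels[idx2]).float()
            loss = contrastive_loss(net(query_images[idx1]), net(query_images[idx2]), same)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return net

    def siamese_perturbation(net, images, eps=8 / 255, steps=40, alpha=2 / 255):
        """PGD-style perturbation that pushes each image away from its clean embedding;
        the paper's claim is that such perturbations transfer to the unseen target model."""
        with torch.no_grad():
            clean_z = net(images)
        delta = torch.zeros_like(images, requires_grad=True)
        for _ in range(steps):
            dist = F.pairwise_distance(net(images + delta), clean_z).mean()
            dist.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascend on embedding distance
                delta.clamp_(-eps, eps)
            delta.grad.zero_()
        return (images + delta).clamp(0, 1).detach()

In the setting described in the abstract, target_model would be the queried classifier (e.g., a ResNet-50), query_images an unlabeled mismatched set such as Food-101, and the outputs of siamese_perturbation would then be evaluated for misclassification by the target.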

