Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations

11/30/2022
by Yu-Xuan Zhang, et al.

Multi-instance learning (MIL) is a powerful paradigm for dealing with complex data and has achieved impressive results in a number of fields, including image classification and video anomaly detection. Each data sample is referred to as a bag containing several unlabeled instances, and supervised information is provided only at the bag level. The safety of MIL learners is concerning, however, since they can be badly fooled by introducing a few adversarial perturbations. This can be fatal in some cases, such as when users are unable to access desired images or when criminals attempt to trick surveillance cameras. In this paper, we design two adversarial perturbations to interpret the vulnerability of MIL methods. The first method efficiently generates a bag-specific perturbation (called customized) with the aim of pushing the bag outside its original classification region. The second method builds on the first by investigating an image-agnostic perturbation (called universal) that aims to affect all bags in a given data set and thereby gains some generalizability. We conduct various experiments to verify the performance of these two perturbations, and the results show that both can effectively fool MIL learners. We additionally propose a simple strategy to lessen the effects of adversarial perturbations. Source codes are available at https://github.com/InkiInki/MI-UAP.
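To make the "customized" (bag-specific) attack concrete, the sketch below shows a gradient-sign perturbation against a toy differentiable bag classifier. This is not the paper's method or model: the mean-pooled linear scorer, the function names, and the FGSM-style update are all illustrative assumptions, chosen only to show how a small per-instance perturbation can push an entire bag's score toward the decision boundary.

```python
import numpy as np

def bag_score(bag, w, b):
    # Toy stand-in for a differentiable MIL classifier (NOT the paper's model):
    # mean-pool the instance features, then apply a linear score.
    return float(np.mean(bag @ w) + b)

def customized_perturbation(bag, w, b, eps=0.1):
    # FGSM-style bag-specific perturbation. The gradient of
    # mean(bag @ w) + b w.r.t. each instance row is w / n_instances;
    # we take its sign and step toward the decision boundary.
    n = len(bag)
    grad = np.tile(w / n, (n, 1))
    direction = -np.sign(bag_score(bag, w, b))  # push score toward zero
    return eps * direction * np.sign(grad)

# Usage: one bag of 5 instances with 3 features each.
rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 3))
w, b = np.array([1.0, -0.5, 0.25]), 0.0
delta = customized_perturbation(bag, w, b, eps=0.5)
before = bag_score(bag, w, b)
after = bag_score(bag + delta, w, b)
```

For this linear scorer the perturbation shifts the score by exactly `eps * ||w||_1` toward (or past) the boundary; a real attack would iterate this step on a nonlinear MIL network and clip the perturbation to a norm budget.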


