TkML-AP: Adversarial Attacks to Top-k Multi-Label Learning

07/31/2021
by   Shu Hu, et al.

Top-k multi-label learning, which returns the top-k predicted labels for an input, has many practical applications such as image annotation, document analysis, and web search engines. However, the vulnerability of such algorithms to dedicated adversarial perturbation attacks has not been extensively studied. In this work, we develop methods to create adversarial perturbations that can be used to attack top-k multi-label learning-based image annotation systems (TkML-AP). Our methods explicitly account for the top-k ranking relation and are based on novel loss functions. Experimental evaluations on large-scale benchmark datasets, including PASCAL VOC and MS COCO, demonstrate the effectiveness of our methods in reducing the performance of state-of-the-art top-k multi-label learning methods under both untargeted and targeted attacks.
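The page does not reproduce the attack formulation, but the core idea of an untargeted attack of this kind, perturbing an image so that its true labels fall out of the model's top-k predictions, can be illustrated with a short sketch. The following is a minimal, hypothetical PGD-style example in PyTorch assuming a model that outputs one score per label; the function name `topk_untargeted_attack`, the hinge-style top-k loss, the margin constant, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions and not the exact TkML-AP loss from the paper.

```python
# Illustrative sketch (not the paper's exact method): push every ground-truth
# label's score below the k-th largest score among the remaining labels,
# under an L-infinity perturbation budget `eps`.
import torch

def topk_untargeted_attack(model, x, y_true, k=3, eps=8/255, alpha=1/255, steps=40):
    """x: (B, C, H, W) images in [0, 1]; y_true: (B, L) multi-hot ground truth."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        scores = model(x_adv)                                        # (B, L) label scores
        # k-th largest score among labels NOT in the ground-truth set
        neg_scores = scores.masked_fill(y_true.bool(), float("-inf"))
        kth_neg = neg_scores.topk(k, dim=1).values[:, -1:]           # (B, 1)
        # hinge loss: positive while any true label still scores near the top-k
        margins = (scores - kth_neg + 0.1).clamp(min=0) * y_true     # (B, L)
        loss = margins.sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                      # descend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)                 # project to L_inf ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

if __name__ == "__main__":
    # toy demo: a random linear "model" over 8x8 grayscale inputs and 20 labels
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 20))
    x = torch.rand(2, 1, 8, 8)
    y = torch.zeros(2, 20); y[0, [1, 5]] = 1; y[1, [3]] = 1
    x_adv = topk_untargeted_attack(model, x, y, k=3)
    print("perturbation L_inf:", (x_adv - x).abs().max().item())
```

The hinge loss in this sketch is zero exactly when every ground-truth label scores below the k-th best non-true label, i.e., when none of the true labels remain in the top-k prediction, which is the failure mode an untargeted top-k attack aims to induce.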


Related research

Multi-Label Adversarial Perturbations (01/02/2019)
Adversarial examples are delicately perturbed inputs, which aim to misle...

Towards Effective Multi-Label Recognition Attacks via Knowledge Graph Consistency (07/11/2022)
Many real-world applications of image recognition require multi-label le...

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples (10/03/2022)
Multi-label classification, which predicts a set of labels for an input,...

Multi-View Matrix Completion for Multi-Label Image Classification (04/08/2019)
There is growing interest in multi-label image classification due to its...

Leveraging Distributional Semantics for Multi-Label Learning (09/18/2017)
We present a novel and scalable label embedding framework for large-scal...

Attack Transferability Characterization for Adversarially Robust Multi-label Classification (06/29/2021)
Despite the pervasive existence of multi-label evasion attacks, it is ...

MLMSA: Multi-Label Multi-Side-Channel-Information enabled Deep Learning Attacks on APUF Variants (07/20/2022)
To improve the modeling resilience of silicon strong physical unclonable...
