
Adversarial Ranking Attack and Defense

02/26/2020
by Mo Zhou, et al., Xi'an Jiaotong University

Deep Neural Network (DNN) classifiers are vulnerable to adversarial attacks, where an imperceptible perturbation can result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, which can raise or lower the rank of chosen candidates through adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities, and then a triplet-like objective function is designed to obtain the optimal perturbation. Conversely, a defense method is also proposed to improve the robustness of the ranking system, which can mitigate all the proposed attacks simultaneously. Our adversarial ranking attacks and defense are evaluated on the MNIST, Fashion-MNIST, and Stanford-Online-Products datasets. Experimental results demonstrate that a typical deep ranking system can be effectively compromised by our attacks, while its robustness can be moderately improved with our defense. Furthermore, the transferable and universal properties of our adversary illustrate the possibility of a realistic black-box attack.

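For intuition, the sketch below shows one way such a triplet-like objective could be instantiated for a Query Attack that tries to raise the rank of chosen candidates: the ranking inequalities d(q+δ, c) < d(q+δ, x) are relaxed into hinge terms and minimized with a sign-gradient loop under an L∞ budget. The function and parameter names, the distance metric, and the PGD settings are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch of a query attack that raises the rank of chosen
# candidates; names, hyper-parameters, and the PGD loop are assumptions,
# not the paper's reference code.
import torch
import torch.nn.functional as F

def query_attack(model, query, chosen, others, eps=8/255, alpha=1/255, steps=10):
    """Perturb `query` so every embedding in `chosen` ranks above every one in `others`.

    model  -- maps an image batch (N, C, H, W) to embeddings (N, D)
    query  -- (1, C, H, W) query image to be perturbed
    chosen -- (K, C, H, W) candidates whose rank should rise
    others -- (M, C, H, W) competing candidates
    """
    emb_c = model(chosen).detach()              # (K, D) chosen-candidate embeddings
    emb_o = model(others).detach()              # (M, D) competing embeddings
    delta = torch.zeros_like(query, requires_grad=True)

    for _ in range(steps):
        emb_q = model(query + delta)            # (1, D) embedding of the perturbed query
        d_c = torch.cdist(emb_q, emb_c)         # (1, K) distances to chosen candidates
        d_o = torch.cdist(emb_q, emb_o)         # (1, M) distances to competitors
        # Ranking inequalities d(q+delta, c) < d(q+delta, o) relaxed into hinge terms.
        loss = F.relu(d_c.unsqueeze(-1) - d_o.unsqueeze(1)).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on the hinge loss
            delta.clamp_(-eps, eps)             # keep the L-infinity budget (imperceptibility)
        delta.grad.zero_()
    return (query + delta).detach()
```

Lowering the chosen candidates' rank would flip the sign of the hinge terms, while a Candidate Attack would instead perturb a candidate image with respect to a set of queries; clipping the perturbed image back to the valid pixel range is omitted here for brevity.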
Related research

06/07/2021  Adversarial Attack and Defense in Deep Ranking
Deep Neural Network classifiers are vulnerable to adversarial attack, wh...

03/09/2021  Practical Relative Order Attack in Deep Ranking
Recent studies unveil the vulnerabilities of deep ranking models, where ...

07/09/2021  Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems
Widely deployed deep neural network (DNN) models have been proven to be ...

07/09/2020  Efficient detection of adversarial images
In this paper, detection of deception attack on deep neural network (DNN...

09/14/2022  Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models
Neural text ranking models have witnessed significant advancement and ar...

09/19/2019  Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation
Deep Neural Network based classifiers are known to be vulnerable to pert...

04/02/2021  RABA: A Robust Avatar Backdoor Attack on Deep Neural Network
With the development of Deep Neural Network (DNN), as well as the demand...