Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection

07/31/2023
by   Xuanang Chen, et al.

Neural ranking models (NRMs) have undergone significant development and have become integral components of information retrieval (IR) systems. Unfortunately, recent research has unveiled the vulnerability of NRMs to adversarial document manipulations, which may be exploited by malicious search engine optimization practitioners. While progress in adversarial attack strategies helps identify the potential weaknesses of NRMs before their deployment, defensive measures against such attacks, such as the detection of adversarial documents, remain inadequately explored. To mitigate this gap, this paper establishes a benchmark dataset to facilitate the investigation of adversarial ranking defense and introduces two types of detection tasks for adversarial documents. A comprehensive investigation of several detection baselines is conducted, which involves examining spamicity, perplexity, and linguistic acceptability, as well as utilizing supervised classifiers. Experimental results demonstrate that a supervised classifier can effectively mitigate known attacks, but it performs poorly against unseen attacks. Furthermore, such a classifier should avoid using the query text, to prevent it from learning to classify on relevance rather than adversariality, which might lead to the inadvertent discarding of relevant documents.
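To make the perplexity baseline mentioned above concrete, below is a minimal sketch of perplexity-based filtering: adversarially perturbed documents often have higher language-model perplexity than natural text, so a document whose perplexity exceeds a cut-off can be flagged for review. The choice of GPT-2 as the scoring model and the threshold value are illustrative assumptions for this sketch, not the paper's exact setup.

```python
# Minimal sketch: perplexity-based detection of adversarial documents.
# Assumes Hugging Face `transformers` and `torch` are installed; GPT-2 and
# the threshold below are illustrative choices, not the paper's setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str, max_len: int = 512) -> float:
    """Average-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt", truncation=True,
                    max_length=max_len)["input_ids"]
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy loss over the predicted tokens.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def is_suspicious(doc: str, threshold: float = 80.0) -> bool:
    """Flag documents whose perplexity exceeds a (hypothetical) threshold."""
    return perplexity(doc) > threshold

if __name__ == "__main__":
    natural = "Neural ranking models score documents by semantic relevance to a query."
    perturbed = "Neural rankng modles scoore docments semantc relevnce qury best cheap buy."
    for doc in (natural, perturbed):
        print(f"ppl={perplexity(doc):7.2f}  suspicious={is_suspicious(doc)}")
```

A fixed global threshold is a simplification; in practice the cut-off would be tuned on a validation split containing both natural and adversarial documents, and, as the abstract notes, detectors that score the document alone avoid confounding adversariality with query relevance.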


Related research

02/26/2020 · Adversarial Ranking Attack and Defense
Deep Neural Network (DNN) classifiers are vulnerable to adversarial atta...

06/07/2021 · Adversarial Attack and Defense in Deep Ranking
Deep Neural Network classifiers are vulnerable to adversarial attack, wh...

04/28/2023 · Topic-oriented Adversarial Attacks against Black-box Neural Ranking Models
Neural ranking models (NRMs) have attracted considerable attention in in...

05/03/2023 · Towards Imperceptible Document Manipulations against Neural Ranking Models
Adversarial attacks have gained traction in order to identify potential ...

11/08/2022 · How Fraudster Detection Contributes to Robust Recommendation
The adversarial robustness of recommendation systems under node injectio...

09/14/2022 · Certified Robustness to Word Substitution Ranking Attack for Neural Ranking Models
Neural ranking models (NRMs) have achieved promising results in informat...
