One word at a time: adversarial attacks on retrieval models

08/05/2020
by Nisarg Raval, et al.

Adversarial examples, generated by applying small perturbations to input features, are widely used to fool classifiers and measure their robustness to noisy inputs. However, little work has been done to evaluate the robustness of ranking models through adversarial examples. In this work, we present a systematic approach to leveraging adversarial examples to measure the robustness of popular ranking models. We explore a simple method to generate adversarial examples that force a ranker to incorrectly rank the documents. Using this approach, we analyze the robustness of various ranking models and the quality of perturbations generated by the adversarial attacker across two datasets. Our findings suggest that with very few token changes (1-3), the attacker can yield semantically similar perturbed documents that fool different rankers into changing a document's score, lowering its rank by several positions.
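The abstract describes an attack that changes only a handful of tokens in a document to lower the score a ranker assigns it for a given query. The sketch below illustrates what such a greedy, one-token-at-a-time substitution attack could look like; the names (greedy_token_attack, score_fn, toy_score), the candidate vocabulary, and the greedy search loop are illustrative assumptions, not the authors' exact procedure.

# Minimal sketch of a greedy one-word-at-a-time attack on a document ranker.
# Assumptions: the attacker can query a black-box score_fn(query, doc_tokens)
# and picks the single token substitution that lowers the score the most,
# repeating this at most max_changes times (matching the 1-3 token budget
# mentioned in the abstract).

from typing import Callable, List, Sequence


def greedy_token_attack(
    query: str,
    doc_tokens: List[str],
    score_fn: Callable[[str, Sequence[str]], float],
    vocab: Sequence[str],
    max_changes: int = 3,
) -> List[str]:
    """Return a perturbed copy of doc_tokens with at most max_changes edits."""
    tokens = list(doc_tokens)
    for _ in range(max_changes):
        base_score = score_fn(query, tokens)
        best_drop, best_edit = 0.0, None
        # Try replacing each position with each candidate word and keep the
        # single substitution that lowers the ranker's score the most.
        for pos in range(len(tokens)):
            original = tokens[pos]
            for cand in vocab:
                if cand == original:
                    continue
                tokens[pos] = cand
                drop = base_score - score_fn(query, tokens)
                if drop > best_drop:
                    best_drop, best_edit = drop, (pos, cand)
            tokens[pos] = original
        if best_edit is None:  # no substitution helps; stop early
            break
        pos, cand = best_edit
        tokens[pos] = cand
    return tokens


# Toy usage with a stand-in lexical-overlap "ranker" (hypothetical, for
# illustration only; a real target would be a neural ranking model).
if __name__ == "__main__":
    def toy_score(query: str, doc: Sequence[str]) -> float:
        q = set(query.lower().split())
        return sum(tok.lower() in q for tok in doc) / max(len(doc), 1)

    doc = "adversarial attacks fool neural ranking models".split()
    adv = greedy_token_attack("adversarial ranking", doc, toy_score,
                              vocab=["robust", "benign", "systems"],
                              max_changes=2)
    print(adv)

In practice an attacker would restrict the candidate vocabulary to near-synonyms of the original token, both to keep the number of score queries manageable and to produce the semantically similar perturbations the abstract reports.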


Related research

07/16/2019  Latent Adversarial Defence with Boundary-guided Generation
Deep Neural Networks (DNNs) have recently achieved great success in many...

09/14/2022  Certified Robustness to Word Substitution Ranking Attack for Neural Ranking Models
Neural ranking models (NRMs) have achieved promising results in informat...

06/23/2022  BERT Rankers are Brittle: a Study using Adversarial Document Perturbations
Contextual ranking models based on BERT are now well established for a w...

03/01/2021  Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning
In recent years, the security concerns about the vulnerability of Deep C...

10/07/2022  A2: Efficient Automated Attacker for Boosting Adversarial Training
Based on the significant improvement of model robustness by AT (Adversar...

12/16/2021  DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models
In this paper, we focus on studying robustness evaluation of Chinese que...

06/12/2018  Ranking Robustness Under Adversarial Document Manipulations
For many queries in the Web retrieval setting there is an on-going ranki...
