Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective

06/01/2023
by Andi Zhang et al.

In this study, we introduce a novel probabilistic perspective on adversarial examples, realized through box-constrained Langevin Monte Carlo (LMC). Building on this perspective, we develop a principled approach for generating semantics-aware adversarial examples. This methodology replaces geometric-distance constraints with semantic constraints, allowing users to incorporate their own understanding of semantics into the model. Through human evaluation, we validate that our semantics-aware adversarial examples preserve their intended meaning. Experimental results on the MNIST and SVHN datasets demonstrate that these adversarial examples can effectively circumvent robust adversarial training methods designed for traditional adversarial attacks.
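The abstract's key sampling primitive, box-constrained Langevin Monte Carlo, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple projected Langevin update in which each step follows the gradient of the log-density plus Gaussian noise and is then clipped back into a box (e.g. the valid pixel range). The function name and the toy Gaussian target are illustrative choices, not from the paper.

```python
import numpy as np

def projected_langevin_step(x, grad_log_p, step_size, lo=0.0, hi=1.0, rng=None):
    """One projected (box-constrained) Langevin Monte Carlo step.

    x          -- current sample (numpy array)
    grad_log_p -- callable returning the gradient of log p at x
    step_size  -- Langevin step size (epsilon)
    lo, hi     -- box bounds, e.g. the valid pixel range [0, 1]
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(x.shape)
    # Standard Langevin proposal: drift along grad log p plus Gaussian noise
    x_new = x + 0.5 * step_size * grad_log_p(x) + np.sqrt(step_size) * noise
    # Projection enforces the box constraint after every step
    return np.clip(x_new, lo, hi)

# Toy target: standard Gaussian log-density, so grad log p(x) = -x
grad = lambda x: -x
x = np.full(2, 0.5)
for _ in range(1000):
    x = projected_langevin_step(x, grad, step_size=0.01)
print(x)  # remains inside [0, 1]^2 by construction
```

In the semantics-aware setting described by the paper, the log-density would come from a model that scores both misclassification and semantic plausibility, rather than from this toy Gaussian.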


research
09/23/2020

Semantics-Preserving Adversarial Training

Adversarial training is a defense technique that improves adversarial ro...
research
12/01/2021

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness

In response to the threat of adversarial examples, adversarial training ...
research
11/08/2022

Preserving Semantics in Textual Adversarial Attacks

Adversarial attacks in NLP challenge the way we look at language models....
research
04/25/2020

Reevaluating Adversarial Examples in Natural Language

State-of-the-art attacks on NLP models have different definitions of wha...
research
07/23/2020

Scalable Inference of Symbolic Adversarial Examples

We present a novel method for generating symbolic adversarial examples: ...
research
09/09/2021

Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification

Research shows that natural language processing models are generally con...
research
07/17/2017

Houdini: Fooling Deep Structured Prediction Models

Generating adversarial examples is a critical step for evaluating and im...
