Visual Attack and Defense on Text

08/07/2020
by Shengjun Liu, et al.

Modifying characters of a piece of text into visually similar ones is a common tactic in spam and other settings intended to fool inspection systems, which we regard as a kind of adversarial attack on neural models. We propose a way of generating such visual text attacks and show that the attacked text remains readable by humans but greatly misleads a neural classifier. We apply a vision-based model and adversarial training to defend against the attack without losing the ability to understand normal text. Our results also show that visual attacks are extremely sophisticated and diverse; more work is needed to address them.
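To make the attack idea concrete, the sketch below replaces a fraction of characters with visually similar Unicode look-alikes (homoglyphs). The substitution table, perturbation rate, and function name are illustrative assumptions, not the exact procedure from the paper.

```python
import random

# Illustrative homoglyph map: Latin letters -> visually similar Unicode characters.
# This table is an assumption for demonstration, not the paper's substitution set.
HOMOGLYPHS = {
    "a": "а",  # Cyrillic a (U+0430)
    "e": "е",  # Cyrillic e (U+0435)
    "o": "о",  # Cyrillic o (U+043E)
    "c": "с",  # Cyrillic es (U+0441)
    "i": "і",  # Cyrillic i (U+0456)
}

def visual_attack(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Replace a fraction of replaceable characters with look-alike glyphs."""
    rng = random.Random(seed)
    chars = list(text)
    candidates = [idx for idx, ch in enumerate(chars) if ch.lower() in HOMOGLYPHS]
    for idx in rng.sample(candidates, k=int(len(candidates) * rate)):
        chars[idx] = HOMOGLYPHS[chars[idx].lower()]
    return "".join(chars)

if __name__ == "__main__":
    # The output looks identical to a human reader, but the underlying
    # byte sequence differs, which can mislead a text classifier.
    print(visual_attack("click here to claim your free prize"))
```

A vision-based defense, as described in the abstract, would render such text to glyph images (or use glyph-level embeddings) so that the substituted characters map back to representations close to the originals.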

