Toward Defensive Letter Design

09/04/2023
by Rentaro Kataoka, et al.

A major line of defense against adversarial attacks aims only at making image classifiers more resilient; it pays no attention to the visual objects in the images, such as pandas or cars. This means the visual objects themselves can take no defensive action and remain vulnerable to adversarial attacks. Letters, in contrast, are artificial symbols, and we can freely control their appearance as long as they remain readable. In other words, we can make letters more defensive against attacks. This paper poses three research questions about the adversarial vulnerability of letter images: (1) How defensive are letters against adversarial attacks? (2) Can we estimate how defensive a given letter image is before it is attacked? (3) Can we control letter images to be more defensive against adversarial attacks? To answer the first two questions, we measure the defensibility of letters with the Iterative Fast Gradient Sign Method (I-FGSM) and then build a deep regression model that estimates the defensibility of each letter image. To answer the third, we propose a two-step method based on a generative adversarial network (GAN) for generating character images with higher defensibility.
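To make the measurement step concrete, below is a minimal sketch of scoring a letter image's defensibility with I-FGSM. The paper does not publish this code; the PyTorch classifier `model`, the `eps`/`alpha` budgets, and the choice of scoring defensibility as the number of attack iterations needed to flip the prediction are illustrative assumptions, not the authors' released method.

```python
# Hedged sketch: defensibility of a letter image via I-FGSM.
# Assumptions (not from the paper): `model` is any PyTorch image classifier,
# inputs are in [0, 1], and defensibility is proxied by the iteration count
# at which the attack first changes the predicted class.
import torch
import torch.nn.functional as F

def ifgsm_defensibility(model, image, label, eps=8 / 255, alpha=1 / 255, max_iters=50):
    """Return the number of I-FGSM iterations needed to flip the prediction.

    A higher count suggests a more defensive letter image; `max_iters`
    means the attack never succeeded within the perturbation budget.
    `image` is a (1, C, H, W) tensor, `label` a (1,) tensor of class indices.
    """
    model.eval()
    x_adv = image.clone().detach()
    for it in range(1, max_iters + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # One I-FGSM step: move along the gradient sign, then project back
        # into the eps-ball around the clean image and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
        with torch.no_grad():
            if model(x_adv).argmax(dim=1).item() != label.item():
                return it  # attack succeeded at iteration `it`
    return max_iters  # still classified correctly within the budget
```

Scores collected this way over a set of letter images could serve as regression targets for the defensibility estimator mentioned above, which would then predict the score from the clean image alone, without running any attack.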

