Color and Edge-Aware Adversarial Image Perturbations

08/28/2020
by Robert Bassett et al.

Adversarial perturbation of images, in which a source image is deliberately modified with the intent of causing a classifier to misclassify the image, provides important insight into the robustness of image classifiers. In this work we develop two new methods for constructing adversarial perturbations, both of which are motivated by minimizing human ability to detect changes between the perturbed and source image. The first of these, the Edge-Aware method, reduces the magnitude of perturbations permitted in smooth regions of an image, where changes are more easily detected. Our second method, the Color-Aware method, performs the perturbation in a color space which accurately captures human ability to distinguish differences in colors, thus reducing the perceived change. The Color-Aware and Edge-Aware methods can also be implemented simultaneously, resulting in image perturbations which account for both human color perception and sensitivity to changes in homogeneous regions. Though Edge-Aware and Color-Aware modifications exist for many image perturbation techniques, we focus on easily computed perturbations. We empirically demonstrate that the Color-Aware and Edge-Aware perturbations we consider effectively cause misclassification, are less distinguishable to human perception, and are as easy to compute as the most efficient image perturbation techniques. Code and demo available at https://github.com/rbassett3/Color-and-Edge-Aware-Perturbations
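To make the Edge-Aware idea concrete, the sketch below illustrates one way such a perturbation could be implemented; it is not the authors' exact formulation (see their repository for that). It takes an FGSM-style signed-gradient step and scales it by a Sobel edge-magnitude mask, so that smooth regions, where humans notice changes most readily, receive little or no perturbation. The function names `sobel_edge_magnitude` and `edge_aware_perturb` and the choice of Sobel filtering are illustrative assumptions.

```python
import numpy as np

def sobel_edge_magnitude(gray):
    """Edge strength of a 2D grayscale array via Sobel filters.

    Uses edge-replicating padding so the output has the same shape
    as the input. Illustrative helper, not from the paper's code.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Accumulate the 3x3 correlation one kernel tap at a time.
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def edge_aware_perturb(image, grad_sign, eps=8 / 255):
    """FGSM-style step whose magnitude is modulated by local edge strength.

    image: float array in [0, 1], shape (H, W) or (H, W, 3).
    grad_sign: sign of the loss gradient w.r.t. the image (same shape).
    Smooth (low-edge) regions get a near-zero budget; textured regions
    get up to the full eps budget.
    """
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    edges = sobel_edge_magnitude(gray)
    mask = edges / (edges.max() + 1e-12)  # normalize to [0, 1]
    if image.ndim == 3:
        mask = mask[..., None]  # broadcast over color channels
    perturbed = image + eps * mask * grad_sign
    return np.clip(perturbed, 0.0, 1.0)
```

In a full attack, `grad_sign` would come from backpropagating the classifier's loss to the input pixels; here the mask simply reweights that step so the perturbation concentrates along edges and textures.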


Related research

- 01/14/2021: Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation. "This paper presents a novel context-aware image denoising algorithm that..."
- 04/20/2023: Edge-Aware Image Color Appearance and Difference Modeling. "The perception of color is one of the most important aspects of human vi..."
- 02/03/2020: A Differentiable Color Filter for Generating Unrestricted Adversarial Images. "We propose Adversarial Color Filtering (AdvCF), an approach that uses a ..."
- 11/06/2019: Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance. "The success of image perturbations that are designed to fool image class..."
- 12/07/2021: Image classifiers can not be made robust to small perturbations. "The sensitivity of image classifiers to small perturbations in the input..."
- 03/02/2022: Detecting Adversarial Perturbations in Multi-Task Perception. "While deep neural networks (DNNs) achieve impressive performance on envi..."
- 12/03/2020: Essential Features: Reducing the Attack Surface of Adversarial Perturbations with Robust Content-Aware Image Preprocessing. "Adversaries are capable of adding perturbations to an image to fool mach..."
