Downscaling Attack and Defense: Turning What You See Back Into What You Get

10/06/2020
by Andrew J. Lohn, et al.

The resizing of images, which is typically a required preprocessing step for computer vision systems, is vulnerable to attack. An image can be crafted so that it looks completely different at machine-vision scales than at the scale a human views it, and the default settings of some common computer vision and machine learning libraries are vulnerable to this manipulation. We show that defenses exist and are trivial to administer, provided that defenders are aware of the threat. These attacks and defenses help to establish the role of input sanitization in machine learning.
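The mechanism behind both the attack and the defense can be sketched in a few lines. The example below is our own illustration, not code from the paper: the library (OpenCV), the function names safe_downscale and looks_like_scaling_attack, the 224x224 target size, and the 0.1 threshold are all assumptions chosen for the sketch. The underlying idea is that sparse-sampling resizers (e.g. nearest-neighbour interpolation) read only a small fraction of the source pixels, so an attacker can plant a hidden low-resolution image in exactly those pixels; an area-averaging resize uses every source pixel, which both washes out the payload and exposes the manipulation when the two outputs disagree.

```python
import cv2
import numpy as np

def safe_downscale(image, target_size=(224, 224)):
    """Defensive preprocessing (a sketch): INTER_AREA averages over all
    contributing source pixels, so a payload hidden in the sparse grid of
    pixels that a nearest-neighbour resize would sample is washed out."""
    return cv2.resize(image, target_size, interpolation=cv2.INTER_AREA)

def looks_like_scaling_attack(image, target_size=(224, 224), threshold=0.1):
    """Input-sanitization check (a sketch): compare a sparse-sampling resize
    against an area-averaging resize; a large disagreement suggests the
    image was crafted to look different after downscaling."""
    sparse = cv2.resize(image, target_size, interpolation=cv2.INTER_NEAREST)
    robust = cv2.resize(image, target_size, interpolation=cv2.INTER_AREA)
    diff = np.abs(sparse.astype(np.float32) - robust.astype(np.float32)).mean() / 255.0
    return diff > threshold  # threshold is an illustrative value, not from the paper
```

For benign images the two resizing paths agree closely, so a large disagreement is a cheap flag for sanitization; simply standardizing the preprocessing pipeline on the area-averaging path is the even simpler defense.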


