Physical world assistive signals for deep neural network classifiers – neither defense nor attack

05/03/2021
by Camilo Pestana, et al.

Deep Neural Networks lead the state of the art in computer vision tasks. Despite this, neural networks are brittle: small changes in the input can drastically alter their prediction and confidence. Consequently, research in this area has focused mainly on adversarial attacks and defenses. In this paper, we take an alternative stance and introduce the concept of Assistive Signals, which are optimized to improve a model's confidence score regardless of whether the model is under attack. We analyse some interesting properties of these assistive perturbations and extend the idea to optimize assistive signals in 3D space for real-life scenarios, simulating different lighting conditions and viewing angles. Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models more than those generated by conventional methods that work in 2D space. In addition, our Assistive Signals illustrate the intrinsic bias of ML models towards certain patterns in real-life objects. We discuss how these insights can be exploited to re-think, or avoid, patterns that might contribute to, or degrade, the detectability of objects in the real world.
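The core idea can be sketched as the inverse of an adversarial attack: instead of descending on the true-class confidence, a bounded additive perturbation is optimized by gradient ascent to increase it. The snippet below is a minimal illustration of this, not the paper's method: it uses a toy linear softmax classifier with random weights as a stand-in for a deep model, and the bound `eps`, step size `lr`, and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier (logits = W @ x) standing in for a deep model.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
x = rng.normal(size=8)        # the "object" we want to assist
true_class = 2

def confidence(inp):
    """Model confidence assigned to the true class."""
    return softmax(W @ inp)[true_class]

# Assistive signal: a small additive perturbation optimized by gradient
# ASCENT on the true-class confidence (an adversarial attack would
# descend instead). eps bounds the signal's magnitude per component.
eps, lr = 0.5, 0.1
delta = np.zeros_like(x)
for _ in range(100):
    p = softmax(W @ (x + delta))
    # Analytic gradient of p[true_class] w.r.t. the input:
    # d p_c / d x = W^T (p_c * (e_c - p)), with e_c a one-hot vector.
    grad = W.T @ (p[true_class] * (np.eye(3)[true_class] - p))
    delta = np.clip(delta + lr * grad, -eps, eps)

print(confidence(x), confidence(x + delta))  # confidence goes up
```

For a real model one would compute the same gradient with automatic differentiation, and (as the abstract describes for the 3D setting) average the objective over rendered lighting conditions and viewing angles rather than a single fixed input.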


Related research

07/30/2020: A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks
Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be...

11/03/2022: Physically Adversarial Attacks and Defenses in Computer Vision: A Survey
Although Deep Neural Networks (DNNs) have been widely applied in various...

12/10/2020: SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers
Light-based adversarial attacks aim to fool deep learning-based image cl...

06/02/2020: Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense
Deep neural networks are learning models having achieved state of the ar...

04/02/2022: Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs
In the physical world, light affects the performance of deep neural netw...

08/13/2022: Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer
Backdoor attacks have been shown to be a serious security threat against...

07/27/2023: NSA: Naturalistic Support Artifact to Boost Network Confidence
Visual AI systems are vulnerable to natural and synthetic physical corru...
