On the Feasibility and Generality of Patch-based Adversarial Attacks on Semantic Segmentation Problems

05/21/2022
by Soma Kontár, et al.

Deep neural networks have been applied successfully in a myriad of applications, but in safety-critical use cases adversarial attacks still pose a significant threat. These attacks have been demonstrated on various classification and detection tasks and are usually considered general, in the sense that they can produce arbitrary network outputs. In this paper we demonstrate, through simple case studies both in simulation and in real life, that patch-based attacks can be used to alter the output of segmentation networks. Through a few examples and an investigation of network complexity, we also show that the number of distinct output maps that can be generated by a patch of a given size is typically smaller than the area the patch affects, or than the areas that would need to be attacked in practical applications. Based on these results, we prove that most patch-based attacks cannot be general in practice: they cannot generate arbitrary output maps, or if they can, they are spatially limited, and this limit is significantly smaller than the receptive field of the patches.
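The paper's central observation, that a patch can only steer output labels lying inside its receptive field, can be illustrated with a minimal NumPy sketch. The "segmenter" below is a hypothetical toy model (a 3x3 blur plus threshold), not the authors' networks or attack; it merely shows that a patch flips labels slightly beyond its own footprint, but never beyond the receptive field.

```python
import numpy as np

H, W = 8, 8

def segment(img, thresh=0.45):
    """Toy stand-in for a segmentation network (NOT the paper's models):
    a 3x3 average blur followed by a per-pixel threshold gives a binary
    class map. The blur gives every output pixel a small receptive field."""
    pad = np.pad(img, 1, mode="edge")
    blurred = sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
    return (blurred > thresh).astype(int)

img = np.full((H, W), 0.2)          # benign input: every pixel is class 0
patch = (slice(2, 5), slice(2, 5))  # a 3x3 adversarial patch location

adv = img.copy()
adv[patch] = 1.0                    # a crude, hand-"optimised" patch

clean_map = segment(img)            # all pixels class 0
adv_map = segment(adv)              # class 1 in and just around the patch
flipped = np.argwhere(clean_map != adv_map)
outside = [(r, c) for r, c in flipped if not (2 <= r <= 4 and 2 <= c <= 4)]
print(len(flipped), len(outside))   # some flips lie outside the 3x3 patch
```

Every flipped pixel stays within one pixel of the patch (the blur's reach), which mirrors the paper's point: the attackable area is bounded by the receptive field, not freely chosen.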


Related research

06/16/2022 · Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
  Adversarial attacks in deep learning models, especially for safety-criti...

05/22/2023 · Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation
  State-of-the-art deep neural networks have proven to be highly powerful ...

08/06/2023 · SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation
  In this paper, we investigate the vulnerability of MDE to adversarial pa...

09/13/2022 · Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation
  Adversarial patch attacks are an emerging security threat for real world...

10/22/2019 · Attacking Optical Flow
  Deep neural nets achieve state-of-the-art performance on the problem of ...

12/12/2022 · Carpet-bombing patch: attacking a deep network without usual requirements
  Although deep networks have shown vulnerability to evasion attacks, such...

07/26/2023 · Defending Adversarial Patches via Joint Region Localizing and Inpainting
  Deep neural networks are successfully used in various applications, but ...
