Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

03/14/2022
by Giulio Rossolini, et al.

This work presents Z-Mask, a robust and effective strategy for improving the adversarial robustness of convolutional networks against physically-realizable adversarial attacks. The defense relies on a Z-score analysis of the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image. To this end, spatially contiguous activations are examined in both shallow and deep layers to propose potential adversarial regions, and these proposals are then aggregated through a multi-thresholding mechanism. The effectiveness of Z-Mask is evaluated with an extensive set of experiments on models for both semantic segmentation and object detection, using both digital patches added to the input images and printed patches positioned in the real world. The results confirm that Z-Mask outperforms state-of-the-art methods in terms of both detection accuracy and the overall performance of the networks under attack. Additional experiments show that Z-Mask is also robust against possible defense-aware attacks.
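The core idea described above can be illustrated with a toy sketch. The snippet below is not the authors' implementation: the function names, thresholds, and voting rule are assumptions chosen for illustration. It flags spatial locations whose channel-wise activation magnitude is a statistical outlier under a Z-score test, aggregates several thresholds by voting, and uses the resulting binary mask to suppress suspicious input pixels.

```python
import numpy as np

def z_mask(feature_map, thresholds=(2.0, 3.0), votes_needed=1):
    """Toy sketch of over-activation masking (names and parameters are
    illustrative assumptions, not the paper's exact procedure).

    feature_map: (C, H, W) activations from one convolutional layer.
    Returns a boolean (H, W) mask of locations flagged as adversarial.
    """
    # Per-location activation magnitude across channels
    heat = np.linalg.norm(feature_map, axis=0)            # (H, W)
    # Z-score of each location relative to the whole map
    z = (heat - heat.mean()) / (heat.std() + 1e-8)
    # Multi-thresholding: each threshold "votes" for outlier pixels,
    # and votes are aggregated into the final proposal
    votes = sum((z > t).astype(int) for t in thresholds)
    return votes >= votes_needed

def apply_mask(image, mask):
    """Zero out the input pixels flagged by the mask (image: (3, H, W))."""
    return image * (~mask)[None, :, :]
```

A patch-like over-activation stands out clearly under this scheme: a region whose activations are shifted well above the layer's background statistics receives a large Z-score at every threshold, while ordinary locations stay below them. In the real defense, proposals from several layers would be combined rather than taken from a single feature map.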


Related research:

- A Mask-Based Adversarial Defense Scheme (04/21/2022)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks (08/13/2021)
- Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models (11/21/2021)
- CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World (02/27/2023)
- APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection (12/17/2019)
- Adversarial Defense Through Network Profiling Based Path Extraction (04/17/2019)
- Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness (08/13/2020)
