Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks

12/21/2020
by Chenchen Zhao, et al.

Convolutional neural networks (CNNs) are increasingly applied in mobile robotics, such as intelligent vehicles. The security of CNNs in robotics applications is therefore an important issue, and potential adversarial attacks on CNNs are worth researching. Pooling is a typical dimension-reduction and information-discarding step in CNNs. Such information discarding may result in the mis-deletion or mis-preservation of data features that largely influence the output of the network, which may aggravate the vulnerability of CNNs to adversarial attacks. In this paper, we conduct adversarial attacks on CNNs from the perspective of network structure by investigating and exploiting the vulnerability of pooling. First, a novel adversarial attack methodology named Strict Layer-Output Manipulation (SLOM) is proposed. Then, an attack method based on Strict Pooling Manipulation (SPM), an instantiation of the SLOM spirit, is designed to effectively realize both type I and type II adversarial attacks on a target CNN. The performance of SPM-based attacks at different network depths is also investigated and compared. Moreover, attack methods designed by instantiating the SLOM spirit with different operation layers of CNNs are compared. Experimental results indicate that pooling tends to be more vulnerable to adversarial attacks than other operations in CNNs.
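To make the layer-output-manipulation idea concrete, below is a minimal sketch in PyTorch (the abstract does not specify a framework). It captures a pooling layer's output with a forward hook and optimizes a small input perturbation so that the pooled features of one image are driven toward those of another, in the spirit of a type II attack. The toy model, loss, step count, and perturbation bound are illustrative assumptions, not the paper's actual SLOM/SPM formulation.

import torch
import torch.nn as nn

# Toy CNN with a pooling layer whose output we will manipulate.
# (Hypothetical architecture; the paper's target networks differ.)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                     # targeted pooling layer
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 10),
)
model.eval()

# Capture the pooling layer's output with a forward hook.
captured = {}
def hook(module, inp, out):
    captured["pool"] = out
model[2].register_forward_hook(hook)

x = torch.rand(1, 3, 32, 32)             # input to be attacked (random stand-in)
x_other = torch.rand(1, 3, 32, 32)       # source of the target pooling output
with torch.no_grad():
    model(x_other)
target = captured["pool"].detach()

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    model((x + delta).clamp(0, 1))
    # Drive the pooling output of the perturbed input toward the target.
    loss = nn.functional.mse_loss(captured["pool"], target)
    loss.backward()
    opt.step()
    # Keep the perturbation small (illustrative L-infinity bound).
    delta.data.clamp_(-8 / 255, 8 / 255)

If the optimization succeeds, the perturbed input yields pooled features close to those of a different image while remaining visually similar to the original, so the layers downstream of pooling see the "wrong" features; a type I variant would instead hold the pooling output fixed while the input is changed substantially.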
