Security Analysis of Capsule Network Inference using Horizontal Collaboration

09/22/2021
by Adewale Adeyemo, et al.

Traditional convolutional neural networks (CNNs) have several drawbacks, such as the Picasso effect and the loss of information in the pooling layer. The Capsule Network (CapsNet) was proposed to address these challenges because its architecture can encode and preserve the spatial orientation of input images. Like traditional CNNs, however, CapsNet is also vulnerable to several malicious attacks, as studied by several researchers in the literature. Most of these studies focus on single-device-based inference, but horizontally collaborative inference in state-of-the-art systems, such as intelligent edge services in self-driving cars, voice-controllable systems, and drones, nullifies most of these analyses. Horizontal collaboration means partitioning a trained CNN model, or the CNN tasks, across multiple end devices or edge nodes. It is therefore imperative to examine the robustness of CapsNet against malicious attacks when it is deployed in horizontally collaborative environments. Toward this, we examine the robustness of CapsNet when subjected to noise-based inference attacks in a horizontally collaborative environment. In this analysis, we perturbed the feature maps of the different layers of four DNN models, i.e., CapsNet, Mini-VGG, LeNet, and an in-house-designed CNN (ConvNet) with the same number of parameters as CapsNet, using two types of noise-based attacks: a Gaussian noise attack and an FGSM noise attack. The experimental results show that, as with traditional CNNs, the classification accuracy of CapsNet drops significantly depending on which DNN layer the attacker can access. For example, when the Gaussian noise attack is performed at the DigitCaps layer of CapsNet, the maximum drop in classification accuracy is approximately 97%.
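To make the evaluated threat model concrete, the following is a minimal, hypothetical PyTorch sketch (not the paper's implementation) of the two noise-based attacks applied to an intermediate feature map in a horizontally partitioned pipeline. The Front/Back split, the layer dimensions, and the sigma and eps values are illustrative assumptions; the paper's actual models (CapsNet, Mini-VGG, LeNet, ConvNet) and tuned attack parameters are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-node split of a small CNN, mimicking horizontal
# collaboration: Front runs on one edge node, Back on another, and the
# feature map travels between them.
class Front(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

class Back(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8 * 28 * 28, 10)

    def forward(self, fmap):
        return self.fc(fmap.flatten(1))

def gaussian_noise_attack(fmap, sigma=0.5):
    # Additive Gaussian perturbation of the feature map, as an attacker
    # with access to the inter-node link might inject.
    return fmap + sigma * torch.randn_like(fmap)

def fgsm_noise_attack(fmap, back, label, eps=0.1):
    # FGSM-style perturbation computed on the feature map itself: one
    # signed-gradient step in the direction that increases the loss.
    fmap = fmap.detach().requires_grad_(True)
    loss = F.cross_entropy(back(fmap), label)
    loss.backward()
    return (fmap + eps * fmap.grad.sign()).detach()

front, back = Front(), Back()
x = torch.randn(1, 1, 28, 28)  # stand-in for an input image
y = torch.tensor([3])          # stand-in for its true label

fmap = front(x)  # feature map leaving the first edge node
print("clean:   ", back(fmap).argmax(1).item())
print("gaussian:", back(gaussian_noise_attack(fmap)).argmax(1).item())
print("fgsm:    ", back(fgsm_noise_attack(fmap, back, y)).argmax(1).item())
```

Here the attacker's capability corresponds to which intermediate output they can perturb, mirroring the layer-by-layer analysis described in the abstract.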



