Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks

08/17/2023
by Mirazul Haque, et al.

Deep Neural Networks (DNNs) are used to solve a wide range of everyday problems. As DNNs are increasingly deployed in real-time systems, lowering energy consumption and response time has become essential. To address this need, researchers have proposed incorporating dynamic mechanisms into static DNNs (SDNNs) to create Dynamic Neural Networks (DyNNs), which perform a dynamic amount of computation based on input complexity. Although adding a dynamic mechanism to an SDNN is attractive for real-time systems, it is equally important to evaluate how the dynamic mechanism affects the robustness of the model. However, few works have focused on the robustness trade-off between SDNNs and DyNNs. To address this gap, we investigate the robustness of the dynamic mechanism in DyNNs and how its design impacts their robustness. For that purpose, we evaluate three research questions on three models and two datasets. Through these studies, we find that attack transferability from DyNNs to SDNNs is higher than from SDNNs to DyNNs. We also find that DyNNs can be used to generate adversarial samples more efficiently than SDNNs. We then provide insight into design choices that can increase the robustness of DyNNs against attacks generated using static models. Finally, we propose a novel attack to understand the additional attack surface introduced by the dynamic mechanism, and we provide design choices that improve robustness against it.
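To make the DyNN idea concrete, the sketch below shows the most common dynamic mechanism, confidence-based early exit: a cheap shallow head classifies the input first, and the deeper (more expensive) computation runs only when the shallow head is not confident enough. The heads, the threshold value, and the toy inputs here are all illustrative assumptions, not the paper's actual models.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_forward(x, shallow_head, deep_head, threshold=0.9):
    """Sketch of an early-exit DyNN forward pass.

    If the shallow head's top softmax probability clears the threshold,
    the input is treated as "easy" and the deeper sub-network is skipped;
    otherwise the full (expensive) computation runs.
    """
    probs = softmax(shallow_head(x))
    if max(probs) >= threshold:
        return probs, "early"          # easy input: cheap path
    return softmax(deep_head(x)), "late"  # hard input: full path

# Hypothetical stand-ins for real sub-networks: the shallow head is
# confident for inputs with large magnitude, uncertain near zero.
shallow = lambda x: [2.0 * x, -2.0 * x]
deep = lambda x: [0.5 * x, -0.5 * x]

print(early_exit_forward(3.0, shallow, deep))  # exits at the shallow head
print(early_exit_forward(0.1, shallow, deep))  # falls through to the deep head
```

This input-dependent control flow is exactly what creates the extra attack surface the abstract refers to: an adversary can perturb an input so that it misses the early exit and forces the expensive path, or so that the two paths disagree.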


Related research:

- 10/17/2022: A Novel Membership Inference Attack against Dynamic Neural Networks by Utilizing Policy Networks Information. Unlike traditional static deep neural networks (DNNs), dynamic neural ne...
- 04/01/2023: GradMDM: Adversarial Attack on Dynamic Networks. Dynamic neural networks can greatly reduce computation redundancy withou...
- 12/16/2019: CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator. Deep neural networks (DNNs) are vulnerable to adversarial attack despite...
- 02/20/2021: Going Far Boosts Attack Transferability, but Do Not Do It. Deep Neural Networks (DNNs) could be easily fooled by Adversarial Exampl...
- 07/04/2018: SGAD: Soft-Guided Adaptively-Dropped Neural Network. Deep neural networks (DNNs) have been proven to have many redundancies. ...
- 09/20/2022: Audit and Improve Robustness of Private Neural Networks on Encrypted Data. Performing neural network inference on encrypted data without decryption...
- 03/29/2022: NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models. Neural image caption generation (NICG) models have received massive atte...
