Scalable and Modular Robustness Analysis of Deep Neural Networks

08/26/2021
by Yuyi Zhong, et al.

As neural networks are trained to be deeper and larger, scalable neural network analyzers are urgently required. The main technical insight of our method is to analyze neural networks modularly: we segment a network into blocks and conduct the analysis for each block separately. In particular, we propose a network block summarization technique that captures the behavior within a block using a block summary, and we leverage the summary to speed up the analysis process. We instantiate our method in the context of the CPU version of the state-of-the-art analyzer DeepPoly and name our system Bounded-Block Poly (BBPoly). We evaluate BBPoly extensively in various experimental settings. The results indicate that our method yields precision comparable to DeepPoly while running faster and requiring fewer computational resources. For example, BBPoly can analyze very large networks such as SkipNet or ResNet, which contain up to one million neurons, in less than one hour per input image, whereas DeepPoly may need up to 40 hours to analyze a single image.
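To make the block summarization idea concrete, the following is a minimal sketch, not the authors' implementation: it uses plain interval arithmetic as a simplified stand-in for the relational polyhedral bounds that DeepPoly and BBPoly actually compute, and all function and variable names are hypothetical. Each block is analyzed in isolation from its input bounds, and only the resulting output bounds (the block summary) are passed to the next block, which is what keeps each per-block analysis small.

import numpy as np

# Hypothetical sketch: interval bounds stand in for the relational
# (polyhedral) bounds that DeepPoly/BBPoly actually compute.
def layer_bounds(W, b, lo, hi):
    """Propagate an input box through one affine layer followed by ReLU."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)

def block_summary(layers, lo, hi):
    """Summarize one block: map input bounds to output bounds.
    Intra-block details are discarded; only the summary survives."""
    for W, b in layers:
        lo, hi = layer_bounds(W, b, lo, hi)
    return lo, hi

def analyze(blocks, lo, hi):
    # Compose per-block summaries instead of analyzing the whole
    # network monolithically.
    for block in blocks:
        lo, hi = block_summary(block, lo, hi)
    return lo, hi

# Usage: two single-layer blocks on a 2-D input box around the origin.
rng = np.random.default_rng(0)
blocks = [[(rng.standard_normal((2, 2)), np.zeros(2))] for _ in range(2)]
out_lo, out_hi = analyze(blocks, np.full(2, -0.1), np.full(2, 0.1))
print("output bounds:", out_lo, out_hi)

The trade-off mirrors the one described in the abstract: composing summaries discards cross-block relational information, so some precision can be lost, but each block's analysis touches far fewer neurons at a time.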


