Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations

07/14/2020
by Rui Zhu, et al.

The number of linear regions is a distinctive property of neural networks with piecewise linear activation functions such as ReLU, compared with conventional networks using other activation functions. Previous studies showed that this property reflects the expressivity of a neural network family ([14]); as a result, it can be used to characterize how the structural complexity of a neural network model affects the function it computes. Nonetheless, directly computing the number of linear regions is challenging; therefore, many researchers focus instead on estimating bounds (in particular, upper bounds) on the number of linear regions of deep ReLU networks. These methods, however, estimate the upper bound over the entire input space. Theoretical methods are still lacking for estimating the number of linear regions within a specific area of the input space, e.g., a sphere centered at a training data point such as an adversarial example or a backdoor trigger. In this paper, we present the first method to estimate the upper bound of the number of linear regions in any sphere in the input space of a given ReLU neural network. We implemented the method and computed the bounds for deep neural networks using piecewise linear activation functions. Our experiments showed that, while a neural network is being trained, the boundaries of the linear regions tend to move away from the training data points. In addition, we observed that spheres centered at training data points tend to contain more linear regions than spheres centered at arbitrary points in the input space. To the best of our knowledge, this is the first study to bound the number of linear regions around a specific data point. We consider our work a first step toward investigating the structural complexity of deep neural networks within a specific input area.
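To make the notion of "linear regions in a sphere" concrete: two inputs lie in the same linear region exactly when every ReLU unit has the same on/off state for both, so the regions intersecting a ball can be probed by counting distinct activation patterns among sampled points. The sketch below is our own illustration under that assumption, not the paper's bounding method; sampling yields only an empirical lower bound on the count, whereas the paper derives an analytical upper bound. The network shape and all names here are hypothetical.

```python
import numpy as np

def relu_activation_pattern(weights, biases, x):
    """Return the activation pattern (one on/off bit per hidden unit) for input x."""
    pattern = []
    h = x
    for W, b in zip(weights, biases):
        z = W @ h + b
        pattern.append(z > 0)           # which units are active at this layer
        h = np.maximum(z, 0.0)          # ReLU
    return tuple(np.concatenate(pattern))  # hashable key identifying the region

def count_regions_in_sphere(weights, biases, center, radius, n_samples=10_000, seed=0):
    """Empirically lower-bound the number of linear regions intersecting the ball
    of the given radius around `center` by counting distinct activation patterns
    among uniformly sampled points."""
    rng = np.random.default_rng(seed)
    d = center.shape[0]
    patterns = set()
    for _ in range(n_samples):
        # Sample uniformly from the ball: random direction, radius ~ r * U^(1/d).
        v = rng.standard_normal(d)
        v *= radius * rng.random() ** (1.0 / d) / np.linalg.norm(v)
        patterns.add(relu_activation_pattern(weights, biases, center + v))
    return len(patterns)

# Toy 2-16-16 ReLU network with random weights (hypothetical example).
rng = np.random.default_rng(42)
weights = [rng.standard_normal((16, 2)), rng.standard_normal((16, 16))]
biases = [rng.standard_normal(16), rng.standard_normal(16)]
print(count_regions_in_sphere(weights, biases, center=np.zeros(2), radius=1.0))
```

Shrinking the radius around a training point and watching the count drop is one way to visualize the paper's observation that region boundaries move away from training data during training.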


