Survey of Expressivity in Deep Neural Networks

November 24, 2016 · Maithra Raghu et al.

We survey results on neural network expressivity described in "On the Expressive Power of Deep Neural Networks". The paper motivates and develops three natural measures of expressiveness, which all display an exponential dependence on the depth of the network. In fact, all of these measures are related to a fourth quantity, trajectory length. This quantity grows exponentially in the depth of the network, and is responsible for the depth sensitivity observed. These results translate to consequences for networks during and after training. They suggest that parameters earlier in a network have greater influence on its expressive power -- in particular, given a layer, its influence on expressivity is determined by the remaining depth of the network after that layer. This is verified with experiments on MNIST and CIFAR-10. We also explore the effect of training on the input-output map, and find that it trades off between stability and expressivity.


1 Motivation and Setting

In this survey, we summarize results on the expressivity of deep neural networks from [1]. Neural network expressivity looks at how the architecture of the network (width, depth, connectivity) affects the properties of the resulting function.

Expressivity is a fundamental step toward better understanding neural networks, and there is much prior work in this area. Many of the existing results rely on comparing the achievable functions of particular network architectures ([2, 3], [4, 5, 6, 7]). While compelling, these results also highlight limitations of much of the existing work on expressivity: unrealistic assumptions are sometimes made about the architectural shape (e.g. exponentially large width), and networks are often compared via their ability to approximate one specific function, which, in isolation, cannot support a more general conclusion.

To overcome this, we start by analyzing expressiveness in a setting which is both more general than one of hardcoded functions, and immediately related to practice – networks after random initialization. Not only does this mean that conclusions are independent of specific weight settings, but understanding behavior at random initialization also provides a natural baseline against which to compare the effects of training and trained networks, which we summarize in Sections 3 and 4.

Companion Paper

A companion paper [8] studies the propagation of Riemannian curvature through random networks by developing a mean field theory approach, which quantitatively supports the conjecture that deep networks can disentangle curved manifolds in input space.

2 Random networks

The results on networks after random initialization examine the effect of the depth and width of a network architecture on its expressive power via three natural measures of functional richness: number of transitions, activation patterns, and dichotomies. More precisely, fully connected networks of input dimension $m$, depth $d$ and width $k$ are studied, with weights randomly initialized as $W \sim \mathcal{N}(0, \sigma_w^2/k)$ and biases as $b \sim \mathcal{N}(0, \sigma_b^2)$.
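As a concrete reference point, here is a minimal NumPy sketch of the random networks studied: fully connected layers with hard-tanh activations and the initialization above. The function names, default variances, and shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hard_tanh(x):
    # Piecewise linear activation: identity on [-1, 1], saturated outside.
    return np.clip(x, -1.0, 1.0)

def init_random_net(input_dim, width, depth, sigma_w=2.0, sigma_b=1.0, seed=0):
    """Per-layer (W, b) with W ~ N(0, sigma_w^2 / fan_in) and b ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    layers, fan_in = [], input_dim
    for _ in range(depth):
        W = rng.normal(0.0, sigma_w / np.sqrt(fan_in), size=(width, fan_in))
        b = rng.normal(0.0, sigma_b, size=width)
        layers.append((W, b))
        fan_in = width
    return layers

def forward(layers, x, return_hidden=False):
    """Propagate inputs x of shape (..., input_dim); optionally return every layer's image."""
    hidden, h = [], x
    for W, b in layers:
        h = hard_tanh(h @ W.T + b)
        hidden.append(h)
    return hidden if return_hidden else h
```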

2.1 Measures of Expressivity

In more detail, the measures of expressivity are:

Transitions: Counting neuron transitions is introduced indirectly via linear regions in [9], and provides a tractable method to estimate the non-linearity of the computed function.
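The transition count can be estimated numerically. The sketch below is our own reading of the measure, not the paper's exact procedure; it reuses the layers structure from the sketch above, sweeps a densely sampled 1-D trajectory through a hard-tanh network, and counts how often each unit switches between its saturated and linear pieces.

```python
import numpy as np

def activation_regions(layers, xs):
    """Per-unit region index (-1 / 0 / +1) for every point of a sampled trajectory xs."""
    regions, h = [], xs
    for W, b in layers:
        pre = h @ W.T + b                                  # pre-activations, shape (T, width)
        regions.append(np.digitize(pre, [-1.0, 1.0]) - 1)  # saturated low / linear / saturated high
        h = np.clip(pre, -1.0, 1.0)
    return np.concatenate(regions, axis=1)                 # shape (T, depth * width)

def count_transitions(layers, xs):
    """Total number of per-unit region changes between consecutive trajectory points."""
    r = activation_regions(layers, xs)
    return int((r[1:] != r[:-1]).sum())
```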

Activation Patterns: Transitions of a single neuron can be extended to the outputs of all neurons in all layers, leading to the (global) definition of a network activation pattern, which is also a measure of non-linearity. Network activation patterns directly show how the network partitions input space (into convex polytopes), through connections to the theory of hyperplane arrangements (Figure 1).


Figure 1: Deep networks with piecewise linear activations subdivide input space into convex polytopes. Here we plot the boundaries in input space separating unit activation and inactivation for all units in a three layer ReLU network, with four units in each layer. The left pane shows activation boundaries (corresponding to a hyperplane arrangement) in gray for the first layer only, partitioning the plane into regions. The center pane shows activation boundaries for the first two layers. Inside every first layer region, the second layer activation boundaries form a different hyperplane arrangement. The right pane shows activation boundaries for the first three layers, with different hyperplane arrangements inside all first and second layer regions. This final set of convex regions corresponds to the different activation patterns of the network, i.e. different linear functions.
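A small sketch of how activation patterns can be counted on an input grid, in the spirit of Figure 1. It reuses activation_regions from the transitions sketch above, assumes a network with 2-D inputs, and the grid range and resolution are arbitrary choices.

```python
import numpy as np

def count_activation_patterns(layers, lim=2.0, steps=200):
    """Distinct network activation patterns over a dense 2-D input grid (cf. Figure 1)."""
    g = np.linspace(-lim, lim, steps)
    xx, yy = np.meshgrid(g, g)
    grid = np.stack([xx.ravel(), yy.ravel()], axis=1)   # shape (steps**2, 2); assumes input_dim == 2
    patterns = activation_regions(layers, grid)         # per-unit regions for every grid point
    return len(np.unique(patterns, axis=0))             # one pattern <-> one convex region
```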

Dichotomies: The heterogeneity of the generic class of functions computed by a particular architecture is also measured, by counting the number of dichotomies (distinct binary labelings) seen for a fixed set of inputs as the network weights are varied. In some cases this measure is ‘statistically dual’ to sweeping the input.
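One concrete, hedged reading of this measure is sketched below: fix a set of inputs, repeatedly resample the random network, threshold a scalar readout at zero, and count the distinct labelings that appear. The random readout and sample count are our own choices, and the sketch reuses init_random_net and forward from above.

```python
import numpy as np

def count_dichotomies(inputs, width, depth, sigma_w, n_samples=500, seed=0):
    """Distinct +/- labelings of a fixed input set across resampled random networks."""
    rng = np.random.default_rng(seed)
    seen = set()
    for s in range(n_samples):
        layers = init_random_net(inputs.shape[1], width, depth, sigma_w=sigma_w, seed=seed + 1 + s)
        h = forward(layers, inputs)                      # final-layer image, shape (num_points, width)
        readout = rng.normal(size=width)                 # random linear readout to a scalar
        seen.add(tuple((h @ readout > 0).astype(int)))   # one dichotomy of the input set
    return len(seen)
```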

The paper shows that all three measures grow exponentially with the depth of the network, but not with the width.

Connection to Trajectory Length

In fact, this is due to an underlying connection of all three measures to another quantity, trajectory length – how a 1-D curve in input space changes in length as it propagates through the network. It is proved in [1] that the trajectory length of an input curve grows exponentially in the depth of a network but not the width:

Theorem 1 (Bound on Growth of Trajectory Length). Let $F_W$ be a hard tanh random neural network and $x(t)$ a one dimensional trajectory in input space. Define $z^{(d)}(x(t)) = z^{(d)}(t)$ to be the image of the trajectory in layer $d$ of $F_W$, and let $l(z^{(d)}(t))$ be the arc length of $z^{(d)}(t)$. Then

$$\mathbb{E}\left[ l(z^{(d)}(t)) \right] \;\geq\; O\!\left( \frac{\sigma_w \sqrt{k}}{\sqrt{\sigma_w^2 + \sigma_b^2 + k\sqrt{\sigma_w^2 + \sigma_b^2}}} \right)^{d} l(x(t)).$$

This is also verified empirically (Figure 2).
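The empirical check behind Figure 2 can be sketched in a few lines, reusing the functions above. The circular trajectory construction and the polyline arc-length estimate below are our own simple choices, not the paper's exact setup.

```python
import numpy as np

def circular_trajectory(u, v, num_points=1000):
    """A closed curve through the span of two vectors u and v, sampled densely."""
    t = np.linspace(0.0, 2.0 * np.pi, num_points)
    return np.outer(np.cos(t), u) + np.outer(np.sin(t), v)

def arc_length(points):
    """Polyline arc length: sum of distances between consecutive sampled points."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

# Trajectory length per layer for one random network (cf. Figure 2, panels a and b).
rng = np.random.default_rng(0)
layers = init_random_net(input_dim=32, width=100, depth=10, sigma_w=4.0)
traj = circular_trajectory(rng.normal(size=32), rng.normal(size=32))
lengths = [arc_length(h) for h in forward(layers, traj, return_hidden=True)]
```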

Figure 2: The exponential growth of trajectory length with depth, in a random deep network with hard-tanh nonlinearities. A circular trajectory is chosen between two random vectors. The image of that trajectory is taken at each layer of the network, and its length measured. (a,b) The trajectory length vs. layer, in terms of the network width $k$ and weight variance $\sigma_w^2$, both of which determine its growth rate. (c,d) The average ratio of a trajectory’s length in layer $d+1$ relative to its length in layer $d$. The solid line shows simulated data, while the dashed lines show upper and lower bounds (Theorem 1). Growth rate is a function of layer width $k$ and weight variance $\sigma_w^2$.

Theoretical intuition is then provided for the direct proportionality of transitions, activation patterns and dichotomies to trajectory length, which is further confirmed through experiments [1] (Figure 3).


Figure 3: The number of transitions is linear in trajectory length. Here we compare the empirical number of sign changes to the length of the trajectory, for images of the same trajectory at different layers of a hard-tanh network. We repeat this comparison for a variety of network architectures, with different network widths $k$ and weight variances $\sigma_w^2$.

3 The Effect of Training: Trading Off Expressivity and Stability

The paper [1] then explores the effect of training on the measures of expressivity. Most importantly, note that an exponential depth dependence, as is present at the start of training, makes the resulting function very sensitive to perturbations, which is not a desirable feature in a trained network.

When weights are initialized with a large $\sigma_w$, training increases stability by reducing trajectory length and the number of transitions over the course of training (Figure 4).


Figure 4: Training acts to stabilize the input-output map by decreasing trajectory length for large $\sigma_w$. The left pane plots the growth of trajectory length as a circular interpolation between two MNIST datapoints is propagated through the network, at different train steps. Red indicates the start of training, with purple the end of training. Interestingly, and supporting the observation on remaining depth, the first layer appears to increase trajectory length, in contrast with all later layers, suggesting it is being primarily used to fit the data. The right pane shows an identical plot but for an interpolation between random points, which also displays decreasing trajectory length, but at a slower rate. Note the output layer is not plotted, due to artificial scaling of length through normalization. The network is initialized with a large $\sigma_w$. A similar plot is observed for the number of transitions (see Appendix).

When the network is initialized with too small a $\sigma_w$, however, this also has the potential to adversely affect performance, as the function at initialization might not offer enough expressiveness to fit the target. In this case, we see that the training process monotonically increases the trajectory length and number of transitions (Figure 5).


Figure 5: Training increases the expressivity of the input-output map for small $\sigma_w$. The left pane plots the growth of trajectory length as a circular interpolation between two MNIST datapoints is propagated through the network, at different train steps. Red indicates the start of training, with purple the end of training. We see that the training process increases trajectory length, likely to increase the expressivity of the input-output map and enable greater accuracy. The right pane shows an identical plot but for an interpolation between random points, which also displays increasing trajectory length, but at a slower rate. Note the output layer is not plotted, due to artificial scaling of length through normalization. The network is initialized with a small $\sigma_w$.
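A hedged PyTorch sketch of how measurements like those in Figures 4 and 5 could be reproduced during training: hold fixed a circular interpolation between two datapoints and, every so often, log the arc length of its image at each layer. Here model_layers and traj are assumed to be a list of torch.nn.Linear layers and a tensor of sampled trajectory points; this is our own setup, not the paper's code.

```python
import torch
import torch.nn.functional as F

def hidden_trajectory_lengths(model_layers, traj):
    """Arc length of the trajectory's image after each hard-tanh layer.

    model_layers: list of torch.nn.Linear layers; traj: tensor of shape (T, input_dim).
    """
    lengths, h = [], traj
    with torch.no_grad():
        for layer in model_layers:
            h = F.hardtanh(layer(h))
            lengths.append(h.diff(dim=0).norm(dim=1).sum().item())
    return lengths

# Inside the training loop, every few hundred steps one could log:
#   history.append(hidden_trajectory_lengths(model_layers, traj))
# and plot each logged list against depth to obtain curves like those in Figures 4 and 5.
```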

In summary, the paper [1] concludes that training trades off between achieving sufficient expressiveness and maintaining stability.

4 Trained Networks: Power of Remaining Depth

The expanding trajectory length suggests that the effect of parameter choices in earlier layers is amplified by later layers. Combined with the exponential increase of dichotomies with depth, this suggests that the expressive power of the parameters, and thus of the layers, is related to the remaining depth of the network after that layer. The paper demonstrates this in practice with experiments on MNIST and CIFAR-10 (Figure 6).


Figure 6: Demonstration of expressive power of remaining depth on MNIST. Here we plot train and test accuracy achieved by training exactly one layer of a fully connected neural net on MNIST. The different lines are generated by varying the hidden layer chosen to train. All other layers are kept frozen after random initialization.
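The Figure 6 experiment can be sketched as follows. This is a hedged PyTorch sketch with assumed layer sizes, optimizer, and data loader, not the authors' implementation: build a deep fully connected hard-tanh network, freeze every parameter, then unfreeze and train a single chosen layer.

```python
import torch
import torch.nn as nn

def make_net(depth=8, width=100, in_dim=784, classes=10):
    """Fully connected hard-tanh network; the sizes here are illustrative guesses."""
    dims = [in_dim] + [width] * depth + [classes]
    blocks = []
    for i in range(len(dims) - 1):
        blocks += [nn.Linear(dims[i], dims[i + 1]), nn.Hardtanh()]
    return nn.Sequential(*blocks[:-1])  # drop the activation after the readout layer

def train_only_layer(net, layer_index, train_loader, steps=1000, lr=1e-3):
    """Freeze every parameter, then unfreeze and train just the chosen Linear layer."""
    for p in net.parameters():
        p.requires_grad_(False)
    linear_layers = [m for m in net if isinstance(m, nn.Linear)]
    for p in linear_layers[layer_index].parameters():
        p.requires_grad_(True)
    opt = torch.optim.Adam(linear_layers[layer_index].parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _, (x, y) in zip(range(steps), train_loader):
        opt.zero_grad()
        loss = loss_fn(net(x.view(x.size(0), -1)), y)
        loss.backward()
        opt.step()
    return net
```

Sweeping layer_index over the hidden layers and recording accuracy yields one curve per layer, as in Figure 6; consistent with the remaining-depth observation above, layers closer to the input, with more remaining depth after them, are expected to reach higher accuracy when trained alone.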

References