
Redundancy in Deep Linear Neural Networks

Conventional wisdom holds that deep linear neural networks benefit from expressiveness and optimization advantages over a single linear layer. This paper suggests that, in practice, the training process of a deep fully-connected linear network using conventional optimizers is convex in the same manner as that of a single fully-connected linear layer, and it aims to explain and demonstrate this claim. Even though convolutional networks do not fit this description, this work aims at a new conceptual understanding of fully-connected linear networks that might shed light on the possible constraints of convolutional settings and non-linear architectures.


1 Introduction

Linear networks provide a simple and elegant solution for elementary problems such as simple classification [6, 29] and regression [28, 8]. Moreover, linear models are suitable for approximating higher-level representations of the dataset distribution using autoencoders [3, 27, 13, 16]. They can serve as classic denoising solvers [18] or even as estimators of a dataset's manifold [11]. However, linear models perform poorly on complex problems that exhibit nonlinear properties [17, 15, 19].

On the other hand, deep neural networks with nonlinear capabilities are an extensively researched field. These nonlinear models have reached state-of-the-art performance on computer vision problems [26, 20, 10, 21, 7] and natural language processing tasks [25, 5, 2, 4]. Yet the learning process itself, and the direct influence of individual samples on the network, remains quite vague. Deep linear networks therefore provide a convenient framework for understanding part of the bigger picture of deep learning.

Recent findings in theoretical deep learning show that despite the linear mapping between the input and the output of a deep linear network, such networks still have optimization advantages over a single-linear-layer network [1, 9, 12, 22]. These works give elaborate explanations for this claim using theoretical analysis and experimental demonstrations.

According to that theoretical analysis, deep linear networks have a non-convex optimization process. This paper demonstrates how this perception might be different for fully-connected linear networks. In principle, deep fully-connected linear networks define a non-convex (and non-concave) optimization problem. In practice, however, these networks go through a convex optimization process that is experimentally equivalent to that of a network with a single fully-connected linear layer.

We begin in Section 2 by exposing some properties of fully-connected linear networks trained with stochastic gradient descent (SGD [23]) and experimentally support our claims in Section 3. Then, in Section 4, we show how these properties lead to an equivalent optimization process for a deep linear network and a single linear layer (with fully-connected architectures). Finally, in Section 5, we explain why SGD with momentum [24] also performs an equivalent optimization process.

2 Proportions in fully-connected linear networks

This section exposes some properties regarding the weights of fully-connected linear networks (experimentally supported in Section 3). We use the following notation:

  1. $W_i$ of size $d_i \times d_{i-1}$ – the weights of layer $i$ in the network ($d_{i-1}$ and $d_i$ are the layer's input and output sizes).

  2. $W_i^j$ – the $j$-th row of $W_i$.

  3. $\Delta W_i$ – the change of the weights of layer $i$ in the first step with respect to the gradient of the loss function (and $\Delta W_i^j$ – its $j$-th row).

For a network with a single linear layer, we define $W_1$ of size $2 \times d$ (for two classes). Without loss of generality, we focus on the randomly initialized weights $W_1^1$ (and not $W_1^2$) as the vector of weights related to the first output neuron. For an input space of size $d$, the size of the vector $W_1^1$ is $d$.

Claim 1

Let $x$ be a single training sample used for training a network with a single linear layer for a single step. Then there is a scalar $\alpha$ such that $\Delta W_1^1 = \alpha \cdot x$.
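
This property can be checked directly with automatic differentiation. Below is a minimal PyTorch sketch of such a check (our illustration, not the paper's code; the input size 3072 matches a flattened CIFAR10 image, and the sample and label are random stand-ins):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# A single randomly initialized linear layer for two classes,
# trained for one step on a single sample x.
W = torch.randn(2, 3072, requires_grad=True)   # W_1: one row of weights per class
x = torch.randn(1, 3072)                       # stand-in for one training sample
y = torch.tensor([0])                          # stand-in label

loss = F.nll_loss(F.log_softmax(x @ W.t(), dim=1), y)
loss.backward()

# The first row of the gradient (and hence of the SGD update, up to the
# learning rate) is a scalar multiple of x, so its cosine similarity to x is +-1.
print(F.cosine_similarity(W.grad[0], x[0], dim=0).item())
```

With the negative log-likelihood loss, the gradient with respect to $W_1$ is the outer product of the output error and $x$, so each of its rows is a scalar multiple of $x$.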

Claim 2

Given a network with a single linear layer with randomly initialized weights $W_1$, and a set of pairs $\{(x_k, \alpha_k)\}_{k=1}^{m}$ such that each pair corresponds to the proportionality property described in Claim 1 with respect to $W_1$, training the network with the entire batch $\{x_k\}_{k=1}^{m}$ for a single step (with the same initial weights $W_1$) will result in the following equality (up to the normalization of the batch loss): $\Delta W_1^1 = \sum_{k=1}^{m} \alpha_k \cdot x_k$.

Claim 3

For a deep linear network, the following statement applies (using the previous notation): for every layer $i$ and every pair of rows $j \neq l$, there is a scalar $\beta$ such that $\Delta W_i^j = \beta \cdot \Delta W_i^l$.

Corollary 1

For a deep linear network, $\operatorname{rank}(\Delta W_i) \leq 1$ for every layer $i$.

Claim 4

For a deep linear network, we get the following for any layer $i$ in the network and any step $t$ in the training process: for every pair of rows $j \neq l$, there is a scalar $\beta$ such that $\Delta W_i^j(t) = \beta \cdot \Delta W_i^l(t)$, where $\Delta W_i(t)$ denotes the weight update of layer $i$ at step $t$.
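
The claims for deep networks can be checked numerically in the same way. The sketch below (ours, not the paper's code; layer sizes, batch, and labels are illustrative stand-ins for the CIFAR10 setup) performs one backward pass of a deep bias-free linear network and measures the pairwise cosine similarities between the rows of each layer's gradient; magnitudes numerically equal to 1 mean the rows are proportional, i.e. the update has rank 1 (Claim 3 and Corollary 1):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A deep fully-connected linear network (no activations) for two classes.
net = nn.Sequential(nn.Linear(3072, 256, bias=False),
                    nn.Linear(256, 64, bias=False),
                    nn.Linear(64, 2, bias=False))

x = torch.randn(128, 3072)        # stand-in for a two-class CIFAR10 batch
y = torch.randint(0, 2, (128,))   # stand-in labels

loss = F.nll_loss(F.log_softmax(net(x), dim=1), y)
loss.backward()

for i, layer in enumerate(net):
    g = layer.weight.grad          # the update direction Delta W_i (up to -lr)
    rows = F.normalize(g, dim=1)   # unit-norm rows of the gradient
    cos = rows @ rows.t()          # pairwise cosine similarities between rows
    print(f"layer {i}: min |cos| between gradient rows = {cos.abs().min().item():.6f}")
```

In this two-class, negative log-likelihood setting the observation also has a simple interpretation: the backpropagated output error is a multiple of $(1, -1)$ for every sample, so each layer's batch gradient is an outer product and hence has rank 1.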

3 Experimental support

To measure how close the vectors are to being proportional, we compute the angle between each pair of vectors: an angle close to either $0°$ or $180°$ indicates proportionality (with a positive or a negative scalar, respectively). In addition, we use the negative log-likelihood loss function (a non-linear function). The experiments were conducted on a K80 GPU.
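
For concreteness, an angle of this kind can be computed as follows (a small sketch of our own, not the paper's code):

```python
import torch

def angle_degrees(u: torch.Tensor, v: torch.Tensor) -> float:
    """Angle in degrees between two weight-update vectors.

    Values near 0 or near 180 indicate that u and v are proportional
    (with a positive or a negative scalar, respectively). The vectors are
    normalized first, which can introduce small numerical errors when their
    norms are tiny (see the note at the end of this section).
    """
    u = u / u.norm()
    v = v / v.norm()
    cos = torch.clamp(torch.dot(u, v), -1.0, 1.0)
    return torch.rad2deg(torch.acos(cos)).item()
```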

Claim 1 (support)

For a single linear layer, we use the classes Cat and Dog of the dataset CIFAR10 [14]. For ten different initializations of a single linear layer, we randomly pick different samples and compute the angle between $\Delta W_1^1$ and the chosen sample. The mean value of the calculated angles is , and the standard deviation is .

As expected, each angle is very close to $0°$ or to $180°$ (up to numerical errors), which indicates that $\Delta W_1^1 = \alpha \cdot x$ for some sample $x$ and scalar $\alpha$.

Claim 2 (support)

For a batch size of and a linear network with a single layer, we get that the expression over ten initializations produces an average value of . It implies that the equality in Claim 2 indeed holds up to numerical errors.

Claim 3 (support)

In the same manner, for multiple layers in the network (using “Architecture B”, which appears in Appendix B) and a batch size of , we consider all possible combinations of $\Delta W_i^j$ and $\Delta W_i^l$ for $j \neq l$. For the angles above , we get an average angle of , and for the angles below , the average angle is , which supports the statement of Claim 3 that each pair of rows of $\Delta W_i$ is proportional.

Claim 4 (support)

For long-term training of a linear network with multiple layers, the following experiments show the average angle between the vectors $\Delta W_i^j(t)$ and $\Delta W_i^l(t)$ (over all pairs $j \neq l$) for each layer $i$ as a function of the step number $t$. Each analysis also includes a graph of the model's accuracy over the training steps.

We will use several linear architectures for the experiments. The architectures appear in Appendix B.

[Image: 3vs5_128_1e2_N4_50.png]

Figure 1: The top graph describes the average angle between $\Delta W_i^j(t)$ and $\Delta W_i^l(t)$ for each layer $i$ as a function of the step $t$. The bottom graph describes the model's accuracy.

In Figure 1, we can see the analysis for Cat versus Dog trained with a batch size of , a learning rate of , and “Architecture B” over steps. The first graph shows the average angle between each pair of vectors ($\Delta W_i^j(t)$ and $\Delta W_i^l(t)$ for $j \neq l$) as a function of the iteration $t$ (the step number). There are four plots in this graph, one for each of the four linear layers in the network. The presented angles are in degrees, and it is easy to spot that (up to a complement of $180°$) the angle stays below a single degree, which is extremely small. The second graph (below the first one) shows the accuracy of the same network. It implies that the angles are independent of the accuracy and, up to minor errors, remain very small in every phase of the training. Overall, this supports the claim that for any given step $t$ and layer $i$, every pair of rows of $\Delta W_i(t)$ is proportional (Claim 4).

An additional set of experiments with multiple different settings appears in Appendix A.

Note: when calculating the angles, we normalize the vectors. For vectors that have small norms in the first place, a numerical error might occur during the normalization; therefore, wider layers might show more significant errors. Overall, the error is tiny (the vertical scale of the first plot is in degrees) and usually below .

4 The optimization process

Following the above claims (Section 2), for each layer $i$, the update of the weights can be viewed as a collection of vectors $\{\beta_j \cdot \Delta W_i^1\}_j$ (for some set of scalars $\{\beta_j\}_j$) rather than as a collection of unrelated vectors $\{\Delta W_i^j\}_j$. In this case, the entire matrix $\Delta W_i$ depends on a single vector, up to scalar multiplication. Additionally, in the classic training process using stochastic gradient steps (SGD), this is true for any given step in the training process (and not only for the first step). In other words, each layer is updated with a weight matrix of rank $1$. The multiplication of these matrices does not reduce the expressiveness of the network.

In general, we can apply a simple reduction from the optimization process of a fully-connected deep linear network to the optimization process of a single fully-connected layer, as illustrated in Figure 2. Assuming that $\Delta W_i^j = \beta_j \cdot \Delta W_i^1$ for every row $j$, we can represent $\Delta W_i$ as the collection of vectors $\{\beta_j \cdot \Delta W_i^1\}_j$. Since $\Delta W_i$ depends on a single vector, up to scalar multiplication, the same update of the weights could have been performed by a single neuron (which also depends on a single vector of weights of the same size). Extending this conclusion to each step and layer in the original network (excluding the output layer), we get a similar optimization process for a fully-connected linear network with only a single neuron in each hidden layer. In the case of a single neuron in each hidden layer, we get a network that does not use its depth, since the weights of each hidden layer reduce to a scalar. Therefore, during the training process the network can be represented as a single linear layer, which is an equivalent learner to any deep fully-connected linear architecture.

[Image: redan2.png]

Figure 2: Visualization of the equivalence of the weight updates, transitioning from a deep linear network to a single linear layer.

This section concludes that the training process of a randomly initialized fully-connected deep linear network is experimentally equivalent to that of a randomly initialized linear network with a single layer.
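
Since a composition of linear layers is itself a single linear map, the final collapse in this reduction can be made explicit. The following minimal PyTorch sketch (illustrative layer sizes, bias-free layers; ours, not the paper's code) multiplies the weight matrices of a deep linear network into a single equivalent layer:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A deep bias-free fully-connected linear network and the single layer obtained
# by multiplying its weight matrices together.
deep = nn.Sequential(nn.Linear(3072, 256, bias=False),
                     nn.Linear(256, 64, bias=False),
                     nn.Linear(64, 2, bias=False))

single = nn.Linear(3072, 2, bias=False)
with torch.no_grad():
    # End-to-end linear map W_3 W_2 W_1, of shape (2, 3072).
    single.weight.copy_(deep[2].weight @ deep[1].weight @ deep[0].weight)

x = torch.randn(16, 3072)
# Both models compute the same linear map, up to floating-point error.
print(torch.allclose(deep(x), single(x), atol=1e-4))  # True
```

This only restates the expressiveness equivalence; the claim of this section is the stronger one, namely that the optimization processes themselves are experimentally equivalent.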

5 SGD with Momentum

In many cases, the SGD optimizer is used with momentum [24] to acquire convergence advantages. For such an optimizer, the optimization process has a memory of the weight updates of previous steps. The momentum algorithm is conducted as follows:

$v_t = \gamma \cdot v_{t-1} + g_t, \qquad W_t = W_{t-1} - \eta \cdot v_t, \qquad g_t = \nabla_W L(W_{t-1}).$

Here $t$ is the iteration index, $\eta$ is the learning rate, and $\gamma$ is the momentum factor. For an iterative representation of $v_t$ (taking $v_0 = 0$), we get:

$v_t = \sum_{k=1}^{t} \gamma^{t-k} \cdot g_k.$

Assume a new optimization method with a predefined number of steps (in our case, $T$ steps), using the traditional SGD with a gentle twist:

$W_t = W_{t-1} - \eta_t \cdot g_t.$

In the new variant, $\eta_t$ is an iteration-dependent scalar. Proportional vectors remain proportional after scalar multiplication; therefore, the properties mentioned in Section 2 and supported in Section 3 apply to this variant of SGD as well. Moreover, we get the following equality:

$\sum_{t=1}^{T} \eta \cdot v_t \;=\; \sum_{k=1}^{T} \Big( \eta \sum_{t=k}^{T} \gamma^{t-k} \Big) g_k \;=\; \sum_{k=1}^{T} \eta_k \cdot g_k, \qquad \eta_k = \eta \cdot \frac{1-\gamma^{T-k+1}}{1-\gamma}.$

The above equality shows that taking $T$ steps using SGD with momentum produces the same state as the proposed optimization method. Additionally, both methods (SGD with momentum and the new method defined above) use the same matrices to reach that state (up to scalar multiplications). This implies similar expressive abilities, and therefore both processes are equivalent in that sense.
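
The regrouping of the accumulated momentum update into a plain-SGD sum with step-dependent scalars can be checked for any fixed sequence of gradients. A minimal NumPy sketch (ours; random matrices stand in for the per-step gradient matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, lr, gamma = 10, 0.1, 0.9

# A fixed sequence of per-step gradient matrices (random stand-ins).
grads = [rng.standard_normal((4, 3)) for _ in range(T)]

# Heavy-ball SGD with momentum: v_t = gamma * v_{t-1} + g_t, W <- W - lr * v_t.
v = np.zeros((4, 3))
total_momentum_update = np.zeros((4, 3))
for g in grads:
    v = gamma * v + g
    total_momentum_update += lr * v

# The same total update, written as plain SGD with step-dependent scalars
# eta_k = lr * (1 + gamma + ... + gamma^(T - k)).
total_rescaled_update = np.zeros((4, 3))
for k, g in enumerate(grads, start=1):
    eta_k = lr * (1.0 - gamma ** (T - k + 1)) / (1.0 - gamma)
    total_rescaled_update += eta_k * g

print(np.allclose(total_momentum_update, total_rescaled_update))  # True
```

Since each per-step gradient matrix is only rescaled, the rank-1 structure of the updates discussed in Section 4 is preserved.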

Overall, SGD with momentum can be represented as the new variant of SGD proposed above. As explained earlier in this section, the new variant of SGD has an optimization process equivalent to that of a single layer. Therefore, the optimization process of SGD with momentum is equivalent to that of a single fully-connected layer.

6 Summary and open problems

We experimentally demonstrated that the derivatives of the weights (represented as vectors) are proportional within the same iteration and the same layer when training a deep fully-connected linear network with conventional optimizers. Provided with this experimental outcome, we concluded that a deep fully-connected linear network trained with conventional optimizers has an optimization process equivalent to that of a single fully-connected layer.

Our experiments are valid solely for fully-connected networks. Intuitively, the proportionality between two weight vectors arises because all neurons of a layer observe the same input during training. Even though convolutional layers observe local regions individually (rather than the entire input simultaneously), there are still dependencies between nearby areas of the image and between different filters applied to the same region. Therefore, deep linear convolutional neural networks probably do not have a convex optimization process; however, these models might still exhibit interesting optimization constraints. Testing this hypothesis in future work might equip us with more knowledge regarding linear networks in particular and convolutional behavior in general.

References

  • [1] S. Arora, N. Cohen, and E. Hazan (2018) On the optimization of deep networks: implicit acceleration by overparameterization. External Links: 1802.06509 Cited by: §1.
  • [2] D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §1.
  • [3] H. Bourlard and Y. Kamp (1988) Auto-association by multilayer perceptrons and singular value decomposition. Biological cybernetics 59 (4), pp. 291–294. Cited by: §1.
  • [4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. Advances in neural information processing systems 33, pp. 1877–1901. Cited by: §1.
  • [5] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1.
  • [6] R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin (2008) LIBLINEAR: a library for large linear classification. The Journal of Machine Learning Research 9, pp. 1871–1874. Cited by: §1.
  • [7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. Advances in neural information processing systems 27. Cited by: §1.
  • [8] B. Hanin and Y. Sun (2021) How data augmentation affects optimization for linear regression. Advances in Neural Information Processing Systems 34. Cited by: §1.
  • [9] M. Hardt and T. Ma (2018) Identity matters in deep learning. External Links: 1611.04231 Cited by: §1.
  • [10] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §1.
  • [11] G. E. Hinton, P. Dayan, and M. Revow (1997) Modeling the manifolds of images of handwritten digits. IEEE transactions on Neural Networks 8 (1), pp. 65–74. Cited by: §1.
  • [12] K. Kawaguchi (2016) Deep learning without poor local minima. External Links: 1605.07110 Cited by: §1.
  • [13] E. Kodirov, T. Xiang, and S. Gong (2017) Semantic autoencoder for zero-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3174–3183. Cited by: §1.
  • [14] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §3.
  • [15] Z. Lu, H. Pu, F. Wang, Z. Hu, and L. Wang (2017) The expressive power of neural networks: a view from the width. Advances in neural information processing systems 30. Cited by: §1.
  • [16] Q. Meng, D. Catchpoole, D. Skillicorn, and P. J. Kennedy (2017) Relational autoencoder for feature extraction. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 364–371. Cited by: §1.
  • [17] M. J. Murphy and S. Dieterich (2006) Comparative performance of linear and nonlinear neural networks to predict irregular breathing. Physics in Medicine & Biology 51 (22), pp. 5903. Cited by: §1.
  • [18] A. Pretorius, S. Kroon, and H. Kamper (2018) Learning dynamics of linear denoising autoencoders. In International Conference on Machine Learning, pp. 4141–4150. Cited by: §1.
  • [19] M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. Sohl-Dickstein (2017) On the expressive power of deep neural networks. In international conference on machine learning, pp. 2847–2854. Cited by: §1.
  • [20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §1.
  • [21] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. Advances in neural information processing systems 28. Cited by: §1.
  • [22] A. M. Saxe, J. L. McClelland, and S. Ganguli (2014) Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. External Links: 1312.6120 Cited by: §1.
  • [23] J. C. Spall (2005) Introduction to stochastic search and optimization: estimation, simulation, and control. Vol. 65, John Wiley & Sons. Cited by: §1.
  • [24] I. Sutskever, J. Martens, G. Dahl, and G. Hinton (2013) On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, S. Dasgupta and D. McAllester (Eds.), Proceedings of Machine Learning Research, Vol. 28, Atlanta, Georgia, USA, pp. 1139–1147. External Links: Link Cited by: §1, §5.
  • [25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. Advances in neural information processing systems 30. Cited by: §1.
  • [26] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis (2018) Deep learning for computer vision: a brief review. Computational intelligence and neuroscience 2018. Cited by: §1.
  • [27] W. Wang, Y. Huang, Y. Wang, and L. Wang (2014) Generalized autoencoder: a neural network framework for dimensionality reduction. In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Vol. , pp. 496–503. External Links: Document Cited by: §1.
  • [28] S. Weisberg (2005) Applied linear regression. Vol. 528, John Wiley & Sons. Cited by: §1.
  • [29] T. Zhang and F. J. Oles (2001) Text categorization based on regularized linear classification methods. Information retrieval 4 (1), pp. 5–31. Cited by: §1.

Appendix A Additional experiments

In Figures 3–7, we can see the graph analysis of various cases with various learning rates, batch sizes, architectures (see Appendix B), categories (of CIFAR10), and training lengths.

[Image: 3vs5_128_1e2_N4.png]

Figure 3: Cat versus Dog - a batch size of , a learning rate of , and Architecture B trained for steps.

[Image: 8vs9_256_1e4_N2_50.png]

Figure 4: Ship versus Truck - a batch size of , a learning rate of , and Architecture A trained for steps.

[Image: 8vs9_256_1e4_N2.png]

Figure 5: Ship versus Truck - a batch size of , a learning rate of , and Architecture A trained for steps.

[Image: 0vs1_64_1e1_N8_50.png]

Figure 6: Airplane versus Automobile - a batch size of , a learning rate of , and Architecture C trained for steps.

[Image: 0vs1_64_1e1_N8.png]

Figure 7: Airplane versus Automobile - a batch size of , a learning rate of , and Architecture C trained for steps.

Appendix B Architectures

Architecture A:

Two linear layers.

  • Fully-connected layer [input: 3072, output: 128]

  • Fully-connected layer [input: 128, output: 2]

Architecture B:

Four linear layers.

  • Fully-connected layer [input: 3072, output: 2048]

  • Fully-connected layer [input: 2048, output: 1024]

  • Fully-connected layer [input: 1024, output: 128]

  • Fully-connected layer [input: 128, output: 2]

Architecture C:

Eight linear layers.

  • Fully-connected layer [input: 3072, output: 2500]

  • Fully-connected layer [input: 2500, output: 1500]

  • Fully-connected layer [input: 1500, output: 1024]

  • Fully-connected layer [input: 1024, output: 512]

  • Fully-connected layer [input: 512, output: 256]

  • Fully-connected layer [input: 256, output: 64]

  • Fully-connected layer [input: 64, output: 16]

  • Fully-connected layer [input: 16, output: 2]
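
For reference, the three architectures can be written as PyTorch modules. This is an illustrative reconstruction from the layer sizes listed above; details such as bias usage are not specified in the text:

```python
import torch.nn as nn

# Fully-connected linear architectures (no activation functions).
architecture_a = nn.Sequential(
    nn.Linear(3072, 128),
    nn.Linear(128, 2),
)

architecture_b = nn.Sequential(
    nn.Linear(3072, 2048),
    nn.Linear(2048, 1024),
    nn.Linear(1024, 128),
    nn.Linear(128, 2),
)

architecture_c = nn.Sequential(
    nn.Linear(3072, 2500),
    nn.Linear(2500, 1500),
    nn.Linear(1500, 1024),
    nn.Linear(1024, 512),
    nn.Linear(512, 256),
    nn.Linear(256, 64),
    nn.Linear(64, 16),
    nn.Linear(16, 2),
)
```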