Understanding Generalization in Deep Learning via Tensor Methods

01/14/2020
by Jingling Li, et al.

Deep neural networks generalize well on unseen data even though the number of parameters often far exceeds the number of training examples. Recently proposed complexity measures have provided insight into the generalization of neural networks from the perspectives of PAC-Bayes, robustness, overparametrization, compression, and more. In this work, we advance the understanding of the relation between a network's architecture and its generalizability from the compression perspective. Using tensor analysis, we propose a series of intuitive, data-dependent, and easily measurable properties that tightly characterize the compressibility and generalizability of neural networks; as a result, in practice our generalization bound outperforms previous compression-based bounds, especially for neural networks that use tensors as their weight kernels (e.g., CNNs). Moreover, these intuitive measurements offer further guidance for designing neural network architectures with properties favorable for better or guaranteed generalizability. Our experimental results demonstrate that, through the proposed measurable properties, our generalization error bound matches the trend of the test error well. Our theoretical analysis also justifies the empirical success, and clarifies the limitations, of some widely used tensor-based compression approaches. Finally, we find that incorporating tensor operations via our proposed layer-wise structure improves the compressibility and robustness of current neural networks.
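To make the compression perspective concrete, the following minimal sketch (a hypothetical illustration, not the paper's actual measure) matricizes a CNN weight kernel and checks how well a low-rank truncation approximates it; a small relative error at a small rank signals a highly compressible layer. The kernel shape, the chosen rank, and the NumPy-based truncated SVD are all assumptions made for illustration.

    # Hypothetical sketch: gauge the compressibility of a conv kernel via
    # the error of a low-rank approximation. Shapes and the rank below are
    # illustrative assumptions, not values taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 32, 3, 3))   # (out_ch, in_ch, kh, kw)

    # Matricize the 4-D kernel along the output-channel mode.
    M = W.reshape(W.shape[0], -1)             # 64 x 288 matrix

    U, s, Vt = np.linalg.svd(M, full_matrices=False)

    rank = 8                                  # assumed compression rank
    M_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    rel_err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
    n_params = rank * (M.shape[0] + M.shape[1])
    print(f"rank {rank}: relative error {rel_err:.3f}, "
          f"params {n_params} vs. {M.size} original")

A random kernel, as here, is close to full rank and compresses poorly; a trained kernel whose spectrum decays quickly would yield a small relative error at low rank, which is the kind of data-dependent compressibility property the abstract refers to.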

Related research

03/05/2022  How to Train Unstable Looped Tensor Network
10/02/2019  Towards Unifying Neural Architecture Space Exploration and Generalization
06/29/2020  Hybrid Tensor Decomposition in Neural Network Compression
04/15/2018  Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
04/16/2018  Compressibility and Generalization in Large-Scale Deep Learning
07/02/2021  Neural Network Layer Algebra: A Framework to Measure Capacity and Compression in Deep Learning
02/14/2018  Stronger generalization bounds for deep nets via a compression approach
