Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth

10/29/2020
by   Thao Nguyen, et al.

A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of the effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model hidden representations, finding a characteristic block structure in the hidden representations of larger-capacity (wider or deeper) models. We demonstrate that this block structure arises when model capacity is large relative to the size of the training set, and that it reflects the underlying layers preserving and propagating the dominant principal component of their representations. This discovery has important ramifications for the features learned by different models: representations outside the block structure are often similar across architectures of varying widths and depths, but the block structure is unique to each model. We also analyze the output predictions of different model architectures, finding that even when overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes.
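For readers who want a concrete handle on the two quantities the abstract mentions, below is a minimal NumPy sketch (not the paper's released code) of linear centered kernel alignment (CKA), the kind of representation-similarity measure used in this line of work, together with a helper that measures how dominant a layer's first principal component is. The function names and toy data are illustrative assumptions.

```python
import numpy as np

def linear_cka(x, y):
    """Linear centered kernel alignment (CKA) between two sets of activations.

    x: array of shape (n_examples, features_1), e.g. one layer's activations.
    y: array of shape (n_examples, features_2), another layer or model.
    Returns a similarity score in [0, 1]; higher means more similar.
    """
    x = x - x.mean(axis=0, keepdims=True)  # center each feature dimension
    y = y - y.mean(axis=0, keepdims=True)
    # ||y^T x||_F^2 / (||x^T x||_F * ||y^T y||_F)
    numerator = np.linalg.norm(y.T @ x) ** 2
    denominator = np.linalg.norm(x.T @ x) * np.linalg.norm(y.T @ y)
    return numerator / denominator

def top_pc_variance_fraction(x):
    """Fraction of total variance captured by the first principal component.

    A value near 1 indicates a single dominant direction, the signature
    the paper associates with block-structure layers.
    """
    x = x - x.mean(axis=0, keepdims=True)
    singular_values = np.linalg.svd(x, compute_uv=False)
    return singular_values[0] ** 2 / np.sum(singular_values ** 2)

# Toy usage: compare two hypothetical layers on the same batch of inputs.
rng = np.random.default_rng(0)
layer_a = rng.normal(size=(512, 64))
layer_b = layer_a @ rng.normal(size=(64, 128))  # linear transform of layer_a
print(linear_cka(layer_a, layer_b))        # high: the representations overlap
print(top_pc_variance_fraction(layer_a))   # low: no dominant component here
```

Computing `linear_cka` over all pairs of layers in a model and plotting the result as a heatmap is how the block structure described above becomes visible: a contiguous square of highly similar layers.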


Related research

02/15/2022
On the Origins of the Block Structure Phenomenon in Neural Network Representations
Recent work has uncovered a striking phenomenon in large-capacity neural...

05/18/2017
Building effective deep neural network architectures one feature at a time
Successful training of convolutional neural networks is often associated...

06/11/2021
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Large width limits have been a recent focus of deep learning research: m...

02/28/2022
How and what to learn: The modes of machine learning
We propose a new approach, namely the weight pathway analysis (WPA), to...

09/03/2020
Error estimate for a universal function approximator of ReLU network with a local connection
Neural networks have shown highly successful performance in a wide range o...

06/18/2021
The Principles of Deep Learning Theory
This book develops an effective theory approach to understanding deep ne...

11/25/2022
The smooth output assumption, and why deep networks are better than wide ones
When several models have similar training scores, classical model select...
