Theoretical Analysis of Inductive Biases in Deep Convolutional Networks

05/15/2023
by Zihao Wang, et al.

In this paper, we study the inductive biases of convolutional neural networks (CNNs), which are believed to be vital drivers behind CNNs' exceptional performance on vision tasks. We first analyze the universality of CNNs, i.e., their ability to approximate arbitrary continuous functions. We prove that a depth of 𝒪(log d) is sufficient for universality, where d is the input dimension; this is a significant improvement over existing results that require a depth of Ω(d). We also prove that learning sparse functions with CNNs needs only 𝒪̃(log² d) samples, indicating that deep CNNs can efficiently capture long-range sparse correlations. Both results are achieved through a novel combination of increased network depth with multi-channeling and downsampling. Lastly, we study the inductive biases of weight sharing and locality through the lens of symmetry. To separate these two biases, we introduce locally-connected networks (LCNs), which can be viewed as CNNs without weight sharing, and we compare the performance of CNNs, LCNs, and fully-connected networks (FCNs) on a simple regression task. We prove that LCNs require Ω(d) samples while CNNs need only 𝒪̃(log² d) samples, highlighting the crucial role of weight sharing. We also prove that FCNs require Ω(d²) samples while LCNs need only 𝒪̃(d) samples, demonstrating the importance of locality. These provable separations quantify the difference between the two biases; the key observation behind them is that weight sharing and locality break different symmetries in the learning process.
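To make the architectural distinctions concrete, the following is a minimal PyTorch sketch (not the paper's construction) contrasting the three model families: a CNN whose stride-2 downsampling yields depth roughly log₂ d, a locally-connected layer that keeps locality but drops weight sharing, and an FCN with neither bias. The class name LocallyConnected1d, the channel widths, and the kernel sizes are illustrative assumptions, not choices made in the paper.

```python
# Illustrative sketch only: CNN vs. LCN vs. FCN on 1-D inputs of dimension d.
import torch
import torch.nn as nn


class LocallyConnected1d(nn.Module):
    """Locality without weight sharing: every output position has its own filter."""
    def __init__(self, in_channels, out_channels, num_positions, kernel_size):
        super().__init__()
        # One independent weight tensor per output position (no sharing across positions).
        self.weight = nn.Parameter(
            torch.randn(num_positions, out_channels, in_channels * kernel_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(num_positions, out_channels))
        self.kernel_size = kernel_size

    def forward(self, x):  # x: (batch, in_channels, length)
        # Non-overlapping local patches of size kernel_size.
        patches = x.unfold(dimension=2, size=self.kernel_size, step=self.kernel_size)
        # patches: (batch, in_channels, num_positions, kernel_size)
        patches = patches.permute(0, 2, 1, 3).flatten(2)  # (batch, positions, in*k)
        out = torch.einsum("bpi,poi->bpo", patches, self.weight) + self.bias
        return out.permute(0, 2, 1)  # (batch, out_channels, num_positions)


d = 64  # input dimension; a power of 2 so repeated halving reaches length 1

# CNN: locality + weight sharing. Stride-2 convolutions halve the length each
# layer, so the number of layers grows like log2(d) rather than d.
cnn_layers, channels, length = [], 1, d
while length > 1:
    cnn_layers += [nn.Conv1d(channels, 16, kernel_size=2, stride=2), nn.ReLU()]
    channels, length = 16, length // 2
cnn = nn.Sequential(*cnn_layers, nn.Flatten(), nn.Linear(16, 1))

# LCN: locality only (one locally-connected layer shown; weights differ per position).
lcn = nn.Sequential(
    LocallyConnected1d(1, 16, num_positions=d // 2, kernel_size=2),
    nn.ReLU(), nn.Flatten(), nn.Linear(16 * (d // 2), 1),
)

# FCN: neither locality nor weight sharing.
fcn = nn.Sequential(nn.Flatten(), nn.Linear(d, 16 * d), nn.ReLU(), nn.Linear(16 * d, 1))

x = torch.randn(8, 1, d)
print(cnn(x).shape, lcn(x).shape, fcn(x).shape)  # each: torch.Size([8, 1])
```

In this sketch the CNN reuses one filter bank across all positions, the LCN stores a separate filter per position, and the FCN connects every input coordinate to every hidden unit; these are exactly the symmetry-breaking differences the sample-complexity separations above are about.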
