In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning

12/20/2014
by Behnam Neyshabur, et al.

We present experiments demonstrating that some other form of capacity control, different from network size, plays a central role in learning multilayer feed-forward networks. We argue, partially through analogy to matrix factorization, that this is an inductive bias that can help shed light on deep learning.
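The kind of experiment this abstract points to grows the hidden-layer size well past what is needed to fit the training data and checks whether test error degrades. Below is a minimal sketch of such an experiment, not the authors' code: it assumes PyTorch, MNIST, a single-hidden-layer ReLU network, and illustrative hyperparameters (plain SGD with momentum and no explicit weight decay, so any effective capacity control comes from the training procedure rather than from an explicit regularizer).

```python
# Hypothetical sketch (not the authors' code): train one-hidden-layer MLPs of
# increasing width on MNIST and record train/test error. Dataset, optimizer
# settings, and the list of widths are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_loaders(batch_size=128):
    tfm = transforms.ToTensor()
    train = datasets.MNIST("data", train=True, download=True, transform=tfm)
    test = datasets.MNIST("data", train=False, download=True, transform=tfm)
    return (DataLoader(train, batch_size=batch_size, shuffle=True),
            DataLoader(test, batch_size=256))

def error_rate(model, loader, device):
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.view(x.size(0), -1).to(device), y.to(device)
            wrong += (model(x).argmax(1) != y).sum().item()
            total += y.size(0)
    return wrong / total

def run(width, epochs=10, device="cpu"):
    train_loader, test_loader = make_loaders()
    # Single hidden layer; width is the only thing varied across runs.
    model = nn.Sequential(nn.Linear(784, width), nn.ReLU(),
                          nn.Linear(width, 10)).to(device)
    # No weight decay: any capacity control is implicit in the optimization.
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.view(x.size(0), -1).to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return error_rate(model, train_loader, device), error_rate(model, test_loader, device)

if __name__ == "__main__":
    # Widths grow far beyond what is needed to fit the training data.
    for h in [32, 128, 512, 2048, 8192]:
        tr, te = run(h)
        print(f"width={h:5d}  train error={tr:.4f}  test error={te:.4f}")
```

If the abstract's thesis holds, the printed test error should not deteriorate as width grows, even though the larger networks can easily fit the training set; whatever is controlling capacity is something other than network size.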

Related research

10/23/2021 - Rethinking Neural vs. Matrix-Factorization Collaborative Filtering: the Theoretical Perspectives
The recent work by Rendle et al. (2020), based on empirical observations...

07/06/2023 - PUFFIN: A Path-Unifying Feed-Forward Interfaced Network for Vapor Pressure Prediction
Accurately predicting vapor pressure is vital for various industrial and...

12/04/2018 - Feed-Forward Neural Networks Need Inductive Bias to Learn Equality Relations
Basic binary relations such as equality and inequality are fundamental t...

10/07/2021 - DeepECMP: Predicting Extracellular Matrix Proteins using Deep Learning
Introduction: The extracellular matrix (ECM) is a network of proteins and...

06/22/2023 - The Inductive Bias of Flatness Regularization for Deep Matrix Factorization
Recent works on over-parameterized neural networks have shown that the s...

06/10/2022 - Intrinsic dimensionality and generalization properties of the ℛ-norm inductive bias
We study the structural and statistical properties of ℛ-norm minimizing ...

06/23/2023 - Scaling MLPs: A Tale of Inductive Bias
In this work we revisit the most fundamental building block in deep lear...
