
Can we avoid Double Descent in Deep Neural Networks?

by Victor Quétu, et al.

Finding the optimal size of deep learning models is a timely question of broad impact, especially for energy-saving schemes. Recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community: as a model's size grows, its performance first worsens and then improves again. This raises serious questions about the optimal model size for high generalization: the model needs to be sufficiently over-parametrized, but adding too many parameters wastes training resources. Is it possible to find the best trade-off efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, though a definitive answer remains open. We empirically observe that, with proper regularization, there is hope of dodging the double descent in complex scenarios: a simple ℓ_2 penalty already contributes positively to this goal.




The Quest of Finding the Antidote to Sparse Double Descent

In energy-efficient schemes, finding the optimal size of deep learning m...

Geometric Regularization from Overparameterization explains Double Descent and other findings

The volume of the distribution of possible weight configurations associa...

Sparse Double Descent in Vision Transformers: real or phantom threat?

Vision transformers (ViT) have been of broad interest in recent theoreti...

Dropout Drops Double Descent

In this paper, we find and analyze that we can easily drop the double de...

Unifying Grokking and Double Descent

A principled understanding of generalization in deep learning may requir...

VC Theoretical Explanation of Double Descent

There has been growing interest in generalization performance of large m...

Dodging the Sparse Double Descent

This paper presents an approach to addressing the issue of over-parametr...