
A neural anisotropic view of underspecification in deep learning

The underspecification of most machine learning pipelines means that we cannot rely solely on validation performance to assess the robustness of deep learning systems to naturally occurring distribution shifts. Instead, ensuring that a neural network generalizes across a large number of different situations requires understanding the specific way in which it solves a task. In this work, we study this problem from a geometric perspective, with the aim of understanding two key characteristics of neural network solutions in underspecified settings: how is the geometry of the learned function related to the data representation? And are deep networks always biased towards simpler solutions, as conjectured in recent literature? We show that the way neural networks handle the underspecification of these problems is highly dependent on the data representation, which affects both the geometry and the complexity of the learned predictors. Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
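To make the abstract's central claim concrete, here is a minimal, hypothetical sketch (not from the paper): in the simplest underspecified setting, two perfectly redundant features admit many zero-error predictors, and which predictor gradient descent converges to depends entirely on how the data is represented, here a mere rescaling of one feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)

# Two perfectly redundant features: each one alone predicts the label.
# Any weighting of them is a zero-error solution -> the task is underspecified.
x1 = y * 2.0 - 1.0
x2 = y * 2.0 - 1.0
X = np.stack([x1, x2], axis=1)

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient descent on logistic loss, starting from zero weights."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Symmetric representation: gradient descent treats both features identically.
w_iso = train_logreg(X, y)

# Rescale feature 1 only -- same information, different representation.
X_scaled = X * np.array([10.0, 1.0])
w_scaled = train_logreg(X_scaled, y)

# Both predictors are perfectly accurate, but the scaled representation makes
# gradient descent rely almost entirely on feature 1 (effective weight 10*w1).
print("symmetric weights:", w_iso)
print("scaled weights:   ", w_scaled)
```

Both runs reach the same (perfect) validation accuracy, so validation performance cannot distinguish them; yet the learned functions differ in which feature they depend on, which is exactly the representation-dependence the abstract describes.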


Related research

Geometrization of deep networks for the interpretability of deep learning systems (01/06/2019)
How to understand deep learning systems remains an open problem. In this...

How deep learning works --The geometry of deep learning (10/30/2017)
Why and how that deep learning works well on different tasks remains a m...

Deep network as memory space: complexity, generalization, disentangled representation and interpretability (07/12/2019)
By bridging deep networks and physics, the programme of geometrization o...

Understanding over-parameterized deep networks by geometrization (02/11/2019)
A complete understanding of the widely used over-parameterized deep netw...

Robustness and invariance properties of image classifiers (08/30/2022)
Deep neural networks have achieved impressive results in many image clas...

Robust Generalization of Quadratic Neural Networks via Function Identification (09/22/2021)
A key challenge facing deep learning is that neural networks are often n...

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks (10/04/2022)
Deep Neural Networks are known to be brittle to even minor distribution ...