The Dynamics of Differential Learning I: Information-Dynamics and Task Reachability

10/04/2018
by Alessandro Achille, et al.

We study the topology of the space of learning tasks, which is critical to understanding transfer learning, whereby a model such as a deep neural network is pre-trained on one task and then used on a different one after some fine-tuning. First, we show that the Kolmogorov structure function can be used to define a distance between tasks that is independent of any particular model and, empirically, correlates with the semantic similarity between tasks. Then, using a path integral approximation, we show that this distance plays a central role in the learning dynamics of deep networks, and in particular in the reachability of one task from another. We show that the probability of paths connecting two tasks is asymmetric, with a static component that depends on the geometry of the loss function, in particular on its curvature, and a dynamic component that is model-dependent and relates to the ease of traversing such paths. Surprisingly, the static component corresponds to the distance derived from the Kolmogorov structure function. Together with the dynamic component, this yields strict lower bounds on the complexity needed to learn a task starting from the solution to another. Our analysis also explains more complex phenomena in which semantically similar tasks may be unreachable from one another, a phenomenon, called Information Plasticity, that has been observed in diverse learning systems such as animals and deep neural networks.
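For readers unfamiliar with the first ingredient: the Kolmogorov structure function invoked above is the classical one of Kolmogorov (see also Vereshchagin and Vitanyi). For a string x, here a dataset, and a complexity budget t, it is

\[ h_x(t) \;=\; \min \{\, \log |S| \;:\; x \in S,\; K(S) \le t \,\}, \]

where K(S) is the Kolmogorov complexity of the finite set S. It trades off the complexity t of a "model" set against the residual randomness log |S| of x within that model. The task distance discussed in the abstract is built on this function; the display here is only the standard definition it starts from, not the paper's own construction.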
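The static/dynamic decomposition of the path probability has the same shape as the Onsager-Machlup action that arises when SGD is idealized as Langevin dynamics dw = -grad L(w) dt + sqrt(2D) dB. As a sketch under that idealization (isotropic noise with a constant temperature-like parameter D is an assumption here, not necessarily the paper's exact model), the probability of a training trajectory w(t), 0 <= t <= T, is

\[ P[w(\cdot)] \;\propto\; \exp\!\left( -\frac{1}{4D} \int_0^T \big\| \dot w(t) + \nabla L(w(t)) \big\|^2 \, dt \right), \]

up to an additive curvature correction involving \( \int_0^T \Delta L(w(t)) \, dt \) in the full Onsager-Machlup functional. Expanding the square, the cross term integrates exactly to \( \tfrac{1}{2D}\big( L(w(T)) - L(w(0)) \big) \), a static quantity fixed by the endpoints and the geometry of the loss, while the remaining term \( \int_0^T \big( \|\dot w\|^2 + \|\nabla L(w)\|^2 \big) \, dt \) depends on the whole trajectory and measures how hard the path is to traverse. This mirrors the static and dynamic components described in the abstract, though the paper's precise functional should be taken from the paper itself.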


Related research

- The Information Complexity of Learning Tasks, their Structure and their Distance (04/05/2019): We introduce an asymmetric distance in the space of learning tasks, and ...
- Distance-Based Regularisation of Deep Networks for Fine-Tuning (02/19/2020): We investigate approaches to regularisation during fine-tuning of deep n...
- An analytic theory of generalization dynamics and transfer learning in deep linear networks (09/27/2018): Much attention has been devoted recently to the generalization puzzle in...
- Deep frequency principle towards understanding why deeper learning is faster (07/28/2020): Understanding the effect of depth in deep learning is a critical problem...
- Solving hybrid machine learning tasks by traversing weight space geodesics (06/05/2021): Machine learning problems have an intrinsic geometric structure as centr...
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks (01/30/2017): For artificial general intelligence (AGI) it would be efficient if multi...
- A picture of the space of typical learnable tasks (10/31/2022): We develop a technique to analyze representations learned by deep networ...
