The Dynamics of Differential Learning I: Information-Dynamics and Task Reachability

10/04/2018
by Alessandro Achille, et al.

We study the topology of the space of learning tasks, which is critical to understanding transfer learning, whereby a model such as a deep neural network is pre-trained on one task and then fine-tuned on a different one. First, we show that the Kolmogorov structure function can be used to define a distance between tasks that is independent of any particular model and, empirically, correlates with the semantic similarity between tasks. Then, using a path integral approximation, we show that this distance plays a central role in the learning dynamics of deep networks, and in particular in the reachability of one task from another. We show that the probability of paths connecting two tasks is asymmetric and has a static component, which depends on the geometry of the loss function (in particular on its curvature), and a dynamic component, which is model-dependent and relates to the ease of traversing such paths. Surprisingly, the static component corresponds to the distance derived from the Kolmogorov structure function. Together with the dynamic component, this yields strict lower bounds on the complexity necessary to learn a task starting from the solution to another. Our analysis also explains more complex phenomena whereby semantically similar tasks may be unreachable from one another, a phenomenon called Information Plasticity that has been observed in learning systems as diverse as animals and deep neural networks.
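The structure-function distance is uncomputable in general (the classical Kolmogorov structure function is h_x(t) = min{ log |S| : x ∈ S, K(S) ≤ t }), so any concrete illustration must substitute a computable proxy. The sketch below is not the paper's method: it uses the Normalized Compression Distance (a standard compression-based surrogate for Kolmogorov-style distances) with zlib as the compressor, and hypothetical label sequences as stand-ins for task datasets.

import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    # where C(.) is the compressed length of a byte string.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical "tasks": label sequences over a shared input stream.
rng = random.Random(0)
task_a = bytes(i % 10 for i in range(1000))         # 10-way labels
task_b = bytes((i % 10) // 5 for i in range(1000))  # coarsened 2-way version of A
task_c = bytes(rng.randrange(10) for _ in range(1000))  # unrelated random labels

print("d(A, B) =", ncd(task_a, task_b))  # related tasks: typically smaller
print("d(A, C) =", ncd(task_a, task_c))  # unrelated tasks: typically larger

Note that this compression proxy is symmetric, whereas the path probabilities studied in the paper are asymmetric; the sketch only illustrates the model-independent, static notion of task similarity, not the dynamic reachability analysis.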
