Solving hybrid machine learning tasks by traversing weight space geodesics

06/05/2021
by   Guruprasad Raghavan, et al.

Machine learning problems have an intrinsic geometric structure: central objects, including a neural network's weight space and the loss function associated with a particular task, can be viewed as encoding the intrinsic geometry of a given machine learning problem. Geometric concepts can therefore be applied to analyze and understand theoretical properties of machine learning strategies, as well as to develop new algorithms. In this paper, we address three seemingly unrelated open questions in machine learning by viewing them through a unified framework grounded in differential geometry. Specifically, we view the weight space of a neural network as a manifold endowed with a Riemannian metric that encodes performance on specific tasks. Given such a metric, we can construct geodesic (minimum-length) paths in weight space that represent sets of networks with equivalent or near-equivalent functional performance on a specific task. We then traverse geodesic paths while identifying networks that satisfy a second objective. Guided by this geometric insight, we apply our geodesic framework to three major applications: (i) network sparsification; (ii) mitigating catastrophic forgetting by constructing networks with high performance on a series of objectives; and (iii) finding high-accuracy paths connecting distinct local optima of deep networks in the non-convex loss landscape. Our results are obtained on a wide range of network architectures (MLP, VGG11/16) trained on MNIST and CIFAR-10/100. Broadly, we introduce a geometric framework that unifies a range of machine learning objectives and that can be applied to multiple classes of neural network architectures.
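The core idea of traversing low-loss paths in weight space can be illustrated with a toy sketch. The code below is a hedged approximation, not the paper's exact algorithm: it discretizes a path between two weight vectors and relaxes the interior points to balance path shortness against staying on the low-loss set. The quadratic toy loss, the relaxation scheme, and all hyperparameters (`n_points`, `steps`, `lr`, `lam`) are illustrative assumptions standing in for a real network's task loss and optimizer.

```python
import numpy as np

def task_loss(w):
    # Toy loss whose zero set is the unit circle; a real application
    # would evaluate a network's loss on a batch of task data.
    return (w[0]**2 + w[1]**2 - 1.0)**2

def loss_grad(w, eps=1e-5):
    # Central finite-difference gradient, to keep the sketch self-contained.
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (task_loss(w + d) - task_loss(w - d)) / (2.0 * eps)
    return g

def geodesic_path(w_start, w_end, n_points=20, steps=500, lr=0.05, lam=1.0):
    # Start from linear interpolation, then relax the interior points.
    ts = np.linspace(0.0, 1.0, n_points)
    path = np.array([(1 - t) * w_start + t * w_end for t in ts])
    for _ in range(steps):
        for i in range(1, n_points - 1):
            # Discrete curve-shortening (Laplacian) term pulls each point
            # toward its neighbors; the loss-gradient term keeps the path
            # on the low-loss region of weight space.
            smooth = path[i - 1] + path[i + 1] - 2.0 * path[i]
            path[i] = path[i] + lr * (smooth - lam * loss_grad(path[i]))
    return path

# Two "trained networks": points on the unit circle (zero task loss).
w_a = np.array([1.0, 0.0])
w_b = np.array([0.0, 1.0])
path = geodesic_path(w_a, w_b)

# The straight line between w_a and w_b leaves the low-loss set
# (its midpoint has loss 0.25); the relaxed path hugs the circle.
print(max(task_loss(w) for w in path))
```

In this 2-D toy, the relaxed path bends along the circular valley rather than cutting across it, which is the behavior the geodesic framework exploits: any point on such a path is a network of near-equivalent task performance, among which one can search for a second objective such as sparsity.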


