Solving hybrid machine learning tasks by traversing weight space geodesics
Machine learning problems have an intrinsic geometric structure: central objects, including a neural network's weight space and the loss function associated with a particular task, can be viewed as encoding the intrinsic geometry of a given machine learning problem. Geometric concepts can therefore be applied both to analyze theoretical properties of machine learning strategies and to develop new algorithms. In this paper, we address three seemingly unrelated open questions in machine learning by viewing them through a unified framework grounded in differential geometry. Specifically, we view the weight space of a neural network as a manifold endowed with a Riemannian metric that encodes performance on specific tasks. Given such a metric, we can construct geodesic (minimum-length) paths in weight space that represent sets of networks with equivalent or near-equivalent functional performance on a specific task. We then traverse geodesic paths while identifying networks that satisfy a second objective. Guided by this geometric insight, we apply our geodesic framework to three major applications: (i) network sparsification, (ii) mitigating catastrophic forgetting by constructing networks with high performance on a series of objectives, and (iii) finding high-accuracy paths connecting distinct local optima of deep networks in the non-convex loss landscape. Our results are obtained on a wide range of network architectures (MLP, VGG-11/16) trained on MNIST and CIFAR-10/100. Broadly, we introduce a geometric framework that unifies a range of machine learning objectives and that can be applied to multiple classes of neural network architectures.
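The core construction can be illustrated on a toy problem. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: it relaxes a discretized path between two minima of a toy 2-D "loss surface" so that the path stays short (the geodesic objective) while the loss stays low along it, mimicking a metric that penalizes movement through high-loss regions. All names and parameters here (`geodesic_path`, `beta`, the ring-shaped loss) are invented for this sketch.

```python
import numpy as np

def loss(w):
    # Toy non-convex loss: a ring of zero-loss minima at radius 1,
    # with a loss barrier of 1.0 at the origin.
    x, y = w
    return (x**2 + y**2 - 1) ** 2

def loss_grad(w):
    x, y = w
    r2 = x**2 + y**2
    return np.array([4 * x * (r2 - 1), 4 * y * (r2 - 1)])

def geodesic_path(w_start, w_end, n_points=20, steps=2000, lr=0.05, beta=1.0):
    """Relax a discretized path so it stays short (the geodesic objective)
    while the task loss stays low along it (the performance-aware metric).
    beta trades off path length against loss along the path."""
    path = np.linspace(w_start, w_end, n_points)
    path[1:-1, 1] += 0.1  # break the symmetry of the straight-line init
    for _ in range(steps):
        for i in range(1, n_points - 1):
            # Spring term pulls each point toward its neighbors
            # (shortening the path) ...
            spring = path[i - 1] + path[i + 1] - 2 * path[i]
            # ... while the loss gradient keeps the path in low-loss regions.
            path[i] += lr * (spring - beta * loss_grad(path[i]))
    return path

w_a, w_b = np.array([-1.0, 0.0]), np.array([1.0, 0.0])  # two distinct minima
path = geodesic_path(w_a, w_b)
barrier = max(loss(w) for w in path)  # worst loss along the relaxed path
straight = max(loss(w) for w in np.linspace(w_a, w_b, 20))  # straight line
print(f"straight-line barrier: {straight:.3f}, relaxed barrier: {barrier:.3f}")
```

The straight line between the two minima passes near the origin, where the loss is close to 1, while the relaxed path bends around the ring of minima and keeps the loss near zero throughout. In the paper's applications, one would then scan networks along such a low-loss path for a secondary criterion, such as sparsity or performance on a second task.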