Sparsifying networks by traversing Geodesics

12/12/2020
by Guruprasad Raghavan, et al.

The geometry of weight spaces and functional manifolds of neural networks plays an important role in understanding the intricacies of machine learning. In this paper, we attempt to solve certain open questions in ML by viewing them through the lens of geometry, ultimately relating them to the discovery of points or paths of equivalent function in these spaces. We propose a mathematical framework for evaluating geodesics in the functional space, in order to find high-performance paths from a dense network to its sparser counterpart. Our results are obtained on VGG-11 trained on CIFAR-10 and on MLPs trained on MNIST. Broadly, we demonstrate that the framework is general and can be applied to a wide range of problems, from sparsification to alleviating catastrophic forgetting.
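To make the idea concrete, below is a minimal sketch of the kind of procedure the abstract describes: stepping a dense network toward a sparse one while keeping its function (its outputs) nearly invariant. This is not the authors' implementation; the loss combining a function-space distance with an L1 sparsity penalty, the `sparsify_along_path` helper, and all hyperparameters are illustrative stand-ins for the paper's geodesic objective, assuming a standard PyTorch model and data loader.

```python
# Illustrative sketch (not the paper's method): traverse a path from a
# dense network to a sparser one by taking small weight steps that keep
# the network's outputs close to those of the original dense network.
import torch


def sparsify_along_path(model, loader, target_sparsity=0.9,
                        n_steps=100, lr=1e-3, l1_coeff=1e-4,
                        device="cpu"):
    """Hypothetical helper: push weights toward zero while minimizing
    the change in the model's outputs on a reference batch, a crude
    proxy for moving along a functionally invariant path."""
    model = model.to(device)

    # Outputs of the dense network define the "function" the path
    # should preserve.
    xs, _ = next(iter(loader))
    xs = xs.to(device)
    with torch.no_grad():
        ref_out = model(xs)

    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(n_steps):
        out = model(xs)
        # Function-space term: squared distance to the dense outputs.
        func_dist = ((out - ref_out) ** 2).mean()
        # Sparsity pressure: L1 penalty drives weights toward zero.
        l1 = sum(p.abs().sum() for p in model.parameters())
        loss = func_dist + l1_coeff * l1
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Hard-threshold the smallest-magnitude weights to reach the
    # requested sparsity level.
    all_w = torch.cat([p.detach().abs().flatten()
                       for p in model.parameters()])
    thresh = torch.quantile(all_w, target_sparsity)
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > thresh).float())
    return model
```

A fuller treatment would evaluate the function-space distance over many batches and re-check task accuracy along the path, but the two competing terms above capture the trade-off the paper's framework is designed to navigate.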
