Learning with Differentiable Algorithms

09/01/2022
by Felix Petersen, et al.

Classic computer science algorithms and machine learning systems such as neural networks are both abundant in everyday life. While classic algorithms are suited to the precise execution of exactly defined tasks, such as finding the shortest path in a large graph, neural networks learn from data to predict the most likely answer in more complex tasks, such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining the two concepts, leading to architectures that are more robust, better performing, more interpretable, more computationally efficient, and more data efficient. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, the algorithm must be differentiable so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, the thesis proposes a general method for continuously relaxing algorithms: variables are perturbed with random noise, and the resulting expectation is approximated in closed form, i.e., without sampling. In addition, the thesis proposes concrete differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, it presents alternative training strategies for learning with algorithms.
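As a rough illustration of the continuous-relaxation idea, the sketch below relaxes the conditional swap at the heart of a sorting network: perturbing a comparison with logistic noise and taking the expectation in closed form yields a sigmoid mixing weight, so the whole sorting network becomes differentiable. This is a simplified sketch of the general approach, not the thesis' exact formulation; the names soft_swap, soft_sort, and the steepness parameter beta are illustrative.

```python
import math

def soft_swap(a, b, beta=10.0):
    """Differentiable conditional swap (soft min/max) of two scalars.

    Perturbing the comparison a < b with logistic noise and taking the
    expectation in closed form gives a sigmoid weight; beta controls
    the steepness (a hard swap is recovered as beta -> infinity).
    """
    p = 1.0 / (1.0 + math.exp(-beta * (b - a)))  # soft "a comes first"
    soft_min = p * a + (1.0 - p) * b
    soft_max = p * b + (1.0 - p) * a
    return soft_min, soft_max

def soft_sort(values, beta=10.0):
    """Relaxed odd-even transposition sorting network over a list."""
    x = [float(v) for v in values]
    n = len(x)
    for step in range(n):  # n alternating odd/even passes suffice
        for i in range(step % 2, n - 1, 2):
            x[i], x[i + 1] = soft_swap(x[i], x[i + 1], beta)
    return x
```

With a large beta the relaxation approaches a hard sort, e.g. soft_sort([3, 1, 2], beta=50.0) is close to [1.0, 2.0, 3.0], while a moderate beta keeps the outputs smooth so gradients remain informative during training.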


Related research

Deep Differentiable Logic Gate Networks (10/15/2022)
Recently, research has increasingly focused on developing efficient neur...

Differentiable Algorithm Networks for Composable Robot Learning (05/28/2019)
This paper introduces the Differentiable Algorithm Network (DAN), a comp...

Learning with Algorithmic Supervision via Continuous Relaxations (10/11/2021)
The integration of algorithmic components into neural architectures has ...

Considerations Across Three Cultures: Parametric Regressions, Interpretable Algorithms, and Complex Algorithms (04/14/2021)
We consider an extension of Leo Breiman's thesis from "Statistical Model...

AlgoNet: C^∞ Smooth Algorithmic Neural Networks (05/16/2019)
Artificial neural networks revolutionized many areas of computer science...

Neural Bipartite Matching (05/22/2020)
Graph neural networks have found application for learning in the space o...

Learning Algorithms via Neural Logic Networks (04/02/2019)
We propose a novel learning paradigm for Deep Neural Networks (DNN) by u...
