Gradient Descent in Materio

05/15/2021
by Marcus N. Boon, et al.

Deep learning, a multi-layered neural network approach inspired by the brain, has revolutionized machine learning. One of its key enablers has been backpropagation, an algorithm that computes the gradient of a loss function with respect to the weights in the neural network model, in combination with its use in gradient descent. However, the implementation of deep learning in digital computers is intrinsically wasteful, with energy consumption becoming prohibitively high for many applications. This has stimulated the development of specialized hardware, ranging from neuromorphic CMOS integrated circuits and integrated photonic tensor cores to unconventional, material-based computing systems. The learning process in these material systems, which proceeds, e.g., via artificial evolution or surrogate neural network modelling, is still complicated and time-consuming. Here, we demonstrate an efficient and accurate homodyne gradient extraction method for performing gradient descent on the loss function directly in the material system. We demonstrate the method in our recently developed dopant network processing units, where we readily realize all Boolean gates. This shows that gradient descent can in principle be fully implemented in materio using simple electronics, opening the way towards autonomously learning material systems.
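The core idea of homodyne gradient extraction can be illustrated numerically: each tunable parameter is modulated by a small sinusoid at its own frequency, and lock-in detection of the scalar output recovers every partial derivative at once, which a plain gradient-descent loop can then use. The sketch below is a minimal simulation of that principle, not the authors' hardware implementation; the toy loss, target values, modulation amplitude, and frequency set are all illustrative assumptions.

```python
import numpy as np

# Toy scalar loss standing in for the measured device output.
# The target parameter values are hypothetical.
TARGET = np.array([1.0, -2.0, 0.5])

def loss(theta):
    return float(np.sum((theta - TARGET) ** 2))

def homodyne_gradient(f, theta, freqs, amp=1e-3, n=400):
    """Estimate the gradient of f at theta by modulating every parameter
    with a small sinusoid at its own frequency and recovering each partial
    derivative by lock-in (homodyne) detection of the output."""
    t = np.arange(n) / n                               # one-second sampling window
    carriers = np.sin(2 * np.pi * np.outer(freqs, t))  # one carrier per parameter
    # Record the scalar output while all parameters are modulated at once.
    y = np.array([f(theta + amp * carriers[:, k]) for k in range(n)])
    # For small amp, f(theta + amp*s(t)) ~ f(theta) + amp * grad . s(t), so
    # correlating the output with each carrier isolates the matching
    # partial derivative (the carriers average to zero and are orthogonal).
    return (2.0 / (amp * n)) * carriers @ y

theta = np.zeros(3)
freqs = np.array([7.0, 11.0, 13.0])  # distinct, intermodulation-free frequencies
for _ in range(200):
    theta -= 0.05 * homodyne_gradient(loss, theta, freqs)
```

Choosing frequencies with an integer number of cycles in the sampling window, and such that no sum or difference of two frequencies collides with a third, keeps the lock-in channels independent; in hardware the same role is played by the modulation sources and lock-in amplifiers.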

