Scaling Laws Beyond Backpropagation

10/26/2022
by Alessandro Cappelli, et al.

Alternatives to backpropagation have long been studied to better understand how biological brains may learn. Recently, they have also garnered interest as a way to train neural networks more efficiently. By relaxing constraints inherent to backpropagation (e.g., symmetric feedforward and feedback weights, sequential updates), these methods open promising avenues, such as local learning. However, the tradeoffs between different methods in terms of final task performance, convergence speed, and, ultimately, compute and data requirements are rarely outlined. In this work, we use scaling laws to study the ability of Direct Feedback Alignment (DFA) to train causal decoder-only Transformers efficiently. Scaling laws provide an overview of the tradeoffs implied by a modeling decision, and can even extrapolate how it might transfer to increasingly large models. We find that DFA fails to offer more efficient scaling than backpropagation: there is no regime in which the degradation in loss incurred by using DFA is worth the potential reduction in compute budget. Our findings are at odds with prior beliefs in the alternative-training-methods community, and highlight the need for holistic empirical approaches to better understand modeling decisions.
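To make the contrast with backpropagation concrete, here is a minimal sketch of Direct Feedback Alignment on a toy two-hidden-layer network. All hyperparameters, the network shape, and the random regression task are illustrative choices, not taken from the paper; the key idea shown is that each hidden layer receives the output error through a *fixed random* feedback matrix, rather than through the transposed forward weights of a sequential backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and learning rate (illustrative, not from the paper).
n_in, n_h, n_out, lr = 8, 32, 4, 0.05

# Forward weights of a small 2-hidden-layer MLP.
W1 = rng.normal(0, 0.3, (n_in, n_h))
W2 = rng.normal(0, 0.3, (n_h, n_h))
W3 = rng.normal(0, 0.3, (n_h, n_out))

# Fixed random feedback matrices: they project the output error directly
# to each hidden layer, replacing backprop's symmetric transposed weights.
B1 = rng.normal(0, 0.3, (n_out, n_h))
B2 = rng.normal(0, 0.3, (n_out, n_h))

def tanh_deriv(a):
    return 1.0 - np.tanh(a) ** 2

# Random inputs and targets for the demo.
X = rng.normal(size=(64, n_in))
T = rng.normal(size=(64, n_out))

def step(X, T):
    global W1, W2, W3
    # Forward pass.
    a1 = X @ W1;  h1 = np.tanh(a1)
    a2 = h1 @ W2; h2 = np.tanh(a2)
    y = h2 @ W3
    e = y - T                     # output error (MSE gradient at the output)

    # DFA error signals: output error times a fixed random matrix.
    # No sequential backward pass through W3 and W2 is needed, so the
    # hidden-layer updates are local and could be computed in parallel.
    d2 = (e @ B2) * tanh_deriv(a2)
    d1 = (e @ B1) * tanh_deriv(a1)

    # Weight updates (the last layer uses the true error, as in backprop).
    W3 -= lr * h2.T @ e / len(X)
    W2 -= lr * h1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)
    return float(np.mean(e ** 2))

losses = [step(X, T) for _ in range(200)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point of the sketch is the shape of the update rule, not its efficiency: as the paper's scaling-law analysis argues, the compute saved by removing the sequential backward pass does not compensate for the degradation in loss at any model scale studied.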


Related research

- Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment (12/11/2020)
- Is the Number of Trainable Parameters All That Actually Matters? (09/24/2021)
- Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks (06/02/2023)
- Direct Feedback Alignment with Sparse Connections for Local Learning (01/30/2019)
- Scaling Laws for Autoregressive Generative Modeling (10/28/2020)
- Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (06/23/2020)
- Scaling Scaling Laws with Board Games (04/07/2021)
