How and When Random Feedback Works: A Case Study of Low-Rank Matrix Factorization

11/17/2021
by Shivam Garg, et al.

The success of gradient descent in ML, and especially for learning neural networks, is remarkable and robust. In the context of how the brain learns, one aspect of gradient descent that appears biologically difficult to realize (if not implausible) is that its updates rely on feedback from later layers to earlier layers through the same connections. Such bidirected links are relatively few in brain networks, and even when reciprocal connections exist, they may not be equi-weighted. Random Feedback Alignment (Lillicrap et al., 2016), where the backward weights are random and fixed, has been proposed as a bio-plausible alternative and found to be effective empirically. We investigate how and when feedback alignment (FA) works, focusing on one of the most basic problems with layered structure: low-rank matrix factorization. In this problem, given a matrix $Y_{n \times m}$, the goal is to find a low-rank factorization $Z_{n \times r} W_{r \times m}$ that minimizes the error $\|ZW - Y\|_F$. Gradient descent solves this problem optimally. We show that FA converges to the optimal solution when $r \ge \mathrm{rank}(Y)$. We also shed light on how FA works. It is observed empirically that the forward weight matrices and (random) feedback matrices come closer during FA updates. Our analysis rigorously derives this phenomenon and shows how it facilitates convergence of FA. We also show that FA can be far from optimal when $r < \mathrm{rank}(Y)$. This is the first provable separation result between gradient descent and FA. Moreover, the representations found by gradient descent and FA can be almost orthogonal even when their error $\|ZW - Y\|_F$ is approximately equal.
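To make the setup concrete, below is a minimal NumPy sketch of feedback alignment for matrix factorization; it is not code from the paper, and the function name, initialization scale, learning rate, and step count are illustrative assumptions. The last factor $W$ is updated with its true gradient, while the update for $Z$ replaces the transposed forward weight $W^\top$ with a fixed random feedback matrix $B$, following Lillicrap et al. (2016).

```python
import numpy as np

def feedback_alignment_factorization(Y, r, lr=0.01, steps=5000, seed=0):
    """Low-rank factorization Y ~ Z @ W via feedback alignment (FA).

    Illustrative sketch: W receives its usual gradient update, while the
    update for Z uses a fixed random feedback matrix B in place of W.T.
    Hyperparameters (lr, steps, init scale) are arbitrary choices.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    Z = rng.normal(scale=0.1, size=(n, r))
    W = rng.normal(scale=0.1, size=(r, m))
    B = rng.normal(scale=0.1, size=(m, r))  # fixed random feedback, stands in for W.T

    for _ in range(steps):
        E = Z @ W - Y          # residual, shape (n, m)
        dZ = E @ B             # FA update direction for Z (random feedback)
        dW = Z.T @ E           # exact gradient for the last factor W
        Z -= lr * dZ
        W -= lr * dW
    return Z, W

# Toy usage: factorize a rank-2 matrix with r = 2, i.e. r >= rank(Y).
rng = np.random.default_rng(1)
Y = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))
Z, W = feedback_alignment_factorization(Y, r=2)
print("relative error:", np.linalg.norm(Z @ W - Y) / np.linalg.norm(Y))
```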
