Training Neural Networks without Backpropagation: A Deeper Dive into the Likelihood Ratio Method

05/15/2023
by Jinyang Jiang, et al.

Backpropagation (BP) is the dominant gradient estimation method for training neural networks in deep learning. However, the literature shows that neural networks trained by BP are vulnerable to adversarial attacks. We develop the likelihood ratio (LR) method, a new gradient estimation method, for training a broad range of neural network architectures, including convolutional neural networks, recurrent neural networks, graph neural networks, and spiking neural networks, without recursive gradient computation. We propose three methods to efficiently reduce the variance of the gradient estimates during training. We evaluate the LR method by training these different architectures on several datasets. The results demonstrate that the LR method is effective for training various neural networks and significantly improves their robustness under adversarial attacks relative to the BP method.
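The core idea behind likelihood ratio (score function) gradient estimation is to inject noise into a layer's pre-activations and weight each sample's loss by the score of that noise, which yields an unbiased estimate of the gradient of the noise-smoothed loss without any backward pass. The sketch below is a minimal, hypothetical illustration of this principle on a toy linear classifier with Gaussian noise; it is not the paper's implementation, and the data, step counts, and noise scale are assumptions chosen for illustration. The paper's variance-reduction techniques are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (hypothetical example, not from the paper)
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

W = np.zeros((2, 2))  # weights of a 2-class linear classifier
b = np.zeros(2)
sigma = 0.5           # std of injected Gaussian noise (assumed value)
lr = 0.1

def loss(logits, y):
    # Per-sample cross-entropy, computed in a numerically stable way.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y]

for step in range(300):
    eps = rng.normal(0.0, sigma, (n, 2))   # noise injected on pre-activations
    logits = X @ W + b + eps               # noisy forward pass (no backward pass)
    L = loss(logits, y)                    # per-sample loss, shape (n,)
    # Score of the injected noise w.r.t. the pre-activation mean:
    # d/dmu log N(eps; 0, sigma^2) = eps / sigma^2
    score = eps / sigma**2
    # LR estimate of the gradient of the smoothed loss: E[L * score * input]
    gW = (X.T @ (L[:, None] * score)) / n
    gb = (L[:, None] * score).mean(axis=0)
    W -= lr * gW
    b -= lr * gb

# Evaluate with a noise-free forward pass
pred = (X @ W + b).argmax(axis=1)
acc = (pred == y).mean()
```

In practice the raw estimator above is high-variance; subtracting a baseline from `L` (e.g. its batch mean) is a standard fix, and the paper proposes three dedicated variance-reduction methods for deep architectures.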


