A Theoretical View of Linear Backpropagation and Its Convergence

12/21/2021
by Ziang Li, et al.

Backpropagation (BP) is widely used for calculating gradients in deep neural networks (DNNs). Often applied together with stochastic gradient descent (SGD) or its variants, it is considered a de facto choice in a variety of machine learning tasks, including DNN training and adversarial attack/defense. Recently, Guo et al. introduced a linear variant of BP, named LinBP, for generating more transferable adversarial examples in black-box adversarial attacks. However, LinBP has not been studied theoretically, and a convergence analysis of the method is lacking. This paper serves as a complement and, to some extent, an extension of Guo et al.'s work, providing theoretical analyses of LinBP in neural-network-involved learning tasks, including adversarial attack and model training. We demonstrate that, somewhat surprisingly, LinBP can lead to faster convergence in these tasks than standard BP under the same hyper-parameter settings. We confirm our theoretical results with extensive experiments.
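For intuition about what LinBP changes: the forward pass is left intact, but in the backward pass the derivative of each nonlinear activation (e.g., the ReLU mask) is replaced by the identity, so gradients are propagated as if the network were linear. The following is a minimal NumPy sketch on a toy two-layer ReLU network; the weights, input, and sum-of-outputs loss here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy network: x -> W1 -> ReLU -> W2 -> scalar loss L = sum(output).
# Values chosen so one ReLU unit is active and one is inactive.
W1 = np.array([[1.0, 0.0],
               [0.0, -1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 1.0])

def forward(x):
    z1 = W1 @ x               # pre-activations: [1, -1]
    a1 = np.maximum(z1, 0.0)  # ReLU: [1, 0]
    out = W2 @ a1             # network output
    return z1, a1, out

def grad_x(linear_bp: bool):
    """Gradient of L = sum(out) w.r.t. x, via standard BP or LinBP."""
    z1, a1, out = forward(x)
    g = W2.T @ np.ones_like(out)  # dL/da1
    if not linear_bp:
        g = g * (z1 > 0)          # standard BP: multiply by ReLU derivative
    # LinBP skips the mask above and back-propagates through the
    # activation as if it were the identity map.
    return W1.T @ g

g_bp = grad_x(linear_bp=False)   # [1.0, 0.0]: inactive unit blocks gradient
g_lin = grad_x(linear_bp=True)   # [1.0, -1.0]: gradient flows through anyway
```

The inactive ReLU unit zeroes out part of the standard gradient, whereas LinBP passes gradient signal through it; this denser backward flow is the mechanism behind the transferability and convergence behavior discussed above.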


Related research

08/14/2017: ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Deep neural networks (DNNs) are one of the most prominent technologies o...

12/07/2020: Backpropagating Linearly Improves Transferability of Adversarial Examples
The vulnerability of deep neural networks (DNNs) to adversarial examples...

09/30/2020: Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning
Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense ...

11/18/2021: A Review of Adversarial Attack and Defense for Classification Methods
Despite the efficiency and scalability of machine learning systems, rece...

03/26/2018: On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
Understanding and characterizing the subspaces of adversarial examples a...

05/16/2016: Alternating optimization method based on nonnegative matrix factorizations for deep neural networks
The backpropagation algorithm for calculating gradients has been widely ...

06/01/2022: A Theoretical Framework for Inference Learning
Backpropagation (BP) is the most successful and widely used algorithm in...
