Efficient Per-Example Gradient Computations

10/07/2015
by Ian Goodfellow, et al.

This technical report describes an efficient technique for computing, separately for every example, the norm of the gradient of a neural network's loss function with respect to its parameters.
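The report itself contains no code, but the underlying idea can be sketched. Below is a minimal illustration, assuming a single fully-connected layer with a squared-error loss (the layer, the loss choice, and all variable names are illustrative assumptions, not taken from the report): the per-example gradient of the weight matrix is an outer product of the backpropagated error with the layer input, so its squared Frobenius norm factorizes into a product of two vector norms and the per-example gradient matrices never need to be materialized.

    # Sketch: per-example gradient norms for one fully-connected layer.
    # Assumed setup: z_i = W h_i with loss L_i = 0.5 * ||z_i - y_i||^2,
    # so dL_i/dW = delta_i h_i^T with delta_i = z_i - y_i, and
    # ||dL_i/dW||_F^2 = ||delta_i||^2 * ||h_i||^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d_in, d_out = 8, 5, 3            # batch size, input dim, output dim
    H = rng.normal(size=(n, d_in))      # layer inputs h_i
    Y = rng.normal(size=(n, d_out))     # targets y_i
    W = rng.normal(size=(d_out, d_in))  # weight matrix

    Z = H @ W.T       # pre-activations z_i, shape (n, d_out)
    Delta = Z - Y     # backpropagated errors dL_i/dz_i

    # Efficient per-example squared gradient norms via the factorization:
    sq_norms = np.sum(Delta**2, axis=1) * np.sum(H**2, axis=1)

    # Check against the naive approach that materializes each gradient:
    naive = np.array([np.sum(np.outer(Delta[i], H[i])**2) for i in range(n)])
    assert np.allclose(sq_norms, naive)
    print(np.sqrt(sq_norms))  # per-example gradient norms

For a deeper fully-connected network the same factorization can be applied layer by layer, summing each layer's per-example squared norm to obtain the squared norm over all parameters, since the gradient with respect to each weight matrix remains an outer product per example.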

Related Research

12/12/2019
Efficient Per-Example Gradient Computations in Convolutional Neural Networks
Deep learning frameworks leverage GPUs to perform massively-parallel com...

10/12/2022
On the Importance of Gradient Norm in PAC-Bayesian Bounds
Generalization bounds which assess the difference between the true risk ...

06/12/2019
Critical Point Finding with Newton-MR by Analogy to Computing Square Roots
Understanding of the behavior of algorithms for resolving the optimizati...

01/22/2023
The Backpropagation algorithm for a math student
A Deep Neural Network (DNN) is a composite function of vector-valued fun...

05/14/2023
ReSDF: Redistancing Implicit Surfaces using Neural Networks
This paper proposes a deep-learning-based method for recovering a signed...

04/06/2020
NiLBS: Neural Inverse Linear Blend Skinning
In this technical report, we investigate efficient representations of ar...

03/10/2022
neos: End-to-End-Optimised Summary Statistics for High Energy Physics
The advent of deep learning has yielded powerful tools to automatically ...
