Gradient representations in ReLU networks as similarity functions

10/26/2021
by Dániel Rácz et al.

Feed-forward networks can be interpreted as mappings with linear decision surfaces at the level of the last layer. We investigate how the tangent space of the network can be exploited to refine the decision in the case of ReLU (Rectified Linear Unit) activations. We show that a simple Riemannian metric parametrized by the parameters of the network forms a similarity function that is at least as good as the original network, and we suggest a sparse metric to further increase the similarity gap.
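The gradient-based similarity described above can be illustrated with a short sketch (this is not the authors' implementation): for a small ReLU network, the tangent vector of an input is the flattened gradient of the scalar output with respect to all parameters, and the similarity of two inputs is taken here as the cosine of their tangent vectors. The architecture, layer sizes, and use of PyTorch are illustrative assumptions only.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Small feed-forward ReLU network with a scalar output (sizes chosen arbitrarily).
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

def tangent_vector(x):
    # Gradient of the scalar network output w.r.t. every parameter, flattened
    # into a single vector (a point in the tangent space at the current weights).
    out = net(x).squeeze()
    grads = torch.autograd.grad(out, list(net.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_similarity(x1, x2):
    # Cosine similarity between the two tangent vectors; an unnormalized inner
    # product could be used instead, depending on the metric of interest.
    g1, g2 = tangent_vector(x1), tangent_vector(x2)
    return torch.dot(g1, g2) / (g1.norm() * g2.norm())

x_a, x_b = torch.randn(8), torch.randn(8)
print(float(gradient_similarity(x_a, x_b)))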


Related research

06/22/2017
An approach to reachability analysis for feed-forward ReLU neural networks
We study the reachability problem for systems implemented as feed-forwar...

10/31/2017
Approximating Continuous Functions by ReLU Nets of Minimal Width
This article concerns the expressive power of depth in deep feed-forward...

02/12/2020
GLU Variants Improve Transformer
Gated Linear Units (arXiv:1612.08083) consist of the component-wise prod...

06/11/2020
Tangent Space Sensitivity and Distribution of Linear Regions in ReLU Networks
Recent articles indicate that deep neural networks are efficient models ...

05/02/2023
Hamming Similarity and Graph Laplacians for Class Partitioning and Adversarial Image Detection
Researchers typically investigate neural network representations by exam...

11/14/2016
Identity Matters in Deep Learning
An emerging design principle in deep learning is that each layer of a de...

07/17/2018
Expressive power of outer product manifolds on feed-forward neural networks
Hierarchical neural networks are exponentially more efficient than their...
