Probing the Purview of Neural Networks via Gradient Analysis

04/06/2023
by Jinsol Lee et al.

We analyze the data-dependent capacity of neural networks and assess anomalies in inputs from the perspective of networks during inference. The notion of data-dependent capacity allows for analyzing the knowledge base of a model populated by learned features from training data. We define purview as the additional capacity necessary to characterize inference samples that differ from the training data. To probe the purview of a network, we utilize gradients to measure the amount of change required for the model to characterize the given inputs more accurately. To eliminate the dependency on ground-truth labels in generating gradients, we introduce confounding labels that are formulated by combining multiple categorical labels. We demonstrate that our gradient-based approach can effectively differentiate inputs that cannot be accurately represented with learned features. We utilize our approach in applications of detecting anomalous inputs, including out-of-distribution, adversarial, and corrupted samples. Our approach requires no hyperparameter tuning or additional data processing and outperforms state-of-the-art methods by up to 2.7
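The abstract's core mechanism can be illustrated with a minimal sketch: pair an input with a confounding label (a combination of multiple categorical labels, here an all-ones multi-hot vector), backpropagate a loss against that label, and use the magnitude of the resulting weight gradients as an anomaly score. All names below (`gradient_purview_score`, the toy model, the all-ones label choice) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def gradient_purview_score(model, x):
    """Hypothetical helper: score an input by the gradient magnitude
    induced by a confounding label. Larger scores suggest the model
    needs more "additional capacity" to characterize the input,
    i.e., the input may lie outside the network's purview."""
    model.zero_grad()
    logits = model(x)
    # Confounding label: multiple categorical labels combined, here
    # the all-ones multi-hot vector, paired with a BCE-style loss so
    # no ground-truth label is needed at inference time.
    confounding = torch.ones_like(logits)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, confounding)
    loss.backward()
    # Aggregate the squared L2 norms of the weight gradients and
    # return the overall gradient magnitude as a scalar score.
    sq = sum(p.grad.norm() ** 2 for p in model.parameters()
             if p.grad is not None)
    return sq.sqrt().item()

# Tiny demo classifier standing in for a trained network.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(1, 8)
score = gradient_purview_score(model, x)
print(score)
```

In practice such a score would be thresholded (or fed to a simple detector) to flag out-of-distribution, adversarial, or corrupted samples, with the threshold calibrated on held-out in-distribution data.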


