Dual Graphs of Polyhedral Decompositions for the Detection of Adversarial Attacks

11/23/2022
by Huma Jamil, et al.

Previous work has shown that a neural network with the rectified linear unit (ReLU) activation function leads to a convex polyhedral decomposition of the input space. These decompositions can be represented by a dual graph with vertices corresponding to polyhedra and edges corresponding to polyhedra sharing a facet, which is a subgraph of a Hamming graph. This paper illustrates how one can utilize the dual graph to detect and analyze adversarial attacks in the context of digital images. When an image passes through a network containing ReLU nodes, the firing or non-firing at each node can be encoded as a bit (1 for ReLU activation, 0 for ReLU non-activation). The sequence of all bit activations identifies the image with a bit vector, which identifies it with a polyhedron in the decomposition and, in turn, with a vertex in the dual graph. We identify ReLU bits that discriminate between non-adversarial and adversarial images and examine how well collections of these discriminators can vote as an ensemble to build an adversarial image detector. Specifically, we examine the similarities and differences of ReLU bit vectors for adversarial images and their non-adversarial counterparts using a pre-trained ResNet-50 architecture. While this paper focuses on adversarial digital images, the ResNet-50 architecture, and the ReLU activation function, our methods extend to other network architectures, activation functions, and types of datasets.
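The core encoding the abstract describes — recording each ReLU's firing state as a bit, so that every input maps to a bit vector (and hence to a polyhedron, and a vertex of the dual graph) — can be sketched in a few lines. The toy two-layer network below, its random weights, and the helper names are illustrative assumptions, not the paper's ResNet-50 setup; they only show the bit-vector bookkeeping and the Hamming distance that defines edges of the dual graph.

```python
# Sketch (illustrative, not the paper's code): map an input to its ReLU
# activation bit vector and compare two inputs by Hamming distance.
import random

random.seed(0)

def relu(x):
    return [max(0.0, v) for v in x]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

# Hypothetical random weights for a 3 -> 4 -> 2 fully connected network.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

def bit_vector(x):
    """Forward pass that records 1 for each firing ReLU, 0 otherwise."""
    bits = []
    h = matvec(W1, x)
    bits += [1 if v > 0 else 0 for v in h]   # layer-1 activation pattern
    h = matvec(W2, relu(h))
    bits += [1 if v > 0 else 0 for v in h]   # layer-2 activation pattern
    return bits

def hamming(a, b):
    """Number of differing bits; dual-graph edges join vectors at distance 1."""
    return sum(u != v for u, v in zip(a, b))

# Two nearby inputs usually land in the same or an adjacent polyhedron,
# i.e. their bit vectors agree or differ in few positions.
x_clean = [0.2, -0.5, 0.9]
x_nudged = [0.2, -0.5, 0.91]
print(bit_vector(x_clean))
print(hamming(bit_vector(x_clean), bit_vector(x_nudged)))
```

In the paper's setting the same bookkeeping is done over all ReLU nodes of a pre-trained ResNet-50, and bits whose values differ systematically between clean and adversarial images serve as the discriminators for the ensemble detector.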
