Connecting Lyapunov Control Theory to Adversarial Attacks

07/17/2019
by Arash Rahnama, et al.

Significant work is being done to develop the mathematics and tools needed to build provable defenses, or at least provable bounds, against adversarial attacks on neural networks. In this work, we argue that tools from control theory can be leveraged to aid in defending against such attacks. We demonstrate this by example, building a provable defense against a weakened adversary; restricting the threat model lets us focus on the mechanisms of control theory and illuminate its intrinsic value.
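The abstract does not spell out the control-theoretic machinery involved, but the standard background it draws on is Lyapunov's direct method. As an illustrative sketch (this is textbook material, not the paper's specific construction), stability of a system is certified by exhibiting an energy-like function that decreases along trajectories:

```latex
% Lyapunov's direct method (standard background, not the paper's defense):
% for a system \dot{x} = f(x) with equilibrium at the origin, the origin is
% asymptotically stable if there exists a function V such that
\begin{align}
  V(0) &= 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \\
  \dot{V}(x) &= \nabla V(x)^{\top} f(x) < 0 \ \text{for } x \neq 0.
\end{align}
```

Intuitively, a defense built this way would certify that perturbed inputs cannot drive the network's behavior away from a stable operating point, which is the kind of provable guarantee the abstract describes.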


Related research

- Adversarial Attacks, Regression, and Numerical Stability Regularization (12/07/2018): Adversarial attacks against neural networks in a regression setting are ...

- Defense-PointNet: Protecting PointNet Against Adversarial Attacks (02/27/2020): Despite remarkable performance across a broad range of tasks, neural net...

- A Partial Break of the Honeypots Defense to Catch Adversarial Attacks (09/23/2020): A recent defense proposes to inject "honeypots" into neural networks in ...

- Non-Negative Networks Against Adversarial Attacks (06/15/2018): Adversarial attacks against neural networks are a problem of considerabl...

- Securing State Estimation Under Sensor and Actuator Attacks: Theory and Design (04/03/2019): This paper discusses the problem of estimating the state of a linear tim...

- The mathematics of adversarial attacks in AI – Why deep learning is unstable despite the existence of stable neural networks (09/13/2021): The unprecedented success of deep learning (DL) makes it unchallenged wh...

- Actor-Critic Network for QA in an Adversarial Environment (01/03/2022): Significant work has been placed in the QA NLP space to build models ...
