Non-Negative Networks Against Adversarial Attacks

06/15/2018
by William Fleshman, et al.

Adversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress on this problem by showing that non-negative weight constraints can improve resistance in specific scenarios. In particular, we show that they provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show how non-negativity can be leveraged to reduce an attacker's ability to perform targeted misclassification attacks in other domains, such as image processing.
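The intuition behind the defense can be sketched with a toy example (this is an illustrative reconstruction, not the paper's actual training code): a logistic classifier for binary detection whose weights are projected onto the non-negative orthant after each gradient step. With all weights non-negative, turning any feature on can only raise (never lower) the malicious score, so an attacker cannot evade detection by padding a sample with benign-looking features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 samples with 10 binary features; the label depends
# only on features 0-2 (a stand-in for truly malicious indicators).
X = rng.integers(0, 2, size=(200, 10)).astype(float)
y = (X[:, :3].sum(axis=1) >= 2).astype(float)

w = np.zeros(10)
b = 0.0
lr = 0.5
for _ in range(300):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b
    w = np.maximum(w, 0.0)  # project weights onto the non-negative orthant

# Monotonicity: flipping any single feature from 0 to 1 never
# lowers the malicious score, since every weight is >= 0.
base = sigmoid(np.zeros(10) @ w + b)
for j in range(10):
    x = np.zeros(10)
    x[j] = 1.0
    assert sigmoid(x @ w + b) >= base
```

The same projection step carries over to deep networks by clamping each layer's weights after every optimizer update; the cost is asymmetric by design, since the constraint only blocks score-decreasing (evasion) perturbations, not score-increasing ones.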

Related research:
- Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions (06/23/2018)
- StratDef: a strategic defense against adversarial attacks in malware detection (02/15/2022)
- Adversarial Attacks, Regression, and Numerical Stability Regularization (12/07/2018)
- Connecting Lyapunov Control Theory to Adversarial Attacks (07/17/2019)
- The Attacker's Perspective on Automatic Speaker Verification: An Overview (04/19/2020)
- AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks (02/19/2020)
- amsqr at MLSEC-2021: Thwarting Adversarial Malware Evasion with a Defense-in-Depth (10/06/2021)
