Guarantees on learning depth-2 neural networks under a data-poisoning attack

05/04/2020
by Anirbit Mukherjee, et al.
Johns Hopkins University

Many state-of-the-art machine learning models have recently been shown to be fragile to adversarial attacks. In this work we aim to advance the theoretical understanding of adversarially robust learning with neural nets. We exhibit a specific class of finite-size neural networks together with a non-gradient stochastic algorithm that attempts to recover the weights of the net generating the realizable true labels, in the presence of an oracle applying a bounded amount of malicious additive distortion to those labels. We prove (nearly optimal) trade-offs among the magnitude of the adversarial attack, the accuracy, and the confidence achieved by the proposed algorithm.

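To make the setting concrete, the sketch below simulates the data model described in the abstract: realizable labels from a depth-2 ReLU net are corrupted by an oracle adding a bounded malicious distortion, and a Tron-style non-gradient stochastic update tries to recover the generating weights. The architecture f_w(x) = (1/k) * sum_i relu(<A_i w, x>), the helper matrix M, and all sizes, bounds, and step sizes here are illustrative assumptions for the demonstration, not the paper's exact algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative problem setup (sizes and bound are assumptions, not the paper's) ---
d, k, n = 10, 5, 20000        # input dim, hidden width, sample count
theta = 0.1                   # bound on the adversary's additive label distortion

A = rng.standard_normal((k, d, d))   # fixed, known per-neuron matrices of the depth-2 net
w_star = rng.standard_normal(d)      # ground-truth weight vector the algorithm must recover

def net(w, X):
    """Depth-2 ReLU net f_w(x) = (1/k) * sum_i relu(<A_i w, x>)."""
    pre = X @ (A @ w).T              # (n, k) pre-activations <A_i w, x>
    return np.maximum(pre, 0.0).mean(axis=1)

# Realizable labels plus a bounded additive distortion from the poisoning oracle.
X = rng.standard_normal((n, d))
y = net(w_star, X) + theta * rng.uniform(-1.0, 1.0, size=n)

# Tron-style non-gradient update: the correction is the raw residual times the input,
# never a (sub)gradient of the ReLU loss.  M is one convenient choice (an assumption).
M = np.linalg.pinv(A.mean(axis=0))
w, eta, batch = np.zeros(d), 0.2, 256
for t in range(2000):
    idx = rng.integers(0, n, size=batch)
    residual = y[idx] - net(w, X[idx])
    w = w + eta * M @ (residual @ X[idx] / batch)

# Recovery error is driven down to a level governed by the attack magnitude theta.
print("recovery error ||w - w*|| =", np.linalg.norm(w - w_star))
```
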
01/31/2022 · Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
We identify fragile and robust neurons of deep learning architectures us...

10/20/2022 · Chaos Theory and Adversarial Robustness
Neural Networks, being susceptible to adversarial attacks, should face a...

10/01/2018 · Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
We present a new algorithm to train a robust neural network against adve...

02/12/2021 · Reinforcement Learning For Data Poisoning on Graph Neural Networks
Adversarial Machine Learning has emerged as a substantial subfield of Co...

05/08/2020 · A Study of Neural Training with Non-Gradient and Noise Assisted Gradient Methods
In this work we demonstrate provable guarantees on the training of depth...