
Feedback Learning for Improving the Robustness of Neural Networks

09/12/2019
by Chang Song, et al.
Duke University

Recent research has revealed that neural networks are vulnerable to adversarial attacks. State-of-the-art defensive techniques add various adversarial examples to training to improve models' adversarial robustness. However, these methods are not universal and cannot defend against unknown or non-adversarial evasion attacks. In this paper, we analyze model robustness in the decision space. We then propose a feedback learning method to understand how well a model has learned and to facilitate retraining to remedy its defects. Evaluations according to a set of distance-based criteria show that our method can significantly improve models' accuracy and robustness against different types of evasion attacks. Moreover, we observe the existence of inter-class inequality and propose to compensate for it by changing the proportions of examples generated for different classes.
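The abstract does not spell out the method, but the general idea can be sketched: probe how far each example sits from the model's decision boundary, then generate more new examples for classes whose boundary distances are smallest (the "inter-class inequality" compensation). The sketch below is an illustrative assumption, not the authors' algorithm; the function names (boundary_distance, feedback_proportions) and the random-direction boundary probe are placeholders standing in for whatever distance criteria the paper actually uses.

import numpy as np

def boundary_distance(predict, x, step=0.05, max_steps=200):
    # Illustrative probe, not the paper's distance criterion: push x along a
    # random direction until the predicted label flips, and report how far
    # it had to move. predict(x) should return an integer class label.
    y0 = predict(x)
    d = np.random.randn(*x.shape)
    d /= np.linalg.norm(d)
    for k in range(1, max_steps + 1):
        if predict(x + k * step * d) != y0:
            return k * step
    return max_steps * step  # lower bound if no flip was found

def feedback_proportions(X, y, predict, n_classes):
    # Per-class mean boundary distance -> inverse-proportional sampling
    # weights, so classes sitting closer to the boundary (less robust)
    # receive a larger share of the generated retraining examples.
    dist = np.array([boundary_distance(predict, x) for x in X])
    per_class = np.array([dist[y == c].mean() for c in range(n_classes)])
    w = 1.0 / (per_class + 1e-8)
    return w / w.sum()

# Toy usage with a hand-made linear two-class model.
predict = lambda x: int(x.sum() > 0.0)
X = np.random.default_rng(0).standard_normal((200, 2))
y = np.array([predict(x) for x in X])
print(feedback_proportions(X, y, predict, n_classes=2))

In a retraining loop, these proportions would decide how many new examples to generate per class before the next training round, so feedback from the decision space steers data generation toward the weakest classes.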
