
Feedback Learning for Improving the Robustness of Neural Networks

by Chang Song, et al.
Duke University

Recent research has revealed that neural networks are vulnerable to adversarial attacks. State-of-the-art defensive techniques add various adversarial examples to training in order to improve models' adversarial robustness. However, these methods are not universal and cannot defend against unknown or non-adversarial evasion attacks. In this paper, we analyze model robustness in the decision space. We then propose a feedback learning method to understand how well a model has learned and to facilitate the retraining process that remedies the defects. Evaluations against a set of distance-based criteria show that our method can significantly improve models' accuracy and robustness against different types of evasion attacks. Moreover, we observe the existence of inter-class inequality and propose to compensate for it by changing the proportions of examples generated for different classes.
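To make the notion of robustness in the decision space concrete, the following is a minimal sketch (not the paper's exact method): for a linear classifier f(x) = w·x + b, the distance from an input x to the decision boundary is |w·x + b| / ||w||, and inputs with small distances are the ones most easily flipped by an evasion attack. Distance-based robustness criteria for neural networks generalize this idea; the function and data below are illustrative only.

```python
import numpy as np

def boundary_distance(w, b, X):
    """Distance of each row of X to the hyperplane w.x + b = 0.

    A sample with a small distance sits close to the decision
    boundary, so a small perturbation can change its predicted
    class -- the intuition behind distance-based robustness criteria.
    """
    return np.abs(X @ w + b) / np.linalg.norm(w)

# Toy example: one sample far from the boundary, one exactly on it.
w = np.array([3.0, 4.0])           # ||w|| = 5
b = -1.0
X = np.array([[1.0, 0.5],          # w.x + b = 3 + 2 - 1 = 4  -> distance 0.8
              [0.2, 0.1]])         # w.x + b = 0.6 + 0.4 - 1 = 0 -> distance 0.0

print(boundary_distance(w, b, X))  # [0.8 0. ]
```

For a deep network the boundary has no closed form, so such distances are typically estimated, e.g. by the smallest perturbation (found by an attack) that changes the prediction; the per-class statistics of these distances are what expose the inter-class inequality mentioned above.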
