Towards Robust Deep Learning with Ensemble Networks and Noisy Layers

07/03/2020
by Yuting Liang et al.

In this paper we present an approach to deep learning that protects against adversarial examples in image-classification networks. The approach relies on two mechanisms: 1) a mechanism that increases robustness at the expense of accuracy, and 2) a mechanism that improves accuracy but does not always increase robustness. We show that combining the two mechanisms provides protection against adversarial examples while retaining accuracy. We also formulate potential attacks on the approach and provide experimental results demonstrating its effectiveness.
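Only the abstract is shown on this page, but the two mechanisms it names map naturally onto a noise-injecting layer (robustness at the expense of accuracy) and a prediction-averaging ensemble (accuracy recovered across members). The sketch below, in PyTorch, is an illustrative reconstruction under those assumptions, not the paper's actual architecture: NoisyLayer, make_member, NoisyEnsemble, and the sigma and num_members hyperparameters are all hypothetical names and values.

import torch
import torch.nn as nn

class NoisyLayer(nn.Module):
    """Adds zero-mean Gaussian noise to activations (hypothetical stand-in
    for the paper's noisy layer; sigma is an illustrative value)."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        # Noise is injected at inference time as well: the randomness is
        # what makes gradient-based attacks harder to aim.
        return x + self.sigma * torch.randn_like(x)

def make_member(num_classes=10):
    # One small convolutional classifier with a noisy layer inserted.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        NoisyLayer(sigma=0.1),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, num_classes),
    )

class NoisyEnsemble(nn.Module):
    """Averages softmax outputs of independently initialized noisy members."""
    def __init__(self, num_members=5, num_classes=10):
        super().__init__()
        self.members = nn.ModuleList(
            [make_member(num_classes) for _ in range(num_members)]
        )

    def forward(self, x):
        # Stack per-member class probabilities, then average over members.
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.members])
        return probs.mean(dim=0)

# Usage: classify a batch of 32x32 RGB images.
model = NoisyEnsemble()
x = torch.randn(8, 3, 32, 32)
pred = model(x).argmax(dim=-1)

Averaging softmax probabilities rather than raw logits keeps each noisy member's output on a common scale; the noise stays on at test time to blunt gradient-based attacks, while the ensemble average recovers the accuracy that each noisy member gives up individually.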


