Logic-inspired Deep Neural Networks

11/20/2019
by Minh Le, et al.

Deep neural networks have achieved impressive performance and become the de facto standard in many tasks. However, phenomena such as adversarial examples and fooling examples hint that the generalizations they make are flawed. We argue that the problem is rooted in their distributed and connected nature and propose remedies inspired by propositional logic. Our experiments show that the proposed models are more local and better at resisting fooling and adversarial examples. By means of an ablation analysis, we reveal insights into adversarial examples and suggest a new hypothesis on their origins.
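To make the adversarial-example phenomenon mentioned in the abstract concrete, here is a minimal, illustrative sketch (not the paper's method) of the standard fast gradient sign method (FGSM) applied to a toy logistic "network": a tiny perturbation, aligned with the sign of the loss gradient, flips a confident prediction. The weights, input, and epsilon below are made-up values chosen only for illustration.

```python
import numpy as np

# Toy one-layer "network": p(y=1 | x) = sigmoid(w.x + b).
# All values here are hypothetical, chosen so the flip is easy to see.
w = np.array([2.0, -3.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability of class 1 under the toy model."""
    return sigmoid(w @ x + b)

x = np.array([1.0, 0.2])            # clean input, confidently class 1
p_clean = predict(x)                # sigmoid(1.9), about 0.87

# Cross-entropy loss for true label y=1 is L = -log p, whose gradient
# with respect to the input is dL/dx = -(1 - p) * w.
grad = -(1.0 - p_clean) * w

# FGSM: take a small L-infinity step in the direction sign(dL/dx).
eps = 0.4
x_adv = x + eps * np.sign(grad)

p_adv = predict(x_adv)              # falls below 0.5: the label flips
print(p_clean > 0.5, p_adv < 0.5)
```

The perturbation moves each coordinate by only `eps`, yet the prediction crosses the decision boundary, which is the behavior the abstract refers to as evidence of flawed generalization.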
