The Artificial Mind's Eye: Resisting Adversarials for Convolutional Neural Networks using Internal Projection

04/15/2016
by Harm Berntsen, et al.

We introduce a novel artificial neural network architecture that builds robustness to adversarial input into the network structure itself. The main idea of our approach is to force the network to predict what the given instance of the class under consideration would look like and then to test those predictions. By forcing the network to redraw the relevant parts of the image and comparing this redrawn image to the original, we have the network provide a "proof" of the presence of the object.
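The abstract describes a predict-and-verify loop: classify, redraw the image from the class hypothesis, then accept the prediction only if the redrawn image matches the input. The sketch below is a minimal illustration of that idea in PyTorch under our own assumptions; the module layout, the names `ProjectionNet` and `verify_prediction`, and the threshold `tau` are illustrative placeholders, not the architecture from the paper.

```python
# Sketch of a predict-and-verify ("internal projection") loop.
# Layer sizes, names, and the acceptance threshold `tau` are assumptions,
# not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProjectionNet(nn.Module):
    """Classifier paired with a class-conditioned decoder that redraws the input."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Encoder/classifier: image -> class scores (assumed 32x32 RGB input).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, num_classes),
        )
        # Decoder: one-hot class hypothesis -> redrawn image.
        self.decoder = nn.Sequential(
            nn.Linear(num_classes, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        logits = self.encoder(x)
        # Redraw the image from the predicted class hypothesis.
        class_hypothesis = F.one_hot(logits.argmax(dim=1),
                                     logits.shape[-1]).float()
        redrawn = self.decoder(class_hypothesis)
        return logits, redrawn


def verify_prediction(model, x, tau=0.05):
    """Accept a prediction only if the redrawn image stays close to the original."""
    logits, redrawn = model(x)
    # Per-example reconstruction error serves as the "proof" of presence.
    error = F.mse_loss(redrawn, x, reduction="none").mean(dim=(1, 2, 3))
    accepted = error < tau
    return logits.argmax(dim=1), accepted, error


if __name__ == "__main__":
    model = ProjectionNet(num_classes=10)
    images = torch.rand(4, 3, 32, 32)  # stand-in batch of 32x32 RGB images
    preds, accepted, err = verify_prediction(model, images)
    print(preds, accepted, err)
```

The intended effect of such a loop is that an adversarially perturbed input may still flip the classifier's decision, but the redrawn image for the wrong class will not resemble the input, so the verification step rejects the prediction.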


Related research:

12/16/2017  An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns
Cortical minicolumns are considered a model of cortical organization. Th...

04/16/2020  A Hybrid Objective Function for Robustness of Artificial Neural Networks – Estimation of Parameters in a Mechanical System
In several studies, hybrid neural networks have proven to be more robust...

05/25/2020  Thermodynamics-based Artificial Neural Networks for constitutive modeling
Machine Learning methods and, in particular, Artificial Neural Networks ...

02/01/2019  Projection-Based 2.5D U-net Architecture for Fast Volumetric Segmentation
Convolutional neural networks are state-of-the-art for various segmentat...

06/04/2019  Dynamic Neural Network Decoupling
Convolutional neural networks (CNNs) have achieved a superior performanc...

11/03/2022  Exploring explicit coarse-grained structure in artificial neural networks
We propose to employ the hierarchical coarse-grained structure in the ar...
