How saccadic vision might help with the interpretability of deep networks

05/27/2021
by Iana Sereda, et al.

We describe how some problems of modern deep networks (interpretability, lack of object-orientedness) could potentially be solved by adapting a biologically plausible saccadic mechanism of perception. A sketch of such a saccadic vision model is proposed. Proof-of-concept experimental results are provided to support the proposed approach.
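The core idea of a saccadic mechanism is that the model does not consume the whole image at once: it fixates on a small "foveal" patch, processes it, and then jumps (saccades) to the next informative location. The sketch below is a minimal illustration of that loop, not the authors' model: it uses a simple brightness-based saliency to pick fixation points and an "inhibition of return" step to suppress already-visited regions. The function names (`extract_glimpse`, `saccade_scan`) and all parameters are hypothetical, chosen for illustration only.

```python
import numpy as np

def extract_glimpse(image, center, size):
    """Crop a square 'foveal' patch around a fixation point,
    zero-padding where the patch extends past the image border."""
    h, w = image.shape
    r = size // 2
    y, x = center
    y0, y1 = max(0, y - r), min(h, y + r)
    x0, x1 = max(0, x - r), min(w, x + r)
    patch = np.zeros((size, size), dtype=image.dtype)
    patch[y0 - (y - r):y0 - (y - r) + (y1 - y0),
          x0 - (x - r):x0 - (x - r) + (x1 - x0)] = image[y0:y1, x0:x1]
    return patch

def saccade_scan(image, n_fixations=3, glimpse_size=8):
    """Toy saccadic loop: repeatedly fixate on the most salient
    unvisited location (here, simply the brightest pixel), collect a
    glimpse, and suppress that region before the next saccade."""
    work = image.astype(float).copy()
    glimpses, fixations = [], []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(work), work.shape)
        fixations.append((y, x))
        glimpses.append(extract_glimpse(image, (y, x), glimpse_size))
        # inhibition of return: mask out the visited neighborhood
        r = glimpse_size // 2
        work[max(0, y - r):y + r, max(0, x - r):x + r] = -np.inf
    return glimpses, fixations
```

In a full model, each glimpse would be fed to a recognition network and the fixation policy would itself be learned; interpretability comes from the fixation sequence being an explicit, inspectable record of what the model attended to.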


