
Holographic Neural Architectures

by Tariq Daouda et al.
Université de Montréal

Representation learning is at the heart of what makes deep learning effective. In this work, we introduce a new framework for representation learning that we call "Holographic Neural Architectures" (HNAs). In the same way that an observer can experience the 3D structure of a holographed object by looking at its hologram from several angles, HNAs derive Holographic Representations from the training set. These representations can then be explored by moving along a single continuous, bounded dimension. We show that HNAs can be used to build generative networks and state-of-the-art regression models, and that they are inherently highly resistant to noise. Finally, we argue that, because of their denoising abilities and their capacity to generalize well from very few examples, models based on HNAs are particularly well suited for biological applications, where training examples are rare or noisy.
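The hologram analogy above can be made concrete with a toy sketch. The following is an illustration only, not the paper's implementation: it assumes a decoder that conditions a fixed latent code on a single bounded scalar "viewing angle" t in [0, 1], so that sweeping t traverses the one continuous exploration dimension the abstract describes. All names, shapes, and weights here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed "holographic" code for one training example.
code = rng.normal(size=8)

# Random weights standing in for learned decoder parameters.
W_code = rng.normal(size=(16, 8))
W_angle = rng.normal(size=(16, 1))
W_out = rng.normal(size=(4, 16))

def decode(code, t):
    """Decode one 'view' of the representation at angle t in [0, 1]."""
    h = np.tanh(W_code @ code + W_angle @ np.array([t]))
    return W_out @ h

# Sweeping t along the single bounded dimension yields different views
# of the same underlying object, by analogy with turning a hologram.
views = np.stack([decode(code, t) for t in np.linspace(0.0, 1.0, 5)])
print(views.shape)  # (5, 4): five views of one underlying representation
```

The point of the sketch is only the interface: one latent code, one scalar control in a bounded interval, many decoded views.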



