Learning with hidden variables

06/01/2015
by Yasser Roudi, et al.

Learning and inferring the features that generate sensory input is a task continuously performed by the cortex. In recent years, novel algorithms and learning rules have been proposed that allow neural network models to learn such features from natural images, written text, audio signals, etc. These networks usually involve deep architectures with many layers of hidden neurons. Here we review recent advances in this area, emphasizing, among other things, the processing of dynamical inputs by networks with hidden nodes and the role of single-neuron models. These points, and the questions they raise, can advance our conceptual understanding of learning in the cortex and of the relationship between machine learning approaches to learning with hidden nodes and learning in cortical circuits.
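As a hedged illustration of what "learning with hidden variables" can mean in practice (this sketch is not taken from the review itself), one classic example is a restricted Boltzmann machine: a layer of hidden units learns to represent features of the input via a local learning rule, one-step contrastive divergence (CD-1). The layer sizes, toy data, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (assumption: RBM with CD-1 as one example of a
# hidden-variable learning rule; sizes and data are toy choices).
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 16, 8                     # illustrative layer sizes
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_vis = np.zeros(n_visible)                     # visible biases
b_hid = np.zeros(n_hidden)                      # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Toy "sensory input": random binary patterns standing in for image patches.
data = (rng.random((500, n_visible)) < 0.3).astype(float)

lr, epochs, batch = 0.05, 20, 50
for epoch in range(epochs):
    rng.shuffle(data)
    for i in range(0, len(data), batch):
        v0 = data[i:i + batch]
        # Positive phase: infer hidden features from the data.
        p_h0 = sigmoid(v0 @ W + b_hid)
        h0 = sample(p_h0)
        # Negative phase: one Gibbs step (CD-1 reconstruction).
        p_v1 = sigmoid(h0 @ W.T + b_vis)
        v1 = sample(p_v1)
        p_h1 = sigmoid(v1 @ W + b_hid)
        # Contrastive-divergence updates of weights and biases.
        W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / batch
        b_vis += lr * (v0 - v1).mean(axis=0)
        b_hid += lr * (p_h0 - p_h1).mean(axis=0)

# After training, each column of W is a "feature" detected by one hidden unit.
```

Deep architectures of the kind discussed in the review can be seen as stacking several such hidden layers, each learning features of the layer below; the review itself covers a broader range of learning rules, including ones for dynamical inputs.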

