Think Again Networks and the Delta Loss

04/26/2019
by Alexandre Salle, et al.

This short paper introduces an abstraction called Think Again Networks (ThinkNet), which can be applied to any state-dependent function (such as a recurrent neural network).
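As a rough illustration of that abstraction, the sketch below applies a generic state-dependent function f(x, s) to the same input several times, threading the state from one pass into the next; a single RNN pass over a sequence would play the role of f, and "thinking again" means re-running that pass seeded with the state it produced last time. The names (think_net, toy_pass, n_passes) and the simple state threading are assumptions made for illustration, not the paper's exact state mixing, and the Delta Loss is not reproduced here.

```python
from typing import Callable, List, TypeVar

X = TypeVar("X")
S = TypeVar("S")


def think_net(f: Callable[[X, S], S], x: X, s0: S, n_passes: int = 3) -> List[S]:
    """Apply the state-dependent function f to the same input n_passes times,
    feeding each pass the state produced by the previous pass."""
    states = [s0]
    for _ in range(n_passes):
        states.append(f(x, states[-1]))
    return states


def toy_pass(xs: List[float], s: float) -> float:
    # A trivial stand-in for one pass of an RNN: an exponential running
    # average over the sequence, seeded by the state from the previous pass.
    for v in xs:
        s = 0.5 * s + 0.5 * v
    return s


# Each additional pass refines the state produced by the one before it.
print(think_net(toy_pass, [1.0, 2.0, 3.0], n_passes=3, s0=0.0))
```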

Related research

04/26/2019  Think Again Networks, the Delta Loss, and an Application in Language Modeling
This short paper introduces an abstraction called Think Again Networks (...

10/31/2020  On the rate of convergence of a deep recurrent neural network estimate in a regression problem with dependent data
A regression problem with dependent data is considered. Regularity assum...

05/24/2016  Sequential Neural Models with Stochastic Layers
How can we efficiently propagate uncertainty in a latent state represent...

02/16/2015  DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural ...

07/06/2006  Modelling the Probability Density of Markov Sources
This paper introduces an objective function that seeks to minimise the a...

02/03/2016  Single-Solution Hypervolume Maximization and its use for Improving Generalization of Neural Networks
This paper introduces the hypervolume maximization with a single solutio...

08/16/2016  Authorship clustering using multi-headed recurrent neural networks
A recurrent neural network that has been trained to separately model the...
