Learning to Learn Neural Networks

10/19/2016
by Tom Bosc, et al.

Meta-learning consists of learning learning algorithms. We use a Long Short-Term Memory (LSTM) network to learn to compute online updates of the parameters of another neural network; these parameters are stored in the cell state of the LSTM. Our framework makes it possible to compare learned algorithms to hand-crafted algorithms within the traditional train-and-test methodology. In an experiment, we learn a learning algorithm for a one-hidden-layer Multi-Layer Perceptron (MLP) on non-linearly separable datasets. The learned algorithm updates the parameters of both layers and generalises well to similar datasets.
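The core idea, parameters of the learner living in the cell state of an optimizer LSTM, can be sketched in a few lines. The snippet below is a minimal illustrative assumption, not the paper's exact architecture: it uses a coordinate-wise LSTM whose gate weights are shared across all parameters, reads one gradient per coordinate, and rewrites the cell state (i.e. the parameters) at each step. All names (`LSTMOptimizer`, `step`, the single shared weight matrix `W`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMOptimizer:
    """Coordinate-wise LSTM whose cell state c holds the learner's
    parameters; each step reads a gradient and rewrites c.
    Illustrative sketch only -- the 1-unit-per-coordinate layout and
    shared gates are assumptions, not the paper's architecture."""

    def __init__(self, rng):
        # One shared set of gate weights, applied to every coordinate.
        # Each gate sees [gradient, hidden state, 1 (bias)].
        self.W = rng.normal(scale=0.1, size=(4, 3))  # rows: i, f, o, g

    def init_state(self, params):
        # Store the learner's parameters in the cell state.
        return params.copy(), np.zeros_like(params)  # (c, h)

    def step(self, grad, state):
        c, h = state
        x = np.stack([grad, h, np.ones_like(grad)])  # shape (3, n)
        i = sigmoid(self.W[0] @ x)   # input gate
        f = sigmoid(self.W[1] @ x)   # forget gate
        o = sigmoid(self.W[2] @ x)   # output gate
        g = np.tanh(self.W[3] @ x)   # candidate update
        c = f * c + i * g            # updated parameters
        h = o * np.tanh(c)
        return c, (c, h)
```

In a meta-learning setup, the gate weights `W` would themselves be trained (e.g. by backpropagating the learner's loss through the unrolled optimization), so the LSTM learns an update rule; note that a forget gate near 1 and an input gate near 0 recover the identity update, leaving the stored parameters unchanged.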


