Meta Learning Backpropagation And Improving It
Many concepts have been proposed for meta learning with neural networks (NNs), e.g., NNs that learn to control fast weights, hypernetworks, learned learning rules, and meta recurrent neural networks (Meta RNNs). Our Variable Shared Meta Learning (VS-ML) unifies the above and demonstrates that simple weight sharing and sparsity in an NN are sufficient to express powerful learning algorithms. A simple implementation of VS-ML, the Variable Shared Meta RNN, implements the backpropagation learning algorithm solely by running an RNN in forward mode. It can even meta-learn new learning algorithms that improve upon backpropagation, generalizing to different datasets without explicit gradient calculation.
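To illustrate the abstract's core claim that one small, shared rule can express a learning algorithm, here is a minimal toy sketch: a single scalar update function is applied identically at every connection of a linear layer, and that shared rule alone reproduces the delta rule (which coincides with backpropagation for a one-layer network and squared error). This is not the authors' VS-ML implementation; the rule here is hand-set rather than meta-learned, does not run inside an RNN's forward dynamics, and the function name `shared_update`, the toy regression data, and the learning rate are all assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's code): one tiny update rule, shared
# across every connection, expresses the delta-rule/backprop update for a
# single linear layer. In VS-ML this rule would itself be meta-learned.
import numpy as np

rng = np.random.default_rng(0)

def shared_update(w, pre, err, lr=0.05):
    """The same scalar rule applied at every connection (weight sharing).
    Hand-set to the delta rule here for the sake of the illustration."""
    return w + lr * pre * err

# Toy regression task: recover y = X @ W_true with a single linear layer.
X = rng.normal(size=(64, 3))
W_true = rng.normal(size=(3, 2))
Y = X @ W_true

W = np.zeros((3, 2))  # "fast weights" updated only through shared_update
for epoch in range(200):
    for x, y in zip(X, Y):
        pred = x @ W          # forward pass
        err = y - pred        # error signal available locally at the output
        # Apply the *same* rule at every (i, j) connection:
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                W[i, j] = shared_update(W[i, j], x[i], err[j])

print("final mean squared error:", np.mean((X @ W - Y) ** 2))
```

The nested loop is deliberately explicit to show that every connection runs the identical rule on purely local quantities (its weight, its presynaptic input, and the error at its output unit); a vectorized outer-product update would compute the same thing.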