Generating meta-learning tasks to evolve parametric loss for classification learning

11/20/2021
by Zhaoyang Hai, et al.

The field of meta-learning has seen a dramatic rise in interest in recent years. In existing meta-learning approaches, the learning tasks used to train meta-models are usually collected from public datasets, which makes it difficult to obtain a sufficient number of meta-learning tasks with a large amount of training data. In this paper, we propose a meta-learning approach based on randomly generated meta-learning tasks to obtain a parametric loss for classification learning based on big data. The loss is represented by a deep neural network, called the meta-loss network (MLN). To train the MLN, we construct a large number of classification learning tasks by randomly generating training data, validation data, and a corresponding ground-truth linear classifier. Our approach has two advantages. First, sufficient meta-learning tasks with large amounts of training data can be obtained easily. Second, the ground-truth classifier is given, so the difference between the learned classifier and the ground-truth model can be measured, reflecting the performance of the MLN more precisely than validation accuracy does. Based on this difference, we apply an evolutionary strategy algorithm to find the optimal MLN. The resulting MLN not only produces satisfactory learning results on generated linear-classifier learning tasks held out for testing, but also performs well on generated nonlinear-classifier learning tasks and on various public classification tasks. Our MLN consistently surpasses cross-entropy (CE) and mean squared error (MSE) in test accuracy and generalization ability. These results illustrate the possibility of achieving satisfactory meta-learning effects using generated learning tasks.
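
To make the task-generation step concrete, the following is a minimal sketch (not the authors' code) of how one such task could be produced: Gaussian inputs are labeled by a randomly drawn ground-truth linear classifier. The function name, input distribution, and task sizes are all assumptions chosen for illustration.

```python
import torch

def generate_task(n_train=256, n_val=128, dim=8, n_classes=2):
    # Hypothetical sketch: the paper's exact input distribution and
    # task sizes are not given here and may differ.
    W = torch.randn(dim, n_classes)        # ground-truth weights
    b = torch.randn(n_classes)             # ground-truth bias
    X_tr, X_va = torch.randn(n_train, dim), torch.randn(n_val, dim)
    y_tr = (X_tr @ W + b).argmax(dim=1)    # labels come from the ground truth
    y_va = (X_va @ W + b).argmax(dim=1)
    return (X_tr, y_tr), (X_va, y_va), (W, b)
```

Because the ground-truth (W, b) is returned alongside the data, a learned classifier can later be scored against it directly rather than only through validation accuracy, which is the second advantage the abstract points to.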

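The outer loop can likewise be sketched, reusing generate_task from above. The abstract specifies only that a meta-loss network is trained with an evolutionary strategy using the learned-vs-ground-truth difference as the signal; the two-layer MLN architecture, the OpenAI-style ES update, and the raw parameter-space L2 distance below are assumptions made for the sake of a runnable illustration, not the paper's exact method.

```python
import torch
import torch.nn as nn

class MetaLossNetwork(nn.Module):
    """Sketch of an MLN: maps (softmax output, one-hot label) pairs to a
    scalar loss. The two-layer MLP architecture is an assumption."""
    def __init__(self, n_classes=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())   # keep the loss non-negative

    def forward(self, logits, y):
        p = torch.softmax(logits, dim=1)
        onehot = nn.functional.one_hot(y, p.shape[1]).float()
        return self.net(torch.cat([p, onehot], dim=1)).mean()

def inner_train(mln, X, y, n_classes, steps=50, lr=0.5):
    # Train a linear classifier by gradient descent on the MLN's loss;
    # gradients flow through the (frozen) MLN to the classifier weights.
    W = torch.zeros(X.shape[1], n_classes, requires_grad=True)
    b = torch.zeros(n_classes, requires_grad=True)
    opt = torch.optim.SGD([W, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mln(X @ W + b, y).backward()
        opt.step()
    return W.detach(), b.detach()

def fitness(theta, mln, task):
    # Score a candidate MLN parameter vector by how close the classifier it
    # trains comes to the ground truth; plain L2 distance is one stand-in
    # for the paper's learned-vs-ground-truth difference.
    torch.nn.utils.vector_to_parameters(theta, mln.parameters())
    (X_tr, y_tr), _, (W_true, b_true) = task
    W, b = inner_train(mln, X_tr, y_tr, W_true.shape[1])
    return -(torch.norm(W - W_true) + torch.norm(b - b_true)).item()

def es_step(theta, fitness_fn, pop=16, sigma=0.1, alpha=0.05):
    # One OpenAI-style evolution-strategies update on the flat MLN
    # parameter vector (a stand-in for the paper's ES variant).
    eps = torch.randn(pop, theta.numel())
    f = torch.tensor([fitness_fn(theta + sigma * e) for e in eps])
    f = (f - f.mean()) / (f.std() + 1e-8)          # normalize fitnesses
    return theta + alpha / (pop * sigma) * (f[:, None] * eps).sum(0)

mln = MetaLossNetwork()
for p in mln.parameters():
    p.requires_grad_(False)                        # ES, not backprop, updates the MLN
theta = torch.nn.utils.parameters_to_vector(mln.parameters())
for step in range(100):
    task = generate_task()                         # fresh random task each generation
    theta = es_step(theta, lambda t: fitness(t, mln, task))
```

In this sketch each ES generation draws a fresh random task, so the MLN is never fit to any fixed dataset; this mirrors the abstract's point that generated tasks are available in effectively unlimited supply.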

Related research

03/15/2021
Evolving parametrized Loss for Image Classification Learning on Small Datasets
This paper proposes a meta-learning approach to evolving a parametrized ...

01/26/2023
Invariant Meta Learning for Out-of-Distribution Generalization
Modern deep learning techniques have illustrated their excellent capabil...

02/11/2023
A large parametrized space of meta-reinforcement learning tasks
We describe a parametrized space for simple meta-reinforcement-learning ...

05/13/2023
DAC-MR: Data Augmentation Consistency Based Meta-Regularization for Meta-Learning
Meta learning recently has been heavily researched and helped advance th...

04/09/2023
Theoretical Characterization of the Generalization Performance of Overfitted Meta-Learning
Meta-learning has arisen as a successful method for improving training p...

10/14/2022
Meta Transferring for Deblurring
Most previous deblurring methods were built with a generic model trained...

10/12/2020
How Important is the Train-Validation Split in Meta-Learning?
Meta-learning aims to perform fast adaptation on a new task through lear...
