Learning from Mistakes based on Class Weighting with Application to Neural Architecture Search

12/01/2021
by Jay Gala, et al.

Learning from mistakes is an effective approach widely used in human learning, where a learner pays greater attention to mistakes so as to avoid repeating them in the future, improving overall learning outcomes. In this work, we investigate how effectively this learning ability can be used to improve machine learning models as well. We propose a simple and effective multi-level optimization framework called learning from mistakes (LFM), inspired by mistake-driven learning, to train better machine learning models. The LFM framework is formulated as three learning stages. The primary objective is to train a model that performs effectively on target tasks by using a re-weighting technique to prevent similar mistakes in the future. In this formulation, we learn class weights by minimizing the validation loss of the model, and re-train the model on real data together with synthetic data from an image generator, weighted by class-wise performance. We apply our LFM framework to differentiable architecture search methods on image classification datasets such as CIFAR and ImageNet, where the results demonstrate the effectiveness of the proposed strategy.
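To make the class-weighting idea concrete, below is a minimal, simplified sketch of the error-driven re-weighting step in PyTorch. The helper names (per_class_error, class_weights_from_errors, retrain_with_weights) and the "1 + error rate" weighting rule are illustrative assumptions, not the paper's exact formulation; the full LFM framework learns the weights through multi-level optimization on the validation loss and also re-weights synthetic images from a generator, which this sketch omits.

```python
# Minimal sketch of class-wise re-weighting driven by validation mistakes.
# Assumes a classification model and standard (x, y) dataloaders.
import torch
import torch.nn.functional as F

NUM_CLASSES = 10  # e.g., CIFAR-10


def per_class_error(model, val_loader, num_classes=NUM_CLASSES):
    """Fraction of misclassified validation samples for each class."""
    errors = torch.zeros(num_classes)
    counts = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in val_loader:
            pred = model(x).argmax(dim=1)
            for c in range(num_classes):
                mask = y == c
                counts[c] += mask.sum()
                errors[c] += (pred[mask] != c).sum()
    return errors / counts.clamp(min=1)


def class_weights_from_errors(errors):
    """Give classes with more mistakes proportionally larger weights.

    Assumption: a simple 1 + error-rate rule, normalized so the
    weights average to 1; the paper instead learns these weights.
    """
    w = 1.0 + errors
    return w * len(w) / w.sum()


def retrain_with_weights(model, train_loader, weights, epochs=1, lr=1e-3):
    """Re-train with a class-weighted cross-entropy loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            loss = F.cross_entropy(model(x), y, weight=weights)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

A typical usage under these assumptions would alternate the two steps: measure per-class validation error, convert the errors to class weights, and re-train on the (real plus synthetic) training data with the weighted loss, so that classes the model currently gets wrong receive more emphasis in the next round.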


