Learning by Ignoring

12/28/2020
by Xingchen Zhao, et al.

Learning by ignoring, which identifies less important things and excludes them from the learning process, is an effective technique in human learning. Psychological studies have shown that learning to ignore certain things is a powerful tool for helping people focus. We are interested in investigating whether this learning technique can be borrowed from humans to improve the learning abilities of machines. We propose a novel learning approach called learning by ignoring (LBI). Our approach automatically identifies pretraining data examples that have a large domain shift from the target distribution by learning an ignoring variable for each example, and excludes them from the pretraining process. We formulate LBI as a three-level optimization framework involving three stages of learning: pretraining by minimizing the losses weighted by the ignoring variables; finetuning; and updating the ignoring variables by minimizing the validation loss. We develop an efficient algorithm to solve the LBI problem. Experiments on various datasets demonstrate the effectiveness of our method.
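The three-stage loop above can be sketched on a toy problem. The sketch below is a highly simplified stand-in for the paper's formulation: the model is one-dimensional linear regression, the pretraining and finetuning stages are collapsed into a single weighted pretraining step, and the third stage's hypergradient update of the ignoring variables is replaced by a gradient-alignment heuristic (raise an example's weight when its gradient agrees with the validation gradient, lower it when they conflict). All variable names and the update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pretraining set: the first half matches the target domain (y = 2x),
# the second half has a large domain shift (y = -2x).
X_pre = rng.normal(size=(100, 1))
y_pre = np.concatenate([2.0 * X_pre[:50, 0], -2.0 * X_pre[50:, 0]])

# Small validation set drawn from the target domain.
X_val = rng.normal(size=(30, 1))
y_val = 2.0 * X_val[:, 0]

a = np.ones(100)  # one ignoring variable per pretraining example

def pretrain(a, steps=300, lr=0.05):
    """Stage 1: minimize the pretraining loss weighted by the ignoring variables a."""
    w = np.zeros(1)
    for _ in range(steps):
        resid = X_pre @ w - y_pre
        w -= lr * (X_pre * (a * resid)[:, None]).mean(axis=0)
    return w

for _ in range(5):
    w = pretrain(a)
    # Stage 3 (heuristic stand-in for the validation-loss hypergradient step):
    # increase a_i when example i's gradient aligns with the validation
    # gradient, decrease it otherwise; clip so weights stay non-negative.
    grad_val = (X_val * (X_val @ w - y_val)[:, None]).mean(axis=0)
    grad_i = X_pre * (X_pre @ w - y_pre)[:, None]
    a = np.clip(a + 0.5 * (grad_i @ grad_val), 0.0, None)

# Domain-shifted examples (the second half) end up with smaller ignoring
# weights, so the final pretrained model fits the target slope.
print(a[:50].mean(), a[50:].mean(), w[0])
```

Even this crude alternating scheme drives the weights of the shifted examples toward zero, which is the qualitative behavior LBI aims for; the paper instead differentiates through the pretraining and finetuning stages to update the ignoring variables.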


