Does Optimal Source Task Performance Imply Optimal Pre-training for a Target Task?

06/21/2021
by Steven Gutstein, et al.

Pre-trained deep nets are commonly used to improve accuracy and reduce training time for new networks. It is generally assumed that pre-training a net to optimal source task performance best prepares it to learn an arbitrary target task. We show that this is generally not true: stopping source task training before optimal performance is reached can produce a pre-trained net better suited to learning a new task. We performed several experiments demonstrating this effect, as well as the influence of the amount of training and of the learning rate. Additionally, we show that this reflects a general loss of learning ability that extends even to relearning the source task.
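The experimental protocol the abstract describes can be illustrated with a small sketch: pre-train a network on a source task, checkpoint it after increasing amounts of training, then fine-tune each checkpoint on a target task and compare. The sketch below is not the authors' code; it uses synthetic data and hypothetical helper functions (make_task, train, accuracy) purely to show the shape of such an experiment.

```python
# Hypothetical sketch (not the authors' setup): checkpoint a source-task net at
# several training durations, fine-tune each checkpoint on a target task, and
# compare whether the longest-trained source checkpoint transfers best.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=2000, dim=20, n_classes=4):
    """Synthetic classification task: random linear labelling of Gaussian inputs."""
    x = torch.randn(n, dim)
    w = torch.randn(dim, n_classes)
    y = (x @ w).argmax(dim=1)
    return x, y

def train(model, x, y, epochs, lr=1e-2):
    """Plain full-batch SGD training loop."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

dim, n_classes = 20, 4
src_x, src_y = make_task(dim=dim, n_classes=n_classes)   # source task
tgt_x, tgt_y = make_task(dim=dim, n_classes=n_classes)   # target task

model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

# Checkpoint the source-task net after increasing amounts of training.
checkpoints = {}
trained = 0
for stop in (5, 20, 80, 320):            # cumulative epochs of source training
    train(model, src_x, src_y, epochs=stop - trained)
    trained = stop
    checkpoints[stop] = copy.deepcopy(model)

# Fine-tune every checkpoint on the target task and compare outcomes.
for stop, ckpt in checkpoints.items():
    ft = train(copy.deepcopy(ckpt), tgt_x, tgt_y, epochs=50)
    print(f"source epochs={stop:4d}  "
          f"source acc={accuracy(ckpt, src_x, src_y):.3f}  "
          f"target acc after fine-tune={accuracy(ft, tgt_x, tgt_y):.3f}")
```

In the paper's framing, the interesting outcome is when an intermediate checkpoint (not the one with the best source accuracy) yields the best fine-tuned target accuracy; the same comparison can be run with the source task as the "target" to probe the loss of relearning ability mentioned above.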
