NASOA: Towards Faster Task-oriented Online Fine-tuning with a Zoo of Models

08/07/2021
by Hang Xu, et al.

Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks. Common practice is to adopt a default hyperparameter setting with a fixed pre-trained model, yet neither is optimized for the specific task or time constraint at hand. Moreover, in cloud computing or GPU clusters where tasks arrive sequentially in a stream, faster online fine-tuning is a more desirable and realistic strategy for saving money, energy, and CO2 emissions. In this paper, we propose a joint Neural Architecture Search and Online Adaption framework, named NASOA, towards faster task-oriented fine-tuning upon user request. Specifically, NASOA first performs an offline NAS to identify a group of training-efficient networks that form a pre-trained model zoo; we propose a novel joint block- and macro-level search space to enable a flexible and efficient search. Then, an online schedule generator estimates fine-tuning performance with an adaptive model that accumulates experience from past tasks, selects the most suitable model, and generates a personalized training regime for each requested task in a one-shot fashion. The resulting model zoo is more training-efficient than SOTA models, e.g. 6x faster than RegNetY-16GF and 1.7x faster than EfficientNetB3. Experiments on multiple datasets also show that NASOA achieves much better fine-tuning results, improving performance by around 2.1 points over the RegNet series under various constraints and tasks, while being 40x faster than BOHB.
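To make the online step concrete, below is a minimal sketch (not the authors' code) of the schedule-generation idea described in the abstract: an adaptive predictor, fitted on records of past fine-tuning runs, scores every (zoo model, training regime) candidate for a new task, and the best candidate within the user's time budget is returned in one shot. All names, the feature encoding, the cost model, and the use of a gradient-boosted regressor as the adaptive predictor are illustrative assumptions.

```python
from dataclasses import dataclass
from itertools import product
from typing import List, Sequence, Tuple

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for the adaptive predictor


@dataclass
class Candidate:
    model_idx: int      # index of the zoo model to fine-tune
    lr: float           # learning rate of the training regime
    epochs: int         # number of fine-tuning epochs
    est_minutes: float  # rough wall-clock estimate (assumed linear in epochs)


def build_candidates(zoo_size: int, n_images: int) -> List[Candidate]:
    """Enumerate a small grid of (model, regime) candidates for one task."""
    lrs, epoch_opts = (0.01, 0.003, 0.001), (10, 20, 40)
    return [Candidate(m, lr, ep, est_minutes=1e-3 * n_images * ep)
            for m, lr, ep in product(range(zoo_size), lrs, epoch_opts)]


def featurize(task_feats: Sequence[float], c: Candidate) -> List[float]:
    """Concatenate task descriptors with the candidate's settings."""
    return list(task_feats) + [c.model_idx, c.lr, c.epochs]


def fit_predictor(history: List[Tuple[Sequence[float], Candidate, float]]):
    """Fit the performance predictor on past (task, candidate, accuracy) runs,
    i.e. the accumulated experience from previously served tasks."""
    X = [featurize(t, c) for t, c, _ in history]
    y = [acc for _, _, acc in history]
    return GradientBoostingRegressor().fit(X, y)


def pick_schedule(predictor, candidates: List[Candidate],
                  task_feats: Sequence[float],
                  budget_minutes: float) -> Candidate:
    """One-shot selection: keep candidates within the time budget and return
    the one with the highest predicted fine-tuning accuracy."""
    feasible = [c for c in candidates if c.est_minutes <= budget_minutes]
    preds = predictor.predict([featurize(task_feats, c) for c in feasible])
    return feasible[int(np.argmax(preds))]
```

In this sketch the chosen `Candidate` fixes both the pre-trained model and its personalized regime before any fine-tuning starts, which is what makes the selection one-shot rather than an iterative hyperparameter search such as BOHB.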


