Trainable Projected Gradient Method for Robust Fine-tuning

03/19/2023
by Junjiao Tian et al.

Recent studies on transfer learning have shown that selectively fine-tuning a subset of layers or customizing different learning rates for each layer can greatly improve robustness to out-of-distribution (OOD) data and retain the generalization capability of the pre-trained model. However, most of these methods rely on manually crafted heuristics or expensive hyper-parameter searches, which prevents them from scaling up to large datasets and neural networks. To solve this problem, we propose the Trainable Projected Gradient Method (TPGM), which automatically learns the constraint imposed on each layer for fine-grained fine-tuning regularization. This is motivated by formulating fine-tuning as a bi-level constrained optimization problem. Specifically, TPGM maintains a set of projection radii, i.e., distance constraints between the fine-tuned model and the pre-trained model, one for each layer, and enforces them through weight projections. To learn these constraints, we propose a bi-level optimization scheme that finds the best set of projection radii in an end-to-end manner. Theoretically, we show that the bi-level formulation is the key to learning a different constraint for each layer. Empirically, with little hyper-parameter search cost, TPGM outperforms existing fine-tuning methods in OOD performance while matching the best in-distribution (ID) performance. For example, when fine-tuned on DomainNet-Real and ImageNet, TPGM shows 22% and 10% relative OOD improvement, respectively, on their sketch counterparts compared to vanilla fine-tuning. Code is available at <https://github.com/PotatoTian/TPGM>.
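To make the mechanics concrete, below is a minimal PyTorch-style sketch of the two ingredients the abstract describes: a per-layer projection of fine-tuned weights onto an L2 ball around the pre-trained weights, and a bi-level loop that learns each layer's radius from validation data. This is an illustrative reading of the abstract, not the authors' implementation; the function names (`project`, `bilevel_step`), the use of `torch.func.functional_call`, and the choice of an L2 ball are assumptions — the released code at the repository above is the reference.

```python
import torch


def project(w_ft, w_pre, radius, eps=1e-12):
    """Project fine-tuned weights onto an L2 ball of the given radius
    centered at the pre-trained weights (assumed form of the projection;
    the paper may use a different norm or update rule).

    Weights already inside the ball are left unchanged; weights outside
    are rescaled onto its surface. The scaling is differentiable in
    `radius`, which is what allows the radii to be learned by gradient
    descent in the outer loop.
    """
    delta = w_ft - w_pre
    scale = torch.clamp(radius / (delta.norm() + eps), max=1.0)
    return w_pre + scale * delta


def bilevel_step(model, pretrained, radii, train_batch, val_batch,
                 inner_opt, outer_opt, loss_fn):
    """One inner/outer iteration of a bi-level scheme (hypothetical
    structure inferred from the abstract).

    Inner: an ordinary fine-tuning update on training data.
    Outer: evaluate the *projected* model on validation data and
    back-propagate through the projection to update the per-layer radii.
    """
    # Inner step: unconstrained fine-tuning update of the model weights.
    # This zero_grad also clears any stale gradients the previous outer
    # step left on the model parameters.
    x, y = train_batch
    inner_opt.zero_grad()
    loss_fn(model(x), y).backward()
    inner_opt.step()

    # Outer step: apply the per-layer projections with the current radii
    # and compute the validation loss through them.
    projected = {
        name: project(p, pretrained[name], radii[name])
        for name, p in model.named_parameters()
    }
    x_val, y_val = val_batch
    outer_opt.zero_grad()
    out = torch.func.functional_call(model, projected, (x_val,))
    loss_fn(out, y_val).backward()  # gradients reach the radii via project()
    outer_opt.step()

    # Commit the projection so the model stays inside the learned balls.
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(project(p, pretrained[name], radii[name]))
```

In this sketch, `pretrained` is a frozen copy of the initial weights (e.g. `{n: p.detach().clone() for n, p in model.named_parameters()}`), and `radii` is a dict of learnable scalars optimized by `outer_opt`, e.g. `torch.optim.Adam(radii.values())`. The design point the abstract emphasizes is that the soft projection makes the validation loss differentiable with respect to each layer's radius, so per-layer constraints are learned end-to-end rather than found by manual hyper-parameter search.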


