LQF: Linear Quadratic Fine-Tuning

12/21/2020
by Alessandro Achille, et al.

Classifiers that are linear in their parameters, and trained by optimizing a convex loss function, have predictable behavior with respect to changes in the training data, initial conditions, and optimization. Such desirable properties are absent in deep neural networks (DNNs), which are typically trained by non-linear fine-tuning of a pre-trained model. Previous attempts to linearize DNNs have led to interesting theoretical insights, but have not impacted practice, owing to the substantial performance gap relative to standard non-linear optimization. We present the first method for linearizing a pre-trained model that achieves performance comparable to non-linear fine-tuning on most of the real-world image classification tasks tested, thus enjoying the interpretability of linear models without incurring punishing losses in performance. LQF consists of simple modifications to the architecture, loss function, and optimization typically used for classification: Leaky-ReLU instead of ReLU, mean squared error loss instead of cross-entropy, and pre-conditioning using Kronecker factorization. None of these changes in isolation is sufficient to approach the performance of non-linear fine-tuning. Used in combination, they reach comparable performance, and even superior performance in the low-data regime, while retaining the simplicity, robustness, and interpretability of linear-quadratic optimization.
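
To make the three ingredients above concrete, here is a minimal sketch of linearized (tangent-model) fine-tuning: the network is expanded to first order around the pre-trained weights, ReLUs are swapped for leaky ReLUs, and the loss is mean squared error against one-hot targets. This is not the authors' implementation: the resnet18 backbone, class count, learning rate, and plain gradient-descent update are illustrative placeholders, the Kronecker-factored pre-conditioning is omitted for brevity, and PyTorch >= 2.0 (torch.func) plus torchvision are assumed.

```python
import torch
import torch.nn.functional as F
from torch import nn
from torch.func import functional_call, jvp, grad_and_value
from torchvision.models import resnet18

NUM_CLASSES = 10  # placeholder number of target classes


def make_backbone():
    model = resnet18(weights="IMAGENET1K_V1")
    # Swap ReLU for leaky ReLU so the network is better behaved under linearization.
    for module in model.modules():
        for name, child in module.named_children():
            if isinstance(child, nn.ReLU):
                setattr(module, name, nn.LeakyReLU(0.01))
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model


model = make_backbone().eval()
w0 = {k: v.detach() for k, v in model.named_parameters()}    # pre-trained point
buffers = {k: v.detach() for k, v in model.named_buffers()}
delta = {k: torch.zeros_like(v) for k, v in w0.items()}      # trainable offset w - w0


def net(params, x):
    return functional_call(model, {**params, **buffers}, (x,))


def loss_fn(delta, x, y):
    # First-order Taylor expansion around the pre-trained weights w0:
    #   f_lin(x; w0 + delta) = f(x; w0) + J_w f(x; w0) @ delta
    out0, jvp_out = jvp(lambda p: net(p, x), (w0,), (delta,))
    logits = out0 + jvp_out
    # Mean squared error against one-hot targets instead of cross-entropy,
    # which makes the objective quadratic in delta.
    target = F.one_hot(y, NUM_CLASSES).float()
    return ((logits - target) ** 2).mean()


def training_step(x, y, lr=1e-4):
    # Reverse-over-forward differentiation; plain gradient descent stands in
    # for the pre-conditioned update used in the paper.
    grads, loss = grad_and_value(loss_fn)(delta, x, y)
    for k in delta:
        delta[k] = delta[k] - lr * grads[k]
    return loss.item()


# Dummy usage with a random batch.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
print(training_step(x, y))
```

Because the output is linear in delta and the loss is quadratic, the resulting objective is convex, which is what gives the linearized model its predictable behavior; the paper additionally pre-conditions this quadratic problem using a Kronecker-factored approximation, which the plain update above does not attempt to reproduce.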

