How Adversarial Robustness Transfers from Pre-training to Downstream Tasks

08/07/2022
by   Laura Fee Nern, et al.

Given the rise of large-scale training regimes, adapting pre-trained models to a wide range of downstream tasks has become standard practice in machine learning. While large gains in empirical performance have been observed, it is not yet well understood how the robustness properties of a pre-trained model transfer to a downstream task. We prove that the adversarial robustness of a predictor on a downstream task can be bounded by the robustness of its underlying representation, irrespective of the pre-training protocol. Taken together, our results precisely characterize what is required of the representation function for reliable performance upon deployment.
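The flavor of the result can be illustrated with a minimal numerical sketch (not the paper's construction): if the downstream head is Lipschitz, any shift in the representation caused by an input perturbation bounds the shift in the prediction. The representation and head below are hypothetical stand-ins.

```python
import numpy as np

# Sketch: for an L-Lipschitz downstream head h on top of a fixed
# representation f, every input perturbation delta satisfies
#   ||h(f(x + delta)) - h(f(x))|| <= L * ||f(x + delta) - f(x)||,
# i.e. downstream robustness is controlled by representation robustness.

rng = np.random.default_rng(0)

W_f = rng.normal(size=(16, 8))      # stand-in for a pre-trained representation f
W_h = rng.normal(size=(8, 3))       # stand-in for a linear downstream head h

def f(x):
    return np.tanh(W_f.T @ x)       # representation function

def h(z):
    return W_h.T @ z                # linear head

# Lipschitz constant of the linear head = spectral norm of its weight matrix.
L = np.linalg.norm(W_h, ord=2)

x = rng.normal(size=16)
delta = 0.01 * rng.normal(size=16)  # small input perturbation

rep_shift = np.linalg.norm(f(x + delta) - f(x))
pred_shift = np.linalg.norm(h(f(x + delta)) - h(f(x)))

# The prediction can move at most L times as far as the representation did.
print(pred_shift <= L * rep_shift + 1e-12)
```

The bound holds for any perturbation, which is why it is independent of how the representation was pre-trained: only the robustness of f and the Lipschitz constant of the head enter.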


research · 03/09/2023 · Rethinking Visual Prompt Learning as Masked Visual Token Modeling
Prompt learning has achieved great success in efficiently exploiting lar...

research · 06/21/2022 · Insights into Pre-training via Simpler Synthetic Tasks
Pre-training produces representations that are effective for a wide rang...

research · 06/21/2023 · Task-Robust Pre-Training for Worst-Case Downstream Adaptation
Pre-training has achieved remarkable success when transferred to downstr...

research · 10/05/2021 · Exploring the Limits of Large Scale Pre-training
Recent developments in large-scale machine learning suggest that by scal...

research · 05/31/2023 · Diffused Redundancy in Pre-trained Representations
Representations learned by pre-training a neural network on a large data...

research · 09/16/2023 · The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated
Pre-trained language models trained on large-scale data have learned ser...

research · 05/31/2023 · Representation Reliability and Its Impact on Downstream Tasks
Self-supervised pre-trained models extract general-purpose representatio...
