How Adversarial Robustness Transfers from Pre-training to Downstream Tasks

08/07/2022
by   Laura Fee Nern, et al.

Given the rise of large-scale training regimes, adapting pre-trained models to a wide range of downstream tasks has become a standard approach in machine learning. While large gains in empirical performance have been observed, it is not yet well understood how robustness properties transfer from a pre-trained model to a downstream task. We prove that the robustness of a predictor on downstream tasks can be bounded by the robustness of its underlying representation, irrespective of the pre-training protocol. Taken together, our results precisely characterize what is required of the representation function for reliable performance upon deployment.
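The abstract does not state the bound itself. As an illustrative sketch only (not the paper's theorem), one standard way such a bound can arise is when the downstream predictor composes the pre-trained representation \(\varphi\) with a Lipschitz head \(g\); the Lipschitz constant \(L_g\) below is an assumption introduced for illustration:

% Illustrative sketch, not the paper's result: a Lipschitz downstream head
% transfers the representation's robustness to the full predictor.
\[
  f = g \circ \varphi, \qquad
  \| f(x+\delta) - f(x) \| \;\le\; L_g \,\| \varphi(x+\delta) - \varphi(x) \|,
\]
% Hence, if the representation varies little under perturbations with
% \(\|\delta\| \le \epsilon\), the predictor inherits that robustness, scaled by L_g.

Under this assumption, controlling how much the representation moves under adversarial perturbations directly controls how much the downstream prediction can move, independent of how \(\varphi\) was pre-trained.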
