Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings

by Yuhao Mao et al.

One intriguing property of adversarial attacks is their "transferability": an adversarial example crafted against one deep neural network (DNN) is often effective against other DNNs as well. This phenomenon has been studied intensively, but mostly under simplistic, controlled conditions; a comprehensive understanding of transferability-based attacks ("transfer attacks") in real-world environments is still lacking. To bridge this critical gap, we conduct the first large-scale systematic empirical study of transfer attacks against major cloud-based MLaaS platforms, taking all components of a real transfer attack into account. The study yields a number of findings that are inconsistent with existing ones, including: (1) simple surrogates do not necessarily improve real transfer attacks; (2) no dominant surrogate architecture emerges in real transfer attacks; (3) it is the gap between posteriors (outputs of the softmax layer), rather than the gap between logits (the so-called κ value), that increases transferability. Moreover, by comparing with prior work, we show that transfer attacks possess many previously unknown properties in real-world environments, such as: (1) model similarity is not a well-defined concept; (2) the L_2 norm of the perturbation can produce high transferability even without using gradients, and is a more powerful source of transferability than the L_∞ norm. We believe this work sheds light on the vulnerabilities of popular MLaaS platforms and points to a few promising research directions.
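To make the transferability setting concrete, here is a minimal sketch (not from the paper, and all names are hypothetical): two toy linear classifiers stand in for the surrogate and target DNNs, and an FGSM-style L_∞ perturbation computed only from the surrogate's gradient also flips the target's prediction.

```python
import numpy as np

# Hypothetical toy setup: two linear classifiers ("surrogate" and "target")
# with similar but not identical weights, standing in for two DNNs.
w_surrogate = np.array([1.0, 1.0])
w_target = np.array([0.9, 1.2])

def predict(w, x):
    # Binary decision: class 1 if the linear score w.x is positive, else 0.
    return int(w @ x > 0)

def fgsm_linf(w, x, eps):
    # FGSM-style L_inf perturbation computed from the *surrogate* gradient:
    # for a linear score w.x, the gradient w.r.t. x is w itself, so we step
    # eps * sign(w) in the direction that lowers the score of the true class.
    return x - eps * np.sign(w)

x = np.array([0.3, 0.2])                 # clean input, classified as 1 by both
x_adv = fgsm_linf(w_surrogate, x, eps=0.4)

print(predict(w_surrogate, x), predict(w_target, x))        # 1 1 on clean input
print(predict(w_surrogate, x_adv), predict(w_target, x_adv))  # 0 0: attack transfers
```

An attack crafted without ever querying the target succeeds here because the two decision boundaries are similar; the paper's point is that in real MLaaS settings this similarity, and hence which surrogate choices help, behaves quite differently than such controlled examples suggest.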




Related papers:

- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
- Sequential Transfer Machine Learning in Networks: Measuring the Impact of Data and Neural Net Similarity on Transferability
- Rethinking the Backward Propagation for Adversarial Transferability
- Query-Free Adversarial Transfer via Undertrained Surrogates
- TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness
- Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems
- Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
