Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

10/31/2019
by Zuxuan Wu, et al.

We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors and ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify the transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical-world attacks using printed posters and wearable clothing, and rigorously quantify the performance of such attacks with different metrics.
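As a rough illustration of the attack the abstract describes, the sketch below trains a patch by driving a detector's objectness scores toward zero with gradient descent. It is not the authors' released code: the `detector` interface (a module assumed to return per-anchor objectness logits), the `apply_patch` placement, and all hyperparameters are assumptions made for illustration, and practical physical attacks additionally randomize patch position, scale, and lighting.

```python
# Minimal sketch of objectness-suppression patch training (illustrative only).
# Assumptions: `detector(images)` returns per-anchor objectness logits, and
# the patch is pasted at a fixed location; real attacks randomize placement.
import torch
import torch.nn.functional as F


def apply_patch(images: torch.Tensor, patch: torch.Tensor,
                x0: int = 50, y0: int = 50) -> torch.Tensor:
    """Paste the patch onto each image in the batch at a fixed location."""
    out = images.clone()
    _, ph, pw = patch.shape                     # patch: (3, ph, pw)
    out[:, :, y0:y0 + ph, x0:x0 + pw] = patch
    return out


def train_patch(detector, loader, steps: int = 1000, lr: float = 0.01,
                device: str = "cuda") -> torch.Tensor:
    # The patch is a learnable RGB image, constrained to [0, 1].
    patch = torch.rand(3, 100, 100, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for step, images in enumerate(loader):      # images: (B, 3, H, W) in [0, 1]
        if step == steps:
            break
        patched = apply_patch(images.to(device), patch.clamp(0, 1))
        # Assumed interface: objectness logits for all anchors; the loss
        # pushes every anchor toward the "no object" label.
        objectness = detector(patched)          # (B, num_anchors)
        loss = F.binary_cross_entropy_with_logits(
            objectness, torch.zeros_like(objectness))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                   # keep the patch printable
            patch.clamp_(0, 1)
    return patch.detach()
```

In this framing, attacking an ensemble of detectors, as the abstract mentions, would amount to summing the same suppression loss over several detector models before the backward pass.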
