The vulnerability of deep neural networks (DNNs) to adversarial attacks (namely, perturbed inputs that deceive DNNs) has been found in applications spanning from image classification to speech recognition [1, 2, 3, 4, 5, 6, 7]. Early works studied adversarial examples in the digital space only. Recently, some works showed that it is possible to create adversarial perturbations on physical objects and fool DNN-based decision makers under a variety of real-world conditions [8, 9, 10, 11, 12, 13, 14, 15, 16]. However, most of the studied physical adversarial attacks encounter two limitations: a) the physical objects are usually considered to be static, and b) the possible deformation of the adversarial pattern attached to a moving object (e.g., due to pose changes of a moving person) is commonly neglected. In this paper, we propose a new type of physical adversarial attack, the adversarial T-shirt, to evade a real-time person detector when worn by a moving person; see the second and third rows of Figure 1 for illustrative examples.
Most of the existing physical adversarial attacks were generated against image classifiers and object detectors. In one line of work, a face recognition system is fooled by a real eyeglass frame designed with a crafted adversarial pattern. In another, a stop sign is misclassified by an image classification system after black or white stickers are added to it. In other work, an image classifier is fooled by placing a crafted sticker on the lens of a camera. Further, a so-called Expectation over Transformation (EoT) framework was proposed to synthesize adversarial examples robust to a set of physical transformations such as rotation, translation, contrast, brightness, and random noise. Compared to attacking image classifiers, generating physical adversarial attacks against object detectors is more challenging, since the adversary is required to mislead both the bounding-box detector and the object classifier. A well-known success is the generation of an adversarial stop sign, which deceives state-of-the-art object detectors such as YOLOv2 and Faster R-CNN.
The most relevant work to ours is one in which a person detector is fooled when the person holds a cardboard plate printed with an adversarial patch. However, such a physical attack restricts the adversarial patch to a rigid carrier (cardboard) and does not directly apply to the design of an adversarial T-shirt. We show that this attack becomes ineffective when the adversarial patch is attached to a T-shirt (rather than a cardboard) and worn by a moving person (see the fourth row of Figure 1). On the technical side, different from prior work, we propose a thin plate spline (TPS) based transformer to model the deformation effect of a non-rigid object, and we develop an ensemble physical attack that fools the object detectors YOLOv2 and Faster R-CNN simultaneously. We highlight that the proposed adversarial T-shirt is not just a T-shirt with a printed adversarial patch made for fashion; it is a physical adversarial wearable designed to evade person detectors in the real world.
Our work is also motivated by the importance of person detection in intelligent surveillance. DNN-based surveillance systems have significantly advanced the field of object detection [19, 20]. Efficient object detectors such as Faster R-CNN, SSD, and YOLOv2 have been deployed for human detection. Thus, one may wonder whether there exists a security risk for intelligent surveillance systems caused by adversarial human wearables, e.g., an adversarial T-shirt. However, paralyzing a person detector in the physical world faces substantially more challenges, such as low resolution, pose changes, and occlusion.
We summarize our contributions as follows.
We develop a TPS-based transformer to model the temporal deformation of an adversarial T-shirt caused by the pose changes of a moving person. We also show its importance for ensuring the effectiveness of the adversarial T-shirt in the physical world.
We propose a general optimization framework for the design of adversarial T-shirts in both single-detector and multiple-detector settings.
We conduct experiments in both the digital and physical worlds and show that the proposed adversarial T-shirt achieves 79% and 63% attack success rates in the digital and physical worlds, respectively, when attacking YOLOv2. By contrast, the physical adversarial patch printed on a T-shirt achieves only a 27% attack success rate. Some of our results are highlighted in Figure 1.
2 Modeling Deformation of A Moving Object by Thin Plate Spline Mapping
In this section, we begin by reviewing some existing transformations required in the design of physical adversarial examples. We then elaborate on Thin Plate Spline (TPS) mapping used to model the possible deformation encountered by a moving and non-rigid object.
Let $x$ be an original image (or a video frame), and let $t(\cdot)$ be a physical transformer. The transformed image under $t$ is given by

$$x' = t(x). \tag{1}$$
In prior work, the geometry and lighting transformations are studied via parametric models. Other transformations, including perspective transformation, brightness adjustment, resampling (image resizing), smoothing and saturation, are considered in [23, 24]. All of these existing transformations are included in our library of physical transformations. However, they are not sufficient to model the cloth deformation caused by the pose changes of a moving person. For example, the first and fourth rows of Figure 1 show that adversarial T-shirts designed against only the existing physical transformations yield low attack success rates.
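As a rough sketch (not the paper's implementation; the parameter ranges below are illustrative assumptions), each conventional transformation in such a library can be represented as a randomly parameterized image-to-image function:

```python
import numpy as np

def sample_conventional_transform(rng):
    """Draw one conventional physical transform (brightness, contrast, or
    additive noise) with random parameters; returns a function acting on
    images with pixel values in [0, 1]."""
    kind = rng.choice(["brightness", "contrast", "noise"])
    if kind == "brightness":
        b = rng.uniform(-0.1, 0.1)                 # additive brightness shift
        return lambda img: np.clip(img + b, 0.0, 1.0)
    if kind == "contrast":
        c = rng.uniform(0.8, 1.2)                  # contrast scaling about 0.5
        return lambda img: np.clip((img - 0.5) * c + 0.5, 0.0, 1.0)
    s = rng.uniform(0.0, 0.05)                     # Gaussian pixel noise level
    return lambda img: np.clip(img + rng.normal(0.0, s, img.shape), 0.0, 1.0)
```

At training time, one such transform would be re-sampled for every frame so that the learned pattern survives the whole family of transformations.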
TPS transformation for cloth deformation.
A person's movement can result in significant and constantly changing wrinkles (i.e., deformations) in her clothes. This makes it challenging to develop an effective adversarial T-shirt in the real world. To address this challenge, we employ TPS mapping to model the cloth deformation caused by human body movement. TPS has been widely used as a non-rigid transformation model in image alignment and shape matching. It consists of an affine component and a non-affine warping component. We will show that the non-linear warping part of TPS provides an effective means of modeling cloth deformation for learning adversarial patterns on non-rigid objects.
TPS learns a parametric deformation mapping from an original image $U$ to a target image $V$ through a set of control points with given positions. Let $p = (x, y)$ denote the 2D location of an image pixel. The deformation from $U$ to $V$ is then characterized by the displacement of every pixel, namely, how a pixel at $p$ on image $U$ maps to the pixel on image $V$ at $(x + \Delta_x(p),\, y + \Delta_y(p))$, where $\Delta_x(p)$ and $\Delta_y(p)$ denote the pixel displacement along the $x$ direction and the $y$ direction, respectively. Given a set of $n$ control points with locations $\{\hat{p}_i = (\hat{x}_i, \hat{y}_i)\}_{i=1}^n$ on image $U$, TPS provides a parametric model of pixel displacement when mapping $U$ to $V$:

$$\Delta(p) = a_0 + a_1 x + a_2 y + \sum_{i=1}^{n} c_i\, \phi(\| p - \hat{p}_i \|), \qquad \phi(r) = r^2 \log r, \tag{2}$$

where $\{a_j\}_{j=0}^{2}$ and $\{c_i\}_{i=1}^{n}$ are the TPS parameters, and $\Delta(p)$ represents the displacement along either the $x$ or the $y$ direction.

To determine the parameters in (2), TPS resorts to a regression model given the locations of the control points on the transformed image $V$, namely, $\{\tilde{p}_i = (\tilde{x}_i, \tilde{y}_i)\}_{i=1}^n$. This yields a regression problem which minimizes the distance between $\Delta_x(\hat{p}_i)$ and $\tilde{x}_i - \hat{x}_i$, and the distance between $\Delta_y(\hat{p}_i)$ and $\tilde{y}_i - \hat{y}_i$, respectively. Here TPS (2) is applied to the $x$ and $y$ coordinates separately (with corresponding parameters $(a^{(x)}, c^{(x)})$ and $(a^{(y)}, c^{(y)})$). The regression problem can be solved by the following linear system of equations:

$$\begin{bmatrix} K & P \\ P^\top & 0 \end{bmatrix} \begin{bmatrix} c \\ a \end{bmatrix} = \begin{bmatrix} v \\ 0 \end{bmatrix}, \tag{3}$$

where the $(i,j)$th element of $K$ is given by $\phi(\| \hat{p}_i - \hat{p}_j \|)$, the $i$th row of $P$ is given by $(1, \hat{x}_i, \hat{y}_i)$, and the $i$th element of $v$ is the target displacement at $\hat{p}_i$ along the corresponding coordinate, i.e., $\tilde{x}_i - \hat{x}_i$ or $\tilde{y}_i - \hat{y}_i$.
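The TPS regression above can be sketched as follows, assuming NumPy is available; `tps_fit` solves the linear system for one coordinate's displacement and `tps_eval` evaluates the fitted displacement model (both function names are ours, not the paper's):

```python
import numpy as np

def _phi(d):
    """Radial kernel phi(r) = r^2 log r, with phi(0) = 0 by convention."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0, d ** 2 * np.log(d), 0.0)

def tps_fit(src_pts, dst_vals):
    """Fit TPS parameters (c, a) for one coordinate's displacement.
    src_pts:  (n, 2) control-point locations on the anchor image
    dst_vals: (n,)   target displacement at each control point
    Solves [[K, P], [P^T, 0]] [c; a] = [v; 0]."""
    n = src_pts.shape[0]
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    K = _phi(d)
    P = np.hstack([np.ones((n, 1)), src_pts])   # i-th row: (1, x_i, y_i)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.concatenate([dst_vals, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                     # c (warping), a (affine)

def tps_eval(p, src_pts, c, a):
    """Displacement Delta(p) = a0 + a1*x + a2*y + sum_i c_i phi(||p - p_i||)."""
    r = np.linalg.norm(src_pts - p, axis=-1)
    return a[0] + a[1] * p[0] + a[2] * p[1] + _phi(r) @ c
```

Since TPS interpolates exactly at the control points, the fitted model reproduces the given displacements there while smoothly warping the rest of the plane.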
The difficulty of implementing TPS for the design of an adversarial T-shirt is how to determine the set of control points and obtain their positions in both the original and target images. Spurred by work on camera calibration, we print a checkerboard on a T-shirt and use it to collect control points and their positions between two video frames. In practice, we select one frame as the anchor frame and then generate the TPS mapping from the other frames. Figure 2 shows the T-shirt with the checkerboard pattern, where each intersection between two checkerboard grid regions is selected as a control point. We remark that the considered control points can be accurately detected using the Matlab vision toolbox, and the videos used to generate TPS transformations are independent of the testing data used to evaluate the proposed adversarial T-shirt.
3 Generation of Adversarial T-shirt: An Optimization Perspective
In this section, we begin by formalizing the problem of the adversarial T-shirt and introducing the notation used in our setup. We then propose to design a universal perturbation for the adversarial T-shirt that deceives a single object detector. Lastly, we propose a min-max (robust) optimization framework to design a universal adversarial patch against multiple object detectors.
Let $\{x_i\}_{i=1}^{n}$ denote the video frames extracted from one or multiple given videos, where $x_i$ denotes the $i$th frame. Let $\delta$ denote the universal adversarial perturbation applied to every frame. The adversarial T-shirt is then characterized by $M_{c,i} \circ \delta$, where $M_{c,i}$ is a bounding-box mask encoding the position of the cloth region to be perturbed at the $i$th frame, and $\circ$ denotes the element-wise product. The goal of the adversarial T-shirt is to design $\delta$ such that the perturbed frames $\{x_i'\}$ are mis-detected by object detectors.
Fooling a single object detector.
We generalize the Expectation over Transformation (EoT) method for the design of the adversarial T-shirt. Note that, different from conventional EoT, a composition of transformers is required for generating the adversarial T-shirt. For example, a perspective transformation on the bounding box of the T-shirt is composed with a TPS transformation on the perturbed cloth region.
Let us begin by considering two video frames, an anchor image $x_0$ (e.g., the first frame in the video) and a target image $x_i$ for $i \in [n]$, where $[n]$ denotes the integer set $\{1, 2, \ldots, n\}$. Given the bounding boxes of the person ($M_{p,0}$) and the T-shirt ($M_{c,0}$) at $x_0$, we apply the perspective transformation from $x_0$ to $x_i$ to obtain the bounding boxes $M_{p,i}$ and $M_{c,i}$ at image $x_i$. In the absence of physical transformations, the perturbed image with respect to (w.r.t.) $x_i$ is given by

$$x_i' = (1 - M_{p,i}) \circ x_i + M_{p,i} \circ x_i - M_{c,i} \circ x_i + M_{c,i} \circ \delta, \tag{4}$$
where the first term denotes the background region outside the bounding box of the person, the second term is the person-bounded region, the third term erases the pixel values within the bounding box of the T-shirt, and the last term is the newly introduced additive perturbation. Without taking physical transformations into account, Eq. (4) simply reduces to the conventional formulation of adversarial examples.
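The perspective mapping of bounding boxes between frames can be sketched as below, assuming the 3×3 homography between the two frames has already been estimated (the helper name `warp_bbox` is illustrative):

```python
import numpy as np

def warp_bbox(H, bbox):
    """Map an axis-aligned bbox (x0, y0, x1, y1) from the anchor frame to a
    target frame under a 3x3 homography H, then re-fit an axis-aligned box
    around the four warped corners."""
    x0, y0, x1, y1 = bbox
    corners = np.array([[x0, y0, 1.0], [x1, y0, 1.0],
                        [x0, y1, 1.0], [x1, y1, 1.0]]).T
    warped = H @ corners
    warped = warped[:2] / warped[2]            # perspective divide
    return (warped[0].min(), warped[1].min(),
            warped[0].max(), warped[1].max())
```

A pure translation or identity homography leaves the box shape unchanged; a general perspective warp tilts the corners, and the enclosing axis-aligned box is re-fitted.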
We next consider two categories of physical transformations: a) the TPS transformation $t_{\mathrm{TPS}} \in \mathcal{T}_{\mathrm{TPS}}$, applied to the adversarial perturbation to model the effect of cloth deformation; and b) a conventional physical transformation $t \in \mathcal{T}$, applied to the region within the person's bounding box, namely, $M_{p,i} \circ x_i$. Here $\mathcal{T}_{\mathrm{TPS}}$ denotes the set of possible non-rigid transformations, and $\mathcal{T}$ denotes the set of commonly-used physical transformations, e.g., scaling, translation, rotation, brightness, blurring and contrast. A modification of (4) under these different sources of transformations is then given by

$$x_i' = t_{\mathrm{env}}\big( (1 - M_{p,i}) \circ x_i + M_{p,i} \circ t( x_i - M_{c,i} \circ x_i + M_{c,i} \circ \delta' ) \big), \qquad \delta' = t_{\mathrm{TPS}}(\delta + \mu v), \tag{5}$$
where $v$ is an additive random noise of strength $\mu$ that allows the variation of pixel values, e.g., due to the mismatch between the digital color and the printed color, $t_{\mathrm{env}}$ is a transformer modelling the environmental condition (we set it as a brightness transformation in practice), and $t$ denotes a transformer applied to the image region characterized by the person's bounding box.
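A minimal sketch of this composition, with the transformers passed in as plain image-to-image functions and the masks as element-wise 0/1 arrays (identity functions stand in for the actual perspective, environmental, and TPS transformers):

```python
import numpy as np

def compose_adv_frame(x, delta, m_person, m_cloth,
                      t_person, t_env, t_tps, mu=0.0, rng=None):
    """Sketch of the transformed perturbed frame:
        delta' = t_tps(delta + mu * v),  v ~ U(-1, 1)  (printer-color noise)
        x'     = t_env((1 - M_p)*x + M_p * t(x - M_c*x + M_c*delta'))
    t_person, t_env, and t_tps are image-to-image functions; m_person and
    m_cloth are 0/1 masks of the same shape as x."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.uniform(-1.0, 1.0, size=np.shape(delta))
    delta_p = t_tps(delta + mu * v)
    # Erase the cloth region, paste the (deformed) perturbation, transform.
    person_region = t_person(x - m_cloth * x + m_cloth * delta_p)
    return t_env((1.0 - m_person) * x + m_person * person_region)
```

With all transformers set to the identity and mu = 0, this reduces exactly to the untransformed composition of Eq. (4).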
With the aid of (5), the EoT formulation to fool a single object detector is cast as

$$\min_{\delta \in \mathcal{C}} \; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{t_{\mathrm{TPS}} \in \mathcal{T}_{\mathrm{TPS}},\, t \in \mathcal{T},\, v} \left[ f(x_i') \right] + \lambda\, g(\delta), \tag{6}$$
where $f$ denotes an attack loss for misdetection, $g$ is the total-variation norm that enhances the perturbation's smoothness, $\lambda \ge 0$ is a regularization parameter, and $\mathcal{C}$ signifies additional constraints that $\delta$ should obey, e.g., a discrete set of printable color options for generating physical adversarial examples.
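A Monte-Carlo sketch of this EoT objective, with hypothetical stand-ins for the detector loss and the frame-composition step; the total-variation term here is the standard anisotropic TV:

```python
import numpy as np

def tv_norm(delta):
    """Anisotropic total variation: sum of absolute differences between
    neighboring pixels, encouraging smooth (printable) perturbations."""
    return np.abs(np.diff(delta, axis=0)).sum() + np.abs(np.diff(delta, axis=1)).sum()

def eot_objective(delta, frames, sample_transform, compose, attack_loss,
                  lam=0.1, num_samples=4, rng=None):
    """Monte-Carlo estimate of the EoT loss: average attack loss over frames
    and sampled transforms, plus lam * TV(delta). `sample_transform`,
    `compose`, and `attack_loss` are caller-supplied stand-ins."""
    rng = np.random.default_rng(0) if rng is None else rng
    total = 0.0
    for x in frames:
        for _ in range(num_samples):
            t = sample_transform(rng)            # one random transform draw
            total += attack_loss(compose(x, delta, t))
    return total / (len(frames) * num_samples) + lam * tv_norm(delta)
```

In practice this objective would be minimized over delta with a stochastic gradient method (the paper uses standard first-order optimization), re-sampling the transforms at every step.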
Min-max optimization for fooling multiple object detectors.
The transferability of adversarial attacks largely drops in the physical environment; thus we consider a physical ensemble attack against multiple object detectors. It was recently shown that an ensemble attack can be designed from the perspective of min-max optimization, yielding a much higher worst-case attack success rate than the averaging strategy over multiple models. Given $K$ object detectors associated with attack loss functions $\{f_i\}_{i=1}^{K}$, the physical ensemble attack is cast as

$$\min_{\delta \in \mathcal{C}} \; \max_{w \in \mathcal{P}} \; \sum_{i=1}^{K} w_i f_i(\delta) - \frac{\gamma}{2} \left\| w - \frac{\mathbf{1}}{K} \right\|_2^2 + \lambda\, g(\delta), \tag{7}$$
where $w = [w_1, \ldots, w_K]^\top$ are known as domain weights that adjust the importance of each object detector during attack generation, $\mathcal{P}$ is the probabilistic simplex given by $\mathcal{P} = \{ w \mid \mathbf{1}^\top w = 1,\, w \geq 0 \}$, $\gamma > 0$ is a regularization parameter, and $g(\delta)$ follows (6). In (7), if $\gamma = 0$, then the adversarial perturbation is designed over the maximum attack loss (the worst-case attack scenario), since $\max_{w \in \mathcal{P}} \sum_{i} w_i f_i(\delta) = \max_{i} f_i(\delta)$ at a fixed $\delta$. Moreover, if $\gamma \to \infty$, then the inner maximization of problem (7) implies $w \to \mathbf{1}/K$, namely, an averaging scheme over the attack losses. Thus, the regularization parameter $\gamma$ in (7) strikes a balance between the max-strategy and the average-strategy.
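At a fixed perturbation (hence fixed losses), the inner maximization in (7) admits the closed form $w = \Pi_{\mathcal{P}}(\mathbf{1}/K + f/\gamma)$, where $\Pi_{\mathcal{P}}$ is the Euclidean projection onto the simplex; a sketch under that observation (function names are ours):

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto the probabilistic simplex
    {w : sum(w) = 1, w >= 0} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def domain_weights(losses, gamma):
    """Inner-max solution of (7) at fixed delta: w = Proj( 1/K + f/gamma ).
    Small gamma concentrates on the worst-case detector; large gamma
    recovers uniform averaging."""
    K = len(losses)
    return proj_simplex(np.ones(K) / K + np.asarray(losses, float) / gamma)
```

Alternating this closed-form weight update with gradient steps on delta gives one simple realization of the min-max scheme.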
4 Experimental Results
In this section, we demonstrate the effectiveness of our approach for the design of the adversarial T-shirt by comparing it with two baselines: the adversarial patch designed to fool YOLOv2, and the variant of our approach without the TPS transformation (namely, with the TPS transformer fixed to the identity mapping). We examine the convergence behavior of our proposed algorithm as well as its attack success rate (ASR) in both the digital and physical worlds.
4.1 Experimental Setup
We collect two datasets for learning and testing our proposed attack algorithm in both the digital and physical worlds. The training dataset contains videos, each lasting several seconds, of a moving person wearing a T-shirt printed with a checkerboard, captured under different scenes. The desired adversarial pattern is then learnt from the training dataset. The second dataset contains videos captured in the same setting as the training dataset (but of different persons). This dataset is used to evaluate the attack performance of the learnt adversarial pattern in the digital world. In the physical world, we create an adversarial T-shirt by printing our learnt adversarial pattern on the T-shirt. The test videos are then collected from a moving person wearing the physical adversarial T-shirt. All videos are taken using an iPhone 7 Plus and are resized to 416 × 416.
We use two state-of-the-art object detectors, Faster R-CNN and YOLOv2, to evaluate our method. Both object detectors are pre-trained on the COCO dataset, which contains 80 classes including ‘person’. The minimum detection thresholds are set to 0.6 and 0.7 for Faster R-CNN and YOLOv2 by default, respectively. For the misdetection loss in Eq. (6) and Eq. (7), we follow prior attack formulations against Faster R-CNN and YOLOv2, respectively.
Algorithmic parameter setting.
4.2 Adversarial T-shirt in digital world
Convergence performance of the proposed attack algorithm.
In Figure 3, we show the convergence of our proposed algorithm for solving problem (6), in terms of attack loss and attack success rate (ASR) versus epoch number. Here ASR is given by the ratio of successfully attacked testing frames to the total number of testing frames. We see that the proposed attack method converges well. We also note that attacking Faster R-CNN is more difficult than attacking YOLOv2.
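The per-frame ASR computation can be sketched as follows, counting a frame as successfully attacked when every detected 'person' confidence falls below the detector threshold (the score lists are hypothetical):

```python
def attack_success_rate(person_scores_per_frame, conf_thresh):
    """Fraction of frames in which every detected 'person' confidence falls
    below the detector threshold (i.e., the person is missed). A frame with
    no person detections at all also counts as a successful attack."""
    hidden = sum(
        1 for scores in person_scores_per_frame
        if all(s < conf_thresh for s in scores)
    )
    return hidden / len(person_scores_per_frame)
```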
ASR of adversarial T-shirt in various attack settings.
We perform a comprehensive evaluation of our method in digital simulation. In Table 1, we compare the ASR of the adversarial T-shirt with and without the TPS transformation under four attack settings: a) single-detector attack, where the adversarial T-shirt is designed and evaluated using the same object detector; b) transfer attack, where it is designed and evaluated using different object detectors; and c)–d) ensemble attack (average) and ensemble attack (min-max), which design the ensemble attack using the averaged attack loss and the min-max attack loss in (7), respectively. As we can see, it is crucial to incorporate the TPS transformation in the design of the adversarial T-shirt: ASR drops from 0.65 to 0.36 when attacking Faster R-CNN and from 0.79 to 0.52 when attacking YOLOv2 in the single-detector attack setting. We also note that the transferability of the single-detector attack is poor, and, consistent with Figure 3, Faster R-CNN is more robust than YOLOv2. Furthermore, we evaluate the effectiveness of the proposed min-max ensemble attack (7). When attacking Faster R-CNN, the min-max ensemble attack significantly outperforms its counterpart using the averaging strategy, leading to a clear improvement in ASR. This improvement comes at the cost of some degradation when attacking YOLOv2.
| Model | w/o TPS | with TPS |
| transfer single-detector attack | | |
| ensemble attack (average) | | |
| ensemble attack (min-max) | | |
4.3 Adversarial T-shirt in physical world
Next, we evaluate our method in the physical-world setting. We generate the adversarial pattern by solving problem (6) against YOLOv2, as in Section 4.2. We then print the adversarial pattern and paste it on a white T-shirt, obtaining the adversarial T-shirt. At the testing phase, we use an iPhone 5 to record videos tracking a moving person wearing the obtained adversarial T-shirt. We compare our method with the baseline method, where both methods are trained using the same dataset to design adversarial patches of the same size. We show in Table 2 that our method achieves 63% ASR, which is much higher than the 27% of the baseline method when attacking YOLOv2. For comparison, we also present ASRs when attacking Faster R-CNN, which was not considered in the baseline work.
| Model ASR | baseline | our method |
Figure 4 elaborates on our physical-world attack results in three settings: a) a single moving person (rows 1–2 of Figure 4); b) two moving persons wearing adversarial T-shirts generated using our method and the baseline method, respectively (row 3); and c) two moving persons wearing the proposed adversarial T-shirt and a normal T-shirt, respectively (row 4). As we can see, the baseline method fails in most cases, since it neglects T-shirt deformation and was designed to generate an adversarial pattern on a rigid object (cardboard). We also note that even when the moving person is far from the camera, the proposed physical attack remains powerful. By contrast, persons wearing the baseline patch or a normal T-shirt can still be detected by the object detector. Compared to the digital results, ASRs in the physical world drop.
In this paper, we propose the adversarial T-shirt, the first successful adversarial wearable for evading detection of moving persons. Since a T-shirt is a non-rigid object, its deformation induced by the pose changes of a moving person is taken into account when generating adversarial perturbations. We also propose a min-max ensemble attack algorithm to fool multiple object detectors simultaneously. We show that our attack method achieves 79% and 63% attack success rate (ASR) in the digital and physical worlds, respectively, whereas the baseline method achieves only 27% ASR. Based on our studies, we hope to provide some insight into how adversarial perturbations can be implemented via human clothing, accessories, face paint, and other wearables.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
-  K. Xu, S. Liu, P. Zhao, P.-Y. Chen, H. Zhang, Q. Fan, D. Erdogmus, Y. Wang, and X. Lin, “Structured adversarial attack: Towards general implementation and better interpretability,” in International Conference on Learning Representations, 2019.
-  P. Zhao, K. Xu, S. Liu, Y. Wang, and X. Lin, “Admm attack: an enhanced adversarial attack for deep neural networks with undetectable distortions,” in Proceedings of the 24th Asia and South Pacific Design Automation Conference, pp. 499–505, ACM, 2019.
-  N. Carlini and D. Wagner, “Audio adversarial examples: Targeted attacks on speech-to-text,” in 2018 IEEE Security and Privacy Workshops (SPW), pp. 1–7, IEEE, 2018.
-  K. Xu, H. Chen, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, and X. Lin, “Topology attack and defense for graph neural networks: An optimization perspective,” in International Joint Conference on Artificial Intelligence (IJCAI), 2019.
-  A. Athalye, N. Carlini, and D. Wagner, “Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples,” arXiv preprint arXiv:1802.00420, 2018.
-  K. Xu, S. Liu, G. Zhang, M. Sun, P. Zhao, Q. Fan, C. Gan, and X. Lin, “Interpreting adversarial examples by activation promotion and suppression,” arXiv preprint arXiv:1904.02057, 2019.
-  M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540, ACM, 2016.
-  K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625–1634, 2018.
-  A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” in Proceedings of the 35th International Conference on Machine Learning (J. Dy and A. Krause, eds.), vol. 80, pp. 284–293, 2018.
-  K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, F. Tramer, A. Prakash, T. Kohno, and D. Song, “Physical adversarial examples for object detectors,” in 12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018.
-  J. Lu, H. Sibai, and E. Fabry, “Adversarial examples that fool detectors,” arXiv preprint arXiv:1712.02494, 2017.
-  S.-T. Chen, C. Cornelius, J. Martin, and D. H. P. Chau, “Shapeshifter: Robust physical adversarial attack on faster r-cnn object detector,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 52–68, Springer, 2018.
-  S. Thys, W. Van Ranst, and T. Goedemé, “Fooling automated surveillance cameras: adversarial patches to attack person detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0, 2019.
-  Y. Cao, C. Xiao, D. Yang, J. Fang, R. Yang, M. Liu, and B. Li, “Adversarial objects against lidar-based autonomous driving systems,” arXiv preprint arXiv:1907.05418, 2019.
-  J. Li, F. Schmidt, and Z. Kolter, “Adversarial camera stickers: A physical camera-based attack on deep learning systems,” in International Conference on Machine Learning, pp. 3896–3904, 2019.
-  J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271, 2017.
-  S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, pp. 91–99, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587, 2014.
-  R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, pp. 1440–1448, 2015.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in European conference on computer vision, pp. 21–37, Springer, 2016.
-  H.-T. D. Liu, M. Tao, C.-L. Li, D. Nowrouzezahrai, and A. Jacobson, “Beyond pixel norm-balls: Parametric adversaries using an analytically differentiable renderer,” in International Conference on Learning Representations, 2019.
-  C. Sitawarin, A. N. Bhagoji, A. Mosenia, P. Mittal, and M. Chiang, “Rogue signs: Deceiving traffic sign recognition with malicious ads and logos,” arXiv preprint arXiv:1801.02780, 2018.
-  G. W. Ding, K. Y. C. Lui, X. Jin, L. Wang, and R. Huang, “On the sensitivity of adversarial robustness to input data distributions,” in International Conference on Learning Representations, 2019.
-  F. L. Bookstein, “Principal warps: Thin-plate splines and the decomposition of deformations,” IEEE Transactions on pattern analysis and machine intelligence, vol. 11, no. 6, pp. 567–585, 1989.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al., “Spatial transformer networks,” in Advances in neural information processing systems, pp. 2017–2025, 2015.
-  H. Chui, “Non-rigid point matching: algorithms, extensions and applications,” 2001.
-  G. Donato and S. Belongie, “Approximate thin plate spline mappings,” in European conference on computer vision, pp. 21–31, Springer, 2002.
-  Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on pattern analysis and machine intelligence, vol. 22, 2000.
-  A. Geiger, F. Moosmann, Ö. Car, and B. Schuster, “Automatic camera and range sensor calibration using a single shot,” in 2012 IEEE International Conference on Robotics and Automation, pp. 3936–3943, IEEE, 2012.
-  J. Wang, T. Zhang, S. Liu, P.-Y. Chen, J. Xu, M. Fardad, and B. Li, “Beyond adversarial training: Min-max optimization in adversarial attack and defense,” arXiv preprint arXiv:1906.03563, 2019.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European conference on computer vision, pp. 740–755, Springer, 2014.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations, 2015.