Evading Real-Time Person Detectors by Adversarial T-shirt

by Kaidi Xu et al.

It is known that deep neural networks (DNNs) can be vulnerable to adversarial attacks. So-called physical adversarial examples deceive DNN-based decision makers by attaching adversarial patches to real objects. However, most existing work on physical adversarial attacks focuses on static objects such as eyeglass frames, stop signs, and images attached to cardboard. In this work, we propose the adversarial T-shirt, a robust physical adversarial example for evading person detectors even when it deforms due to a moving person's pose changes. To the best of our knowledge, this is the first work to model the effect of deformation when designing physical adversarial examples for non-rigid objects such as T-shirts. We show that the proposed method achieves 79% and 63% attack success rates in the digital and physical worlds respectively against YOLOv2. In contrast, the state-of-the-art physical attack method for fooling a person detector only achieves a 27% success rate. Furthermore, by leveraging min-max optimization, we extend our method to the ensemble attack setting against the object detectors YOLOv2 and Faster R-CNN simultaneously.






1 Introduction

The vulnerability of deep neural networks (DNNs) to adversarial attacks (namely, perturbed inputs deceiving DNNs) has been found in applications spanning from image classification to speech recognition [1, 2, 3, 4, 5, 6, 7]. Early works studied adversarial examples in the digital space only. Recently, some works showed that it is possible to create adversarial perturbations on physical objects and fool DNN-based decision makers under a variety of real-world conditions [8, 9, 10, 11, 12, 13, 14, 15, 16]. However, most of the studied physical adversarial attacks encounter two limitations: a) the physical objects are usually considered to be static, and b) the possible deformation of an adversarial pattern attached to a moving object (e.g., due to the pose changes of a moving person) is commonly neglected. In this paper, we propose a new type of physical adversarial attack, the adversarial T-shirt, to evade a real-time person detector when worn by a moving person; see the second and third rows of Figure 1 for illustrative examples.

Most of the existing physical adversarial attacks target image classifiers and object detectors. In [8], a face recognition system is fooled by a real eyeglass frame designed with a crafted adversarial pattern. In [9], a stop sign is misclassified by an image classification system after black or white stickers are added to it. In [16], an image classifier is fooled by placing a crafted sticker at the lens of a camera. In [10], the so-called Expectation over Transformation (EoT) framework was proposed to synthesize adversarial examples robust to a set of physical transformations such as rotation, translation, contrast, brightness, and random noise. Compared to attacking image classifiers, generating physical adversarial attacks against object detectors is more challenging since the adversary must mislead both the bounding-box detector and the object classifier. A well-known success is the adversarial stop sign [11], which deceives state-of-the-art object detectors such as YOLOv2 [17] and Faster R-CNN [18].

The most relevant work to ours is [14], in which a person detector is fooled when the person holds a cardboard plate printed with an adversarial patch. However, such a physical attack restricts the adversarial patch to a rigid carrier (cardboard) and does not directly apply to the design of an adversarial T-shirt. We show that the attack proposed by [14] becomes ineffective when the adversarial patch is attached to a T-shirt (rather than a cardboard) and worn by a moving person (see the fourth row of Figure 1). On the technical side, different from [14], we propose a thin plate spline (TPS) based transformer to model the deformation effect of a non-rigid object, and we develop an ensemble physical attack that fools the object detectors YOLOv2 and Faster R-CNN simultaneously. We highlight that the proposed adversarial T-shirt is not just a T-shirt with a printed adversarial patch for clothing fashion; it is a physical adversarial wearable designed for evading person detectors in the real world.

Our work is also motivated by the importance of person detection to intelligent surveillance. DNN-based surveillance systems have significantly advanced the field of object detection [19, 20]. Efficient object detectors such as Faster R-CNN [18], SSD [21], and YOLOv2 [17] have been deployed for human detection. Thus, one may wonder whether there exists a security risk for intelligent surveillance systems caused by adversarial human wearables, e.g., the adversarial T-shirt. However, paralyzing a person detector in the physical world poses substantially greater challenges, such as low resolution, pose changes, and occlusion.


We summarize our contributions as follows.

  • We develop a TPS-based transformer to model the temporal deformation of an adversarial T-shirt caused by the pose changes of a moving person. We also show its importance for ensuring the effectiveness of the adversarial T-shirt in the physical world.

  • We propose a general optimization framework for the design of adversarial T-shirts in both single-detector and multiple-detector settings.

  • We conduct experiments in both the digital and physical worlds and show that the proposed adversarial T-shirt achieves 79% and 63% attack success rates, respectively, when attacking YOLOv2. By contrast, the physical adversarial patch [14] printed on a T-shirt only achieves a 27% attack success rate. Some of our results are highlighted in Figure 1.


Figure 1: Evaluating effectiveness of adversarial T-shirt to evade person detection by YOLOv2. Each row corresponds to a specific attack method, and each column denotes a video frame except the last column, which shows the generated adversarial pattern. There exist two persons at each frame, and only one person wears the adversarial T-shirt. First row: digital adversarial T-shirt generated with affine transformation (namely, in the absence of modeling deformation). Second row: digital adversarial T-shirt generated using TPS. Third row: physical adversarial T-shirt generated by our method. Fourth row: physical adversarial patch generated by [14] printed on a T-shirt.

2 Modeling Deformation of A Moving Object by Thin Plate Spline Mapping

In this section, we begin by reviewing some existing transformations required in the design of physical adversarial examples. We then elaborate on Thin Plate Spline (TPS) mapping used to model the possible deformation encountered by a moving and non-rigid object.

Let $x$ denote an original image (or a video frame), and let $t(\cdot)$ denote a physical transformer. The transformed image under $t$ is given by

$$x' = t(x). \qquad (1)$$


Existing transformations.

In [10], the parametric transformers include scaling, translation, rotation, brightness, and additive Gaussian noise; see details in [10, Appendix D]. In [22], geometry and lighting transformations are studied via parametric models. Other transformations, including perspective transformation, brightness adjustment, resampling (or image resizing), smoothing, and saturation, are considered in [23, 24]. All of the existing transformations are included in our library of physical transformations. However, they are not sufficient to model the cloth deformation caused by the pose changes of a moving person. For example, the first and fourth rows of Figure 1 show that adversarial T-shirts designed against only the existing physical transformations yield low attack success rates.

TPS transformation for cloth deformation.

A person's movement can result in significant and constantly changing wrinkles (i.e., deformations) in her clothes. This makes it challenging to generate an adversarial T-shirt that remains effective in the real world. To address this challenge, we employ TPS mapping [25] to model the cloth deformation caused by human body movement. TPS has been widely used as the non-rigid transformation model in image alignment and shape matching [26]. It consists of an affine component and a non-affine warping component. We will show that the non-linear warping part of TPS provides an effective means of modeling cloth deformation for learning adversarial patterns on non-rigid objects.

TPS learns a parametric deformation mapping from an original image $x$ to a target image $x'$ through a set of control points with given positions. Let $p = (p^{(x)}, p^{(y)})$ denote the 2D location of an image pixel. The deformation from $x$ to $x'$ is then characterized by the displacement of every pixel, namely, how a pixel at $p$ on image $x$ changes to the pixel on image $x'$ at $p' = \big(p^{(x)} + \Phi^{(x)}(p),\ p^{(y)} + \Phi^{(y)}(p)\big)$, where $\Phi^{(x)}(p)$ and $\Phi^{(y)}(p)$ denote the pixel displacement on image $x$ along the $x$ direction and the $y$ direction, respectively. Given a set of $n$ control points with locations $\{p_i\}_{i=1}^n$ on image $x$, TPS provides a parametric model of pixel displacement when mapping $x$ to $x'$ [27]

$$\Phi(p) = a_0 + a_1 p^{(x)} + a_2 p^{(y)} + \sum_{i=1}^{n} c_i\, \phi\big(\| p - p_i \|_2\big), \qquad (2)$$

where $\phi(r) = r^2 \log r$, $\{a_j\}_{j=0}^{2}$ and $\{c_i\}_{i=1}^{n}$ are the TPS parameters, and $\Phi$ represents the displacement along either the $x$ or the $y$ direction.

To determine the parameters in (2), TPS resorts to a regression model given the locations $\{p'_i\}_{i=1}^n$ of the control points on the transformed image $x'$. This yields a regression problem which minimizes the distance between $\Phi^{(x)}(p_i)$ and $p'^{(x)}_i - p^{(x)}_i$ and the distance between $\Phi^{(y)}(p_i)$ and $p'^{(y)}_i - p^{(y)}_i$, respectively. Here TPS (2) is applied to the $x$ coordinate and the $y$ coordinate separately (corresponding to two sets of parameters $\{a_j, c_i\}$). The regression problem can be solved by the following linear system of equations [28]

$$\begin{bmatrix} K & P \\ P^\top & 0 \end{bmatrix} \begin{bmatrix} c \\ a \end{bmatrix} = \begin{bmatrix} v \\ 0 \end{bmatrix}, \qquad (3)$$

where the $(i,j)$-th element of $K$ is given by $\phi(\|p_i - p_j\|_2)$, the $i$-th row of $P$ is given by $[1,\ p^{(x)}_i,\ p^{(y)}_i]$, and the $i$-th elements of $v^{(x)}$ and $v^{(y)}$ are given by $p'^{(x)}_i - p^{(x)}_i$ and $p'^{(y)}_i - p^{(y)}_i$, respectively.

The difficulty of implementing TPS for the design of the adversarial T-shirt lies in determining the set of control points and obtaining their positions $\{p_i\}$ and $\{p'_i\}$ in both the original and target images. Inspired by [29] for camera calibration, we print a checkerboard on a T-shirt and use it to collect control points and their positions between two video frames. In practice, we select one frame as the anchor frame and then generate TPS mappings from the other frames. Figure 2 shows the T-shirt with the checkerboard pattern, where each intersection between two checkerboard grid regions is selected as a control point. We remark that the considered control points can be accurately detected using the Matlab vision toolbox [30], and that the videos used to generate TPS transformations are independent of the testing data used to evaluate the proposed adversarial T-shirt.

(a) (b) (c) (d)
Figure 2: (a): examples of our T-shirt with printed checkerboard to construct control points for TPS transformation. (b) and (c): two frames with checkerboard detection results. (d): result of applying TPS transformation from (b) to (c).
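The TPS fitting step described by the linear system above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under our own function names (`fit_tps`, `tps_displacement`) and toy control points, not the authors' code.

```python
import numpy as np

def tps_kernel(dists):
    # Radial basis phi(r) = r^2 log r, with phi(0) = 0 by convention.
    return np.where(dists > 0, dists**2 * np.log(dists + 1e-12), 0.0)

def fit_tps(src, dst):
    """Fit TPS parameters mapping control points src -> dst.

    src, dst: (n, 2) arrays of control-point locations.
    Returns (c, a): warping coefficients (n, 2) and affine part (3, 2),
    one column per displacement direction (x and y).
    """
    n = src.shape[0]
    K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    v = np.zeros((n + 3, 2))
    v[:n] = dst - src  # control-point displacements along x and y
    params = np.linalg.solve(A, v)
    return params[:n], params[n:]

def tps_displacement(points, src, c, a):
    """Displacement predicted by the fitted TPS at query points (m, 2)."""
    U = tps_kernel(np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1))
    return a[0] + points @ a[1:] + U @ c
```

By construction, the fitted mapping interpolates the control points exactly, so the displacement evaluated at the source control points equals their observed offsets; in between, the $r^2 \log r$ basis produces the smooth warp used to deform the adversarial pattern.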

3 Generation of Adversarial T-shirt: An Optimization Perspective

In this section, we begin by formalizing the problem of the adversarial T-shirt and introducing the notation used in our setup. We then design a universal perturbation for the adversarial T-shirt to deceive a single object detector. Lastly, we propose a min-max (robust) optimization framework to design the universal adversarial patch against multiple object detectors.

Let $\{x_i\}_{i=1}^M$ denote $M$ video frames extracted from one or multiple given videos, where $x_i$ denotes the $i$-th frame. Let $\delta$ denote the universal adversarial perturbation applied to every frame. The adversarial T-shirt is then characterized by $\{M_{c,i} \circ \delta\}_{i=1}^M$, where $M_{c,i}$ is a bounding box encoding the position of the cloth region to be perturbed at the $i$-th frame, and $\circ$ denotes the element-wise product. The goal of the adversarial T-shirt is to design $\delta$ such that the perturbed frames are mis-detected by object detectors.

Fooling a single object detector.

We generalize the Expectation over Transformation (EoT) method in [31] for the design of the adversarial T-shirt. Note that, different from the conventional EoT, a composition of transformers is required for generating the adversarial T-shirt. For example, a perspective transformation on the bounding box of the T-shirt is composed with a TPS transformation on the perturbed cloth region.

Let us begin by considering two video frames: an anchor image $x_0$ (e.g., the first frame of the video) and a target image $x_i$ for $i \in [M]$, where $[M]$ denotes the integer set $\{1, 2, \ldots, M\}$. Given the bounding boxes of the person ($M_{p,0}$) and the T-shirt ($M_{c,0}$) at $x_0$, we apply the perspective transformation from $x_0$ to $x_i$ to obtain the bounding boxes $M_{p,i}$ and $M_{c,i}$ at image $x_i$. In the absence of physical transformations, the perturbed image $x'_i$ with respect to (w.r.t.) $x_i$ is given by


$$x'_i = (1 - M_{p,i}) \circ x_i + M_{p,i} \circ x_i - M_{c,i} \circ x_i + M_{c,i} \circ \delta, \qquad (4)$$

where the term $(1 - M_{p,i}) \circ x_i$ denotes the background region outside the bounding box of the person, the term $M_{p,i} \circ x_i$ is the person-bounded region, the term $-M_{c,i} \circ x_i$ erases the pixel values within the bounding box of the T-shirt, and the term $M_{c,i} \circ \delta$ is the newly introduced additive perturbation. Without taking physical transformations into account, Eq. (4) simply reduces to the conventional formulation of an adversarial example, $x'_i = x_i + M_{c,i} \circ (\delta - x_i)$.
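As a sanity check on this mask-based composition, here is a short NumPy sketch; the function and variable names are ours, chosen for illustration.

```python
import numpy as np

def compose_adv_frame(x, person_mask, cloth_mask, delta):
    """Compose a perturbed frame: keep the background, keep the person
    region, erase the T-shirt region, then paste the perturbation there.
    All inputs are same-shape arrays; masks are 0/1."""
    return ((1 - person_mask) * x        # background outside the person box
            + person_mask * x            # person-bounded region
            - cloth_mask * x             # erase pixels inside the T-shirt box
            + cloth_mask * delta)        # additive perturbation on the T-shirt
```

Since the first two terms sum back to `x`, the composition is algebraically equivalent to `x + cloth_mask * (delta - x)`, i.e., the conventional patch formulation; the four-term form is kept because each group of terms later receives a different physical transformation.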

We next consider two categories of physical transformations: a) a TPS transformation $t_{\mathrm{TPS}} \in \mathcal{T}_{\mathrm{TPS}}$ applied to the adversarial perturbation to model the effect of cloth deformation, and b) a conventional physical transformation $t \in \mathcal{T}$ applied to the region within the person's bounding box, namely, $M_{p,i} \circ x_i$. Here $\mathcal{T}_{\mathrm{TPS}}$ denotes the set of possible non-rigid transformations, and $\mathcal{T}$ denotes the set of commonly used physical transformations, e.g., scaling, translation, rotation, brightness, blurring, and contrast. A modification of (4) under these different sources of transformations is then given by

$$x'_i = t_{\mathrm{env}}\Big( (1 - M_{p,i}) \circ x_i + t\big( M_{p,i} \circ x_i - M_{c,i} \circ x_i + M_{c,i} \circ t_{\mathrm{TPS}}(\delta + v) \big) \Big), \qquad (5)$$

where $v$ is an additive random noise that allows the variation of pixel values, e.g., due to the mismatch between the digital color and the printed color, $t_{\mathrm{env}}$ is a transformer modeling the environmental condition, which we set as a brightness transformation in practice, and $t$ denotes a transformer applied to the image region characterized by a person's bounding box.

With the aid of (5), the EoT formulation to fool a single object detector is cast as

$$\min_{\delta \in \mathcal{C}}\ \frac{1}{M} \sum_{i=1}^{M} \mathbb{E}_{t_{\mathrm{TPS}},\, t,\, v}\, \big[\, f(x'_i)\, \big] + \lambda\, g(\delta), \qquad (6)$$

where $f$ denotes an attack loss for misdetection, $g$ is the total-variation norm that enhances the perturbation's smoothness [11], $\lambda > 0$ is a regularization parameter, and $\mathcal{C}$ signifies additional constraints that $\delta$ should obey, e.g., a discrete set of printable color options for generating physical adversarial examples [8].
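The structure of this EoT objective can be illustrated with a deliberately simplified NumPy sketch: the "detector" below is just a linear score, only brightness and printing noise are sampled, and all names are ours. It is a toy of the optimization loop, not the authors' pipeline.

```python
import numpy as np

def tv_subgradient(d):
    """Subgradient of the anisotropic total-variation norm of d."""
    g = np.zeros_like(d)
    sx = np.sign(d[1:, :] - d[:-1, :])
    sy = np.sign(d[:, 1:] - d[:, :-1])
    g[1:, :] += sx; g[:-1, :] -= sx
    g[:, 1:] += sy; g[:, :-1] -= sy
    return g

def eot_attack(x, cloth_mask, w, steps=100, lr=0.05, lam=0.01, seed=0):
    """Minimize a Monte Carlo estimate of E_t[f(t(x'))] + lam * TV(delta),
    where f(z) = sum(w * z) is a stand-in detection score and t applies a
    random brightness scale plus additive printing noise."""
    rng = np.random.default_rng(seed)
    delta = rng.uniform(0.4, 0.6, size=x.shape)
    for _ in range(steps):
        b = rng.uniform(0.7, 1.3)                 # sampled brightness transform
        v = rng.normal(0.0, 0.03, size=x.shape)   # sampled printing noise
        # x' = (1 - mask) * x + mask * (delta + v); for the linear f the
        # noise v drops out of the gradient: d f(b * x') / d delta = b*w*mask
        grad = b * w * cloth_mask + lam * tv_subgradient(delta)
        delta = np.clip(delta - lr * grad, 0.0, 1.0)  # printable pixel range
    return delta
```

With a real detector, `grad` would come from backpropagating the misdetection loss through the sampled transformations instead of the closed form above; the sampling-then-descend structure is the same.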

Min-max optimization for fooling multiple object detectors.

The transferability of adversarial attacks largely drops in the physical environment; thus, we consider a physical ensemble attack against multiple object detectors. It was recently shown in [32] that the ensemble attack can be designed from the perspective of min-max optimization and yields a much higher worst-case attack success rate than the averaging strategy over multiple models. Given $K$ object detectors associated with attack loss functions $\{f_i\}_{i=1}^K$, the physical ensemble attack is cast as

$$\min_{\delta \in \mathcal{C}}\ \max_{w \in \mathcal{P}}\ \sum_{i=1}^{K} w_i f_i(\delta) - \frac{\gamma}{2} \left\| w - \frac{1}{K} \mathbf{1} \right\|_2^2, \qquad (7)$$

where $w = [w_1, \ldots, w_K]$ are known as domain weights that adjust the importance of each object detector during the attack generation, $\mathcal{P}$ is the probabilistic simplex given by $\mathcal{P} = \{ w \mid \mathbf{1}^\top w = 1,\ w_i \geq 0\ \forall i \}$, $\gamma > 0$ is a regularization parameter, and each $f_i$ follows (6). In (7), if $\gamma = 0$, then the adversarial perturbation is designed over the maximum attack loss (the worst-case attack scenario) since $\max_{w \in \mathcal{P}} \sum_{i=1}^K w_i f_i(\delta) = \max_i f_i(\delta)$ at a fixed $\delta$. Moreover, if $\gamma \to \infty$, then the inner maximization of problem (7) implies $w \to \frac{1}{K}\mathbf{1}$, namely, an averaging scheme over the attack losses. Thus, the regularization parameter $\gamma$ in (7) strikes a balance between the max-strategy and the average-strategy.
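The role of $\gamma$ in the inner maximization can be checked numerically with a small projected-gradient sketch over fixed losses $f_i$; the function names and step-size schedule are our own illustrative choices.

```python
import numpy as np

def project_simplex(w):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(w) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(w + theta, 0.0)

def domain_weights(losses, gamma, steps=500):
    """Inner maximization of the min-max ensemble objective at fixed delta:
    argmax_{w in simplex} sum_i w_i f_i - (gamma/2) * ||w - 1/K||^2."""
    f = np.asarray(losses, dtype=float)
    K = len(f)
    w = np.full(K, 1.0 / K)
    lr = 1.0 / (10.0 + gamma)  # step size shrinks with the regularization
    for _ in range(steps):
        w = project_simplex(w + lr * (f - gamma * (w - 1.0 / K)))
    return w
```

With a tiny $\gamma$ the weights concentrate on the detector with the largest loss (worst-case attack); with a large $\gamma$ they stay near uniform (average attack), matching the two limiting cases discussed above.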

4 Experimental Results

In this section, we demonstrate the effectiveness of our approach for the design of the adversarial T-shirt by comparing it with baseline attack methods: the adversarial patch for fooling YOLOv2 [14] and the variant of our approach without the TPS transformation, namely, with $t_{\mathrm{TPS}}$ removed from (5). We examine the convergence behavior of our proposed algorithm as well as its attack success rate (ASR) in both the digital and physical worlds.

4.1 Experimental Setup

Data collection.

We collect two datasets for learning and testing our proposed attack algorithm in both the digital and physical worlds. The training dataset contains videos, each lasting several seconds, captured from a moving person wearing a T-shirt with a printed checkerboard under different scenes. The desired adversarial pattern is then learnt from the training dataset. The second dataset contains videos captured in the same setting as the training dataset (but with different persons); this dataset is used to evaluate the attack performance of the learnt adversarial pattern in the digital world. In the physical world, we create an adversarial T-shirt by printing our learnt adversarial pattern on a T-shirt. The test videos are then collected from a moving person wearing the physical adversarial T-shirt. All videos are taken with an iPhone 7 Plus and are resized to 416 × 416.

Object detectors.

We use two state-of-the-art object detectors, Faster R-CNN [18] and YOLOv2 [17], to evaluate our method. Both object detectors are pre-trained on the COCO dataset [33], which contains 80 classes including 'person'. The minimum detection thresholds are set to 0.6 and 0.7 for Faster R-CNN and YOLOv2 by default, respectively. For the misdetection loss in Eq. (6) and Eq. (7), we follow [13] and [14] for our attacks against Faster R-CNN and YOLOv2, respectively.

Algorithmic parameter setting.

When solving Eq. (6), we use the Adam optimizer [34] and train for 3,500 epochs; the learning rate is decayed at the 750th epoch. The regularization parameter $\lambda$ for the total-variation norm is kept fixed throughout. In Eq. (7), we set $\gamma$ to 1 and solve the min-max problem over 5,000 epochs.

4.2 Adversarial T-shirt in digital world

Convergence performance of the proposed attack algorithm.

In Figure 3, we show the convergence of our proposed algorithm for solving problem (6) in terms of the attack loss and the attack success rate (ASR) against the epoch number. Here, ASR is given by the ratio of successfully attacked testing frames to the total number of testing frames. We see that the proposed attack method converges well. We also note that attacking Faster R-CNN is more difficult than attacking YOLOv2.

Figure 3: Left: Attack loss vs. epoch numbers when generating perturbations by solving problem (6) for Faster R-CNN and YOLOv2. Right: ASR vs. epoch numbers.

ASR of adversarial T-shirt in various attack settings.

We perform a comprehensive evaluation of our method in digital simulation. In Table 1, we compare the ASR of the adversarial T-shirt with and without the TPS transformation under four attack settings: a) single-detector attack, where the adversarial T-shirt is designed and evaluated using the same object detector; b) transfer attack, where the adversarial T-shirt is designed and evaluated using different object detectors; and c) ensemble attack (average) and ensemble attack (min-max), which design the ensemble attack using the averaged attack loss and the min-max attack loss in (7), respectively. As we can see, it is crucial to incorporate the TPS transformation in the design of the adversarial T-shirt: ASR drops from 65% to 36% when attacking Faster R-CNN and from 79% to 52% when attacking YOLOv2 in the single-detector attack setting. We also note that the transferability of the single-detector attack is poor and, consistent with Figure 3, Faster R-CNN is more robust than YOLOv2. Furthermore, we evaluate the effectiveness of the proposed min-max ensemble attack (7). When attacking Faster R-CNN, the min-max ensemble attack significantly outperforms its counterpart using the averaging strategy, improving ASR from 41% to 60% (with TPS). This improvement comes at the cost of some degradation when attacking YOLOv2.

Model | w/o TPS | with TPS
single-detector attack
  Faster R-CNN | 36% | 65%
  YOLOv2 | 52% | 79%
transfer single-detector attack
  Faster R-CNN | 8% | 9%
  YOLOv2 | 12% | 11%
ensemble attack (average)
  Faster R-CNN | 21% | 41%
  YOLOv2 | 36% | 69%
ensemble attack (min-max)
  Faster R-CNN | 47% | 60%
  YOLOv2 | 33% | 65%
Table 1: The ASR (%) of the adversarial T-shirt with or without (w/o) the TPS transformation under four attack settings.

4.3 Adversarial T-shirt in physical world

Next, we evaluate our method in the physical-world setting. We generate the adversarial pattern by solving problem (6) against YOLOv2, the same as in Section 4.2. We then print the adversarial pattern and paste it on a white T-shirt, obtaining the adversarial T-shirt. At the testing phase, we use an iPhone 5 to record videos tracking a moving person wearing the adversarial T-shirt. We compare our method with the baseline method in [14], where both methods are trained using the same dataset and design adversarial patches of the same size. As shown in Table 2, our method achieves a 63% ASR, much higher than the 27% of the baseline method when attacking YOLOv2. For comparison, we also present ASRs when attacking Faster R-CNN, which was not considered in [14].

Model | baseline | our method
Faster R-CNN | 11% | 52%
YOLOv2 | 27% | 63%
Table 2: ASRs of our method and the baseline method in [14] when attacking YOLOv2 and Faster R-CNN in the physical world.

Figure 4 elaborates on our physical-world attack results in three settings: a) a single moving person (rows 1 and 2 of Figure 4), b) two moving persons wearing the adversarial T-shirts generated by our method and by the baseline method in [14], respectively (row 3), and c) two moving persons wearing the proposed adversarial T-shirt and a normal T-shirt, respectively (row 4). As we can see, the baseline method fails in most cases since it neglects T-shirt deformation and was designed for generating an adversarial pattern on a rigid object (cardboard). We also note that even when the moving person is far from the camera, the proposed physical attack remains powerful, whereas persons wearing the baseline T-shirt or a normal T-shirt can still be detected. Compared to the digital results, ASRs in the physical world drop by roughly 13-16 percentage points (Tables 1 and 2).


Figure 4: Some testing frames in the physical world using the adversarial T-shirt against YOLOv2. First row: detecting a single person wearing the proposed adversarial T-shirt. Second row: detecting a single person wearing the adversarial T-shirt generated by the baseline method [14]. Third row: detecting two moving persons wearing the adversarial T-shirts generated by our method and by the baseline method, respectively. Fourth row: detecting two moving persons wearing the proposed adversarial T-shirt and normal clothing, respectively.

5 Conclusion

In this paper, we propose the adversarial T-shirt, the first successful adversarial wearable for evading detection of moving persons. Since a T-shirt is a non-rigid object, the deformation induced by the pose changes of a moving person is taken into account when generating adversarial perturbations. We also propose a min-max ensemble attack algorithm to fool multiple object detectors simultaneously. We show that in the digital and physical worlds, our attack method achieves 79% and 63% attack success rates (ASR), respectively, whereas the baseline method only achieves a 27% ASR. Based on our studies, we hope to offer some insight into how adversarial perturbations could be implemented with human clothing, accessories, face paint, and other wearables.