Can 3D Adversarial Logos Cloak Humans?

With the trend of adversarial attacks, researchers attempt to fool trained object detectors in 2D scenes. Among them, an intriguing new form of attack with potential real-world applications appends adversarial patches (e.g., logos) to images. Nevertheless, much less is known about adversarial attacks from 3D rendering views, which is essential for an attack to remain persistently strong in the physical world. This paper presents a new 3D adversarial logo attack: we construct an arbitrarily shaped logo from a 2D texture image and map it into a 3D adversarial logo via a texture mapping we call the logo transformation. The resulting 3D adversarial logo is then treated as an adversarial texture whose shape and position are easy to manipulate, which greatly extends the versatility of adversarial training for computer-graphics-synthesized imagery. Contrary to the traditional adversarial patch, this new form of attack is mapped into the 3D object world and back-propagates to the 2D image domain through differentiable rendering. Moreover, and unlike existing adversarial patches, our new 3D adversarial logo is shown to fool state-of-the-art deep object detectors robustly under model rotations, taking one step further toward realistic attacks in the physical world. Our codes are available at




1 Introduction

Figure 1: Examples of our 3D adversarial logo attack on different 3D object meshes to fool a YOLOv2 detector. The 3D adversarial patch (here a logo “G”) is treated as part of the texture map over the 3D human mesh models. When the 3D mesh scene with implanted 3D adversarial logos is rendered from multiple angle views (from −10 to 10 degrees) and with different human postures, the attack stays robust and causes mis-recognition, i.e., it makes recognized humans “disappear”. The first, second and third columns show rendering results for −10, 0 and 10 degree angle views, respectively. In each case, the human with our adversarial logo is not recognized by the YOLOv2 detector.

Deep neural networks are notoriously vulnerable: human-imperceptible perturbations or doctoring of images can drastically change the recognition and predictions of trained algorithms. To probe this mis-recognition or mis-detection vulnerability, researchers propose 2D adversarial attacks that manipulate pixels on the image while maintaining overall visual fidelity. A perturbation negligible to human eyes can cause drastically false conclusions, made with high confidence, by trained deep neural networks. Numerous adversarial attacks have been designed and tested on deep learning tasks such as image classification and object detection. Among these extensive efforts, the focus has recently shifted to structurally editing only certain local areas of an image, known as

patch adversarial attacks [2]. Thys et al. [25] propose a pipeline to generate a 2D adversarial patch and attach it to the image pixels of humans appearing in 2D images. In principle, a person wearing this 2D adversarial patch will fool, or become “invisible” to, deep learned human image detectors. However, such 2D image adversarial patches are often not robust to image transformations, especially under multi-view 2D image synthesis in reconstructed 3D computer graphics settings. When examining 2D image renderings from 3D scene models with various possible human postures and viewing angles, the 2D attack can easily lose its strength under such 3D viewing transformations. Moreover, while square or rectangular adversarial patches are typically considered, richer shape variations and their implications for attack performance have rarely been discussed before.

Can we naturally stitch a patch onto human clothes to make the adversarial attack more versatile and realistic? The shortcomings of purely 2D scenarios lead us to consider the 3D adversarial attack, where we view a person as a 3D object instead of its 2D projection. As an example, the domain of mesh adversarial attack [30] refers to deformations at the mesh’s shape and texture level to fulfill the attack goal. However, these 3D adversarial attacks have not yet adopted the concept of a patch adversarial attack; they view the entire texture and geometric information of 3D meshes as attackable. Moreover, a notable line of research shows that infinitesimal rotations and shifts of 2D images may cause huge perturbations in predictions [33, 1, 5], no matter how negligible they are to human eyes. What if the perturbation does not come from 2D scenarios and conditions (e.g., 2D rotation and translation), but rather results from physical-world changes, like 3D view rotations and body posture changes? Furthermore, effective attacks on certain meshes do not imply generalized effectiveness on other meshes; e.g., the attack can fail when changing to a different clothes mesh. These downsides motivate us to develop more generalizable 3D adversarial patches.

The primary aim of this work is to generate a structured patch in an arbitrary shape (which we call a “logo”), termed a 3D adversarial logo, that, when appended to a 3D human mesh and then rendered into 2D images, can consistently fool the object detector under different human postures. A 3D adversarial logo is defined over a subregion of a 3D mesh whose texture and position can be altered. The 3D human meshes, along with their 3D adversarial logos, are then rendered on top of real-life background images. The specific contributions of our work are highlighted as follows:

  • We propose a logo transformation pipeline that maps an arbitrary 2D shape (“logo”) onto a mesh to form the proposed 3D adversarial logo. The 3D adversarial logo is updated when the loss is back-propagated from the 2D adversarial logo, and the update eventually reaches the logo texture image. The pipeline can easily be extended to multiple-mesh joint training.

  • We propose a general 3D-to-2D adversarial attack protocol via differentiable physical rendering. We render 3D meshes, with the 3D adversarial logo attached, into 2D scenarios and synthesize images that can fool the detector. The shape of our 3D adversarial logo comes from the selected logo texture in the 2D domain; hence, we can perform versatile adversarial training with shape and position controlled.

  • We show that our model adapts to multi-angle scenarios with much richer variations than can be depicted by 2D perturbations, taking one important step toward studying the physical-world fragility of deep networks.

2 Related Work

2.1 Differentiable Mesh

Various tasks, including depth estimation and 3D reconstruction from 2D images, have been explored with deep neural networks and have witnessed success. Less considered is the reverse problem: how can we render a 3D model back to 2D images to fulfill desired tasks?

Discrete operations in the two most popular rendering methods (ray tracing and rasterization) hamper differentiability. To fill this gap, numerous approaches have been proposed to edit mesh textures via gradient descent, which provides the grounds to combine traditional graphics renderers with neural networks. Nguyen-Phuoc et al. [17] propose a CNN architecture leveraging a projection unit to render a voxel-based 3D object into 2D images. Unlike the voxel-based method, Kato et al. [12] adopt linear-gradient interpolation to overcome vanishing gradients in rasterization-based rendering. Raj et al. [18] generate textures for 3D meshes from photo-realistic pictures: they apply RenderForCNN [23] to sample viewpoints that match those of the input images, adapt CycleGAN [35] to generate textures for the 2.5D information rendered at the generated viewpoints, and eventually merge these textures into a single texture to render the object into the 2D world.
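The linear-interpolation idea behind approximate rasterization gradients can be illustrated with a toy one-dimensional example (our own sketch, not Kato et al.'s actual formulation): hard rasterization is a step function of an edge position with zero gradient almost everywhere, while a linear ramp restores a usable gradient near the edge.

```python
import numpy as np

def hard_coverage(edge_x, px_centers):
    """Hard rasterized coverage: a pixel is 1 if its center lies left of
    the edge. This step function has zero gradient almost everywhere."""
    return (px_centers < edge_x).astype(float)

def soft_coverage(edge_x, px_centers, width=1.0):
    """Linear-interpolation surrogate (in the spirit of Kato et al. [12]):
    coverage ramps from 0 to 1 over `width` pixels, so the derivative
    w.r.t. edge_x is nonzero near the edge and gradients can flow back
    to geometry and texture."""
    return np.clip((edge_x - px_centers) / width + 0.5, 0.0, 1.0)

px = np.arange(5, dtype=float)  # pixel centers 0..4, edge at x = 2.0
# numeric gradient of the soft coverage w.r.t. the edge position
g = (soft_coverage(2.0 + 1e-4, px) - soft_coverage(2.0, px)) / 1e-4
```

Away from the edge the surrogate matches the hard rasterizer exactly; only the boundary pixels receive a gradient, which is what makes texture optimization through the renderer possible.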

2.2 Adversarial Patch in 2D Images

Adversarial attacks [24, 8, 10, 4, 9] were proposed to analyze the robustness of CNNs, and have recently been increasingly studied in object detection tasks, in the form of adversarial patches. For example, [3] presents a stop-sign attack on Faster R-CNN [7], and [25] fools the YOLOv2 [19] object detector through pixel-wise patch optimization. The target patch, with simple 2D transformations (such as rotation and scaling), is applied to a near-human region in 2D real photos and then trained to fool the object detector. To demonstrate realistic adversarial attacks, they physically let a person hold the 3D-printed patch and verify that the person “disappears” from the object detector. Nevertheless, such attacks are easily broken by real-world 3D variations, as pointed out by [16]. Wiyatno et al. [28] propose to generate a physical adversarial texture as a patch in the background. Their method allows the patch to be “rotated” in 3D space and then added back to 2D space. Xu et al. [31] discuss how to incorporate the physical deformation of T-shirts into patch adversarial attacks, a step forward, yet only under a fixed camera view. A recent work by Huang et al. [11] attacks a region proposal network (RPN) by synthesizing semantic patches that are naturally anchored onto human clothes in the digital space. They test the garment in the physical world with motion and validate their results in the digital space.

2.3 Mesh Adversarial Attack

A 2D object can be considered a projection of a 3D model. Therefore, attacking in 3D space and then mapping to 2D space can be seen as a way of augmenting the perturbation space. In the past two years, different adversarial attack schemes over 3D meshes have been proposed. For instance, Tsai et al. [27] perturb the positions of a point cloud to generate an adversarial mesh that fools 3D shape classifiers. Ti et al. [14] alter the lighting and geometry information of a physical model to generate adversarial attacks, modeling each pixel in a natural image as the interaction of lighting conditions and the physical scene, such that the pixel can maintain its natural appearance. More recently, Xiao et al. [30] and Zeng et al. [32] generate adversarial samples by altering the physical parameters (e.g., illumination) of rendering results of target objects. They generate meshes with negligible texture perturbations and show that, under certain rendering assumptions (e.g., a fixed camera view), the adversarial mesh can still deceive state-of-the-art classifiers and detectors. Overall, most existing works perturb the global texture of the image, while the idea of generating an adversarial sub-region/patch remains unexplored in the 3D mesh domain.

3 The Proposed Framework

Figure 2: The overall framework of our work. We start by choosing a logo texture image, whose brightness, contrast, and noise can be varied as augmentation. We then construct a texture map that maps only into specific regions over each person’s mesh to form the 3D adversarial logo. Next, we apply a differentiable renderer to render the mesh together with its adversarial logo, synthesizing the 2D person and the 2D adversarial logo. The 2D adversarial logo is then composited with the person image and background images to generate training/testing images. Eventually, the training/testing images are fed into a target object detector for adversarial training/testing.

In this section, we seek a concrete solution to the 3D adversarial logo attack, with the following goals in mind:

  • The adversarial training is differentiable: we can modify the source logo via end-to-end loss back-propagation. The major challenge is to replace a traditional discrete renderer with a differentiable one and to update the corresponding texture maps over each mesh.

  • The 3D adversarial logo is universal: for every distinct human mesh, we stitch on a 3D adversarial logo generated from the identical 2D logo texture. During both training and testing, we only modify the rendered texture of that logo so it changes concurrently with the human mesh textures.

Our 3D adversarial logo attack pipeline is outlined in Figure 2. In the training process, we first define a target logo texture with a given shape in the 2D domain (e.g., the character “H”), and perturb the logo texture with random noise, contrast, and brightness. We then map the logo texture to 3D surfaces to form 3D adversarial logos on different meshes. Each human mesh and its 3D adversarial logo are then rendered together into a 2D person image associated with its 2D adversarial logo (we refer to the 2D adversarial logo as the 2D image counterpart into which the 3D adversarial logo is rendered). These images of a person with a logo are further composited with background images, after which we stream the synthesized images into the object detector for adversarial training.

Thanks to end-to-end differentiability, the training process updates the 3D adversarial logo via back-propagation, and the update is further back-propagated to the logo texture. Within one epoch, the above process is conducted on all training meshes until all background images have been trained with, thereby ensuring the logo’s universal applicability to different meshes.
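One stage of this pipeline, compositing the rendered person (with its 2D adversarial logo) onto a real background, can be sketched as follows. This is a toy stand-in with our own names, assuming the renderer also outputs a silhouette/alpha channel; the actual pipeline uses the Neural Renderer's output.

```python
import numpy as np

def composite(person_rgba, background_rgb):
    """Paste the rendered person (RGB + alpha/silhouette channel) onto a
    real background image: the background shows through wherever the
    renderer produced no person pixel (alpha = 0)."""
    alpha = person_rgba[..., 3:4]
    return alpha * person_rgba[..., :3] + (1.0 - alpha) * background_rgb

bg = np.zeros((4, 4, 3))                  # toy black background
person = np.zeros((4, 4, 4))              # toy rendered person, RGBA
person[1:3, 1:3] = [1.0, 0.0, 0.0, 1.0]   # opaque red "person" pixels
frame = composite(person, bg)             # synthesized training image
```

Because compositing is a per-pixel linear blend, gradients from the detector loss pass straight through it to the rendered logo pixels.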

3.1 Logo Transformation

Different from existing 3D attacks, our network aims at a universal patch attack across different input human meshes. This is achieved by editing the 3D adversarial logo over multiple meshes concurrently. Due to the discrete polygonal mesh setting, as well as the potentially high degrees of distortion and deformation across different 3D meshes, training one universal adversarial logo is highly challenging. We illustrate our logo transformation strategy below, which offers one explicit construction of the texture coordinate map for each 3D logo to generate our 3D adversarial logo. Detailed implementations are included in the supplementary materials.

Given the logo texture defined as an RGB image, in order to convert it into a 3D logo that can be edited on a single human mesh, our proposed logo transformation comprises two basic operations:

  • 2D Mapping (): Project a 3D logo surface onto the 2D domain to generate texture coordinate mapping.

  • 3D Mapping (): Extract color information from the logo texture and map it onto each face of the 3D logo to composite a 3D adversarial logo.

The overall logo transformation can be denoted as:


With the logo transformation, the chosen 2D logo shape is mapped to a 3D logo on each distinct human mesh to form the 3D adversarial logo. By leveraging a differentiable renderer (discussed in Section 3.2), when the 3D adversarial logos are rendered into adversarial logo images, updates are back-propagated from those images to the 3D adversarial logos, and thereby all update information is aggregated into the logo texture. In our work, the logo transformation is constructed only once, and we fix the texture coordinate map for each human mesh. When forwarding to the detector, the colors of our 3D adversarial logos are obtained via their distinctive texture coordinate maps, and hence we can update all 3D adversarial logos from the logo texture in synchronization.
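The aggregation into the shared logo texture can be made concrete with a small numerical sketch (our own construction, not the authors' code; the real pipeline back-propagates through the renderer and detector). Each mesh's fixed texture coordinate map is modelled as a 0/1 selection matrix M_k, so the mesh's 3D logo colours are c_k = M_k θ for a shared texture θ, and the texture gradient is the sum of every mesh's back-propagated contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(size=6)                     # shared logo texture (flattened)
maps = [np.eye(6)[rng.integers(0, 6, size=4)]   # per-mesh texture coordinate maps
        for _ in range(3)]                      # 3 meshes, 4 logo faces each
targets = [rng.uniform(size=4) for _ in range(3)]

def loss_and_grad(theta, maps, targets):
    """Toy quadratic loss per mesh; grad w.r.t. theta aggregates M_k^T g_k."""
    total, grad = 0.0, np.zeros_like(theta)
    for M, t in zip(maps, targets):
        c = M @ theta                 # colours of this mesh's 3D logo
        r = c - t                     # per-face residual (stand-in for detector loss)
        total += 0.5 * float(r @ r)
        grad += M.T @ r               # aggregated back into the shared texture
    return total, grad

l0, g = loss_and_grad(theta, maps, targets)
l1, _ = loss_and_grad(theta - 0.1 * g, maps, targets)  # one gradient step
```

One gradient step on the shared texture lowers the joint loss over all meshes simultaneously, which is exactly the "universal" behaviour the fixed coordinate maps are meant to deliver.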

Figure 3: Our logo transformation scheme. 3D logos are detached from the person mesh as submeshes, then mapped to the logo texture via the texture mapping (2D mapping) we construct. 3D adversarial logos are generated by assigning color information from the logo texture onto the 3D logos.

3.2 Differentiable Renderer

A differentiable renderer can take meshes as input and update mesh textures via back-propagation for different purposes. Our work is built upon a specific renderer, the Neural 3D Mesh Renderer [13], although any other differentiable renderer would serve our goal here.

The Neural Renderer proposed in [13] generates color cubes for each face of the mesh. By adopting an approximate rasterization gradient, in which piece-wise constant functions are approximated via linear interpolation, together with centroid color sampling, the renderer is capable of editing a mesh’s texture through back-propagation. We apply the Neural Renderer to render the 3D logos output by the logo transformation into various 2D adversarial logos and, meanwhile, to render the 3D human mesh. The last step is to attach an (augmented) 2D adversarial logo to the corresponding rendered 2D person image and then composite it with a real background image, yielding the final 2D adversarial image that aims to fool the object detector. During back-propagation, updates to the 2D adversarial logo images are fed back to the 3D adversarial logo, and eventually back to the initial logo texture, thanks to the renderer’s differentiability.

3.3 Training Against 3D Adversarial Logo Attacks

The aim of our work is to generate a 3D adversarial logo on a human mesh such that the human mesh with this 3D adversarial logo can fool the object detector when rendered into a 2D image. We next discuss how we compose the training loss to achieve this goal.

Disappearance Loss

To fool an object detector is to diminish the confidence of the bounding boxes that contain the target object, so that it cannot be detected. We exploit the disappearance loss [6], which takes the maximum confidence over all bounding boxes that contain the target object as the loss:

$$\mathcal{L}_{dis}(x, y) = \max_{\mathrm{conf}}\big(F(x),\, y\big),$$

where $x$ is the image streamed into the object detector, $y$ is the object class label, $F$ is the object detector that outputs bounding box predictions, and $\max_{\mathrm{conf}}$ calculates the maximum confidence for class $y$ among all the bounding boxes.
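A minimal numpy sketch of this loss, under an assumed detector output format (per-box, per-class confidences; a real YOLO head also predicts box coordinates and objectness):

```python
import numpy as np

def disappearance_loss(box_confs, target_cls):
    """box_confs: (N, C) array of per-box class confidences from the detector.
    The loss is the highest confidence assigned to the target class;
    minimizing it drives the target object to 'disappear'."""
    if box_confs.size == 0:
        return 0.0                     # nothing detected: loss already minimal
    return float(box_confs[:, target_cls].max())

preds = np.array([[0.10, 0.90],        # box 0: 0.90 confidence for "person"
                  [0.70, 0.20],
                  [0.30, 0.60]])
```

Taking the maximum (rather than, say, the mean) focuses the gradient on the single box most likely to reveal the person, which is the box that decides whether the detector fires.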

Total Variance Loss

To further smooth the augmented 2D adversarial logos and avoid inconsistent predictions, a total variation loss is enforced:

$$\mathcal{L}_{tv} = \sum_{i,j} \sqrt{\big(p_{i,j} - p_{i+1,j}\big)^2 + \big(p_{i,j} - p_{i,j+1}\big)^2},$$

where $p = R(\cdot)$ denotes the pixel values produced by the differentiable renderer $R$ of Section 3.2, a notation we use to emphasize that the loss is computed over the pixel values of the 2D adversarial logos, and $p_{i,j}$ is the pixel value at coordinate $(i, j)$. This loss is added to improve physical realizability.
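A short numpy sketch of the smoothness term (using a squared-difference variant of total variation for simplicity; the paper's exact formulation may differ): neighbouring pixels of the rendered logo are penalized for disagreeing.

```python
import numpy as np

def total_variation(img):
    """img: (H, W) or (H, W, 3) array of pixel values of the rendered logo.
    Sum of squared differences between vertical and horizontal neighbours."""
    dh = img[1:, ...] - img[:-1, ...]      # vertical neighbour differences
    dw = img[:, 1:, ...] - img[:, :-1, ...]  # horizontal neighbour differences
    return float((dh ** 2).sum() + (dw ** 2).sum())

flat = np.full((4, 4), 0.5)                  # perfectly smooth patch
noisy = (np.indices((4, 4)).sum(0) % 2).astype(float)  # checkerboard
```

A smooth patch incurs zero penalty while a checkerboard is maximally penalized; keeping the trained logo smooth is what makes it printable on physical garments.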

The overall training loss we minimize combines the above two losses, with hyper-parameters $\alpha$ and $\beta$:

$$\mathcal{L}_{total} = \alpha\,\mathcal{L}_{dis} + \beta\,\mathcal{L}_{tv}.$$
4 Experiments and Results

Figure 4: Examples of our adversarial attack on YOLOv2. We perform multi-angle training over three meshes. To show that our attack works at different rendering angles, we attach our adversarial logo at different positions on different human meshes (one on the front and the other on the back). We display the results of logo “G” (first row) and logo “H” (second row) under three different views (−10, 0, 10 degrees). The results under three different angle views conceptually verify that our adversarial logos can prevent the object detector from recognizing the person in different poses, even when perturbed by 3D rotations.

4.1 Dataset Preparation

Figure 5: An angle view setting example. From left to right: −10 degrees, 0 degrees and 10 degrees for one background image and one human model.
Generating Multiple Angle Views

We generate 2D adversarial images via the Neural Renderer under varying angle views. Specifically, we pick one specific angle view as our benchmark view of 0 degrees. We fix the rendering camera view and rotate the 3D human model, denoting counterclockwise rotations as positive angles and clockwise rotations as negative. Figure 5 presents an example of our angle view settings. When compositing with background images, we always assume the fixed camera view captures the background, so no further cropping, translation, or rotation is applied to background images.
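The view generation reduces to rotating the model about its vertical axis while the camera stays fixed. A minimal numpy reconstruction (our sketch; the actual pipeline passes the angle to the Neural Renderer):

```python
import numpy as np

def rotate_y(vertices, degrees):
    """Rotate mesh vertices (N, 3) about the vertical y-axis; positive
    angles are counterclockwise (assumed convention)."""
    t = np.radians(degrees)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return vertices @ R.T

views = range(-10, 11)                 # 21 one-degree increments in [-10, 10]
v = np.array([[1.0, 0.0, 0.0]])        # a single toy vertex
rotated = {a: rotate_y(v, a) for a in views}
```

Each of the 21 rotated copies is rendered and composited with every background image, which is how one background yields many training samples.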

Background Images

For training backgrounds, we crawl a set of real photos from Google search, using the keywords street, avenue, park, and lawn. We manually inspect all photos and discard those that contain a person or other object distractors, ending up with 312 “clean background”-style images used for training. Multiple synthesized images are generated per background image, depending on how many angle views we sample. For most of our experiments (except for single-angle training, see Section 4.3.1), we sample over 21 views, leading to a training set of size 6,741. We scale up the testing backgrounds by sampling images from the MIT Places dataset [34] with the same criterion, yielding 32,000 “clean background”-style images for testing.

Mesh Model Data

For the mesh data, we select three publicly available 3D human models, each with complete textures and texture coordinate maps. We edit each mesh to select the faces that form our 3D logos under the given 2D shape contours. Afterward, we process every 3D logo via OpenMesh 8.0 to extract its centroid coordinates for our logo transformation step.

4.2 Implementation Details

All experiments are implemented in PyTorch 1.0.0, along with the PyTorch implementation of the Neural Renderer. We choose the default hyperparameters of the Neural Renderer: elevation 0, camera distance 2.0, ambient light 1.0, light direction 0.0, and the cubic size in Section 3.2 set to 4. For data augmentation, we add contrast uniformly generated from 0 to 1, brightness uniformly generated from 0 to 1, and noise uniformly generated from to ; all three are added pixelwise. Training is conducted on one Nvidia GTX 1080 Ti GPU. The default optimizer is Adam, with the learning rate initialized at 0.03 and decayed by a factor of 0.1 every 50 epochs.
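The pixelwise augmentation can be sketched as follows. This is our hedged reading of the description above: contrast is applied multiplicatively, brightness additively, and the noise range (elided in the source) is set to ±0.1 purely for illustration.

```python
import numpy as np

def augment_logo(logo, rng, noise_range=0.1):
    """Pixelwise augmentation of the logo texture: contrast ~ U(0, 1),
    brightness ~ U(0, 1), uniform noise (range is our assumption),
    clipped back to the valid [0, 1] range."""
    contrast = rng.uniform(0.0, 1.0, size=logo.shape)
    brightness = rng.uniform(0.0, 1.0, size=logo.shape)
    noise = rng.uniform(-noise_range, noise_range, size=logo.shape)
    return np.clip(logo * (1.0 + contrast) + brightness + noise, 0.0, 1.0)

rng = np.random.default_rng(0)
logo = np.full((8, 8, 3), 0.3)      # toy 8x8 grey logo texture
aug = augment_logo(logo, rng)
```

Randomizing photometric conditions at every step discourages the logo from overfitting to one lighting setup, which is part of what makes the attack survive rendering changes.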

During all experiments, the hyper-parameters in (4) are held fixed. The batch size of background images is set to 8, while the batch size of rendered meshes for synthesis is set to 1. The total training epochs in single-angle and multi-angle training (explained in Section 4.3.1) are 100 and 20, respectively.

The default object detectors are YOLOv2 [19] and YOLOv3 [20], with confidence thresholds set to 0.6 and 0.7, respectively. The logo texture shapes we mostly exhibit are the characters “G” and “H”, while more character shapes are investigated in our study as well.

4.3 Experiment Results and Analysis

4.3.1 Single- and Multi-angle Evaluation

Single-angle training

We first apply our 3D adversarial attack pipeline over single-angle rendered images, which we call single-angle training. We illustrate the main idea using the 0 degree view as an example. Given a mesh, we synthesize our 2D images with background images from the training set under the 0 degree angle view. We then train over those images to obtain our adversarial logos and logo texture. The testing images are rendered under the same 0 degree view and composited with test backgrounds. The attack success rate denotes the ratio of testing samples on which the target detector misses the person. The first column of Table 1 compares the results: as a proof of concept, our proposed attack successfully compromises both YOLOv2 and YOLOv3 (the latter appears relatively more robust), under both the “H” and “G” logo shapes.
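The attack success rate defined above can be computed as follows (a sketch with toy stand-in detections; a real evaluation would read the person-class confidences out of the detector):

```python
import numpy as np

def attack_success_rate(detections_per_image, threshold):
    """detections_per_image: one list of person-confidences per test image.
    An image counts as a successful attack if no person confidence
    reaches the detector's threshold."""
    misses = sum(1 for confs in detections_per_image
                 if not np.any(np.asarray(confs) >= threshold))
    return misses / len(detections_per_image)

# 4 toy test images against a threshold of 0.6 (YOLOv2's in this paper):
dets = [[0.2], [0.95], [], [0.55, 0.3]]
rate = attack_success_rate(dets, 0.6)
```

For multi-angle evaluation the same quantity is computed per view and then averaged across views.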

Multi-angle training

We then extend to a joint 3D view training setting called multi-angle training. We uniformly sample the degrees in [−10, 10] with one-degree increments, leading to 21 discrete rendering views. Under this setting, both the training set and the test set are enlarged by a multiplier of 21. We compute the multi-angle success rate by averaging the success rates across all views. Results are summarized in the second column of Table 1. As can be seen, the lower success rate implies that the multi-angle attack is more challenging than the single-angle attack, where no view perturbation is considered.

The numbers we report in Table 1 are consistent with our visual results. Some selective images of our multi-angle training are shown in Figure 4. As one could observe, our adversarial logo can mislead the pre-trained object detector and make the person unrecognizable under both logo “G” and logo “H”.

Object Detector        Single angle view    Multiple angle views
YOLOv2 (Baseline)      0.01                 0.01
YOLOv3 (Baseline)      0.01                 0.01
YOLOv2, logo “G”       0.86                 0.88
YOLOv2, logo “H”       0.91                 0.74
YOLOv3, logo “G”       0.79                 0.67
YOLOv3, logo “H”       0.60                 0.41
Table 1: Attack success rates of logos “G” and “H” in single-angle and multi-angle training. The baselines apply the detectors to synthesized human images without adversarial logos.

4.3.2 Attack to unseen angle views

Single-angle training with Multi-angle attack

To show our method is robust against 3D rotations, we conduct a multi-angle attack after single-angle training. We first train at 0 degrees, but use 21 views ([−10, 10] degrees) to attack the detector. The results shown in Figure 9 demonstrate that our method is stable against small model angle perturbations. Figure 6 provides an example where our 3D adversarial logo hides the person from the detector. Though indistinguishable to human eyes, tiny perturbations in images cannot be underestimated in adversarial attacks [33, 1, 5]. Nevertheless, our method is not affected by the pixel-level changes that can collapse other 2D patch adversarial works.

Figure 6: A successful attack under unseen views with single-angle training. Our 3D adversarial logo fools the YOLOv2 detector under all angle views, most of which are never seen by the detector during training.
Multi-angle training with unseen-angle attack

We extend our experiments to test robustness under more angle views. After multi-angle training with views in [−10, 10] degrees, we attack the detector using all angle views over a much wider range. Figure 7 plots the attack success rate over all test images, and we provide examples of unseen-view attacks in Figure 8. The plot in Figure 7 reveals the limitations of our work: when the rendering view moves away from the training views, we observe a decaying curve that eventually converges to zero. One plausible explanation is that clipping of our adversarial logos in the rendering pipeline leads to a loss of information from the logo textures, thereby severely affecting performance.

Figure 7: The attack success rate for each angle view with multi-angle training. The detector is YOLOv2 and the training views ([−10, 10] degrees) are highlighted within dashed lines. There is a massive performance drop when the angle view change is relatively large.
Figure 8: Our attack results on YOLOv2 under unseen angle views. When the camera rotates drastically, part of our adversarial logo disappears from the rendered image, and our attack fails when the unseen view reaches 50 degrees.

4.3.3 Comparison with adversarial patch attacks

For fair comparison with previous adversarial patch attacks (cf. [25, 3, 6, 29]), we perform single-angle training in our pipeline and conduct the conventional 2D patch adversarial attack [25] as follows: i) we apply masks to generate 2D patches with the same shape as our logo texture; ii) unlike our 3D adversarial attack, we add 2D perturbations (translation and rotation) to optimize the performance of the 2D patch adversarial attack; iii) we then run the same testing setup, but replace the logo trained with our proposed method by the masked patch trained via the state-of-the-art 2D method. In other words, the major difference is whether 3D perturbations are considered during training.

We compare the performance of the two schemes under the multi-angle attack. We synthesize test images with test background images under different views ([−10, 10] degrees), applying the identical rendering scheme for both the 2D and 3D methods (replacing the logo texture with the 2D adversarial patch when performing 2D patch multi-angle testing). Only with this procedure can we ensure the rendered logos share the same positions and the same perturbations under model angle changes. We compare the attack success rate (defined in Section 4.3.1) between the two methods and report the results in Figure 9. We observe that our method outperforms the conventional 2D patch adversarial attack scheme. Even at the training view (0 degrees), the 2D method fails, due to the distortion when the logo is draped onto clothes via rendering. This experiment indicates that our adversarial logos can naturally shift as the physical scene changes: we do not require the patch to be center-aligned or at a fixed position, whereas the method in [25] yields a position-sensitive adversarial patch.

Figure 9: The attack success rate for each angle view in [−10, 10] degrees. We compare the 2D adversarial patch attack with our 3D adversarial logo attack on YOLOv2. The dashed line emphasizes that we train at a single angle but test at multiple angle views.

4.3.4 Shape adaptivity

Although our 3D adversarial logo attack is not restricted to a particular logo shape, the results for different shapes reveal that the numerical differences originating from shape are not negligible, as seen in Table 1 and Figure 10. We select six different shapes (the characters “G”, “O”, “C”, “X”, “T”, “H”) as contours of our logo texture. The motivation is that the first three characters contain curves and complicated 2D contours, while the latter three consist of parallel and regular contours; we believe these characters cover the common cases encountered when designing a new logo texture shape. Figure 11 compares their performance in the multi-angle attack under single-angle training. It can be seen that the more regular and symmetric the shape, the higher the attack success rate attained. However, this does not hold for the character “T”. A possible explanation is that symmetry along a horizontal axis might be most crucial for deceiving the detector: both “C” and “G” outperform “T”, making “T” the worst result among all logos.

Figure 10: Our shape gallery. Trained logo textures from left to right are: “G”,“O”,“C”,“X”,“T”,“H”.
Figure 11: The attack success rate among six different logo shapes. We conduct a single-angle training (0 degree) and test under different views, with only the shape of logo texture altered.

4.3.5 Attack to unseen human mesh

One goal of our framework is to transfer one logo texture across all 3D logos on distinct meshes, with only the logo texture updated via back-propagation. This setting enables joint mesh training and promises one-for-all generalization to attack the detector on different meshes. In Figure 12, we show that our joint training maintains consistent performance on unseen meshes. We use two meshes for training and use the third mesh (the woman) to test the performance of the generated adversarial logo. The mean attack success rate is %. Although we did not observe the same attack success rate as in same-mesh training, our method retains its potential to attack in more general cases where logos are shown on different humans.

Figure 12: Attack success rate on the unseen mesh. We perform single-view training (0 degrees) and test under 21 different views. Compared to Figure 9, we test our result on a new mesh not seen during the training process.

4.3.6 Attack to unseen detector

To test generalizability across detectors, we choose Faster R-CNN (Faster Region-based Convolutional Neural Network [21]) and SSD (Single Shot Detector [15]) as our targets. We train our adversarial logos on YOLOv2 under degree views and feed test images into the two unseen detectors under degree views. We achieve a 42% average attack success rate on SSD and 46% on Faster R-CNN. Successful and failed examples for the two detectors are shown in Figure 13. Note that in [25], the transferability of 2D adversarial patch attacks to unseen detectors is often in jeopardy; in comparison, our 3D adversarial logo appears to transfer better across unseen detectors, despite not being specifically optimized for them.
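The per-detector transfer numbers above reduce to the same success criterion applied independently to each unseen detector. A minimal sketch, with hypothetical function names and scores (the real evaluation runs the rendered test images through actual Faster R-CNN and SSD models):

```python
def transfer_asr(per_detector_scores, conf_threshold=0.5):
    """Attack success rate per detector: a test image counts as a success
    if that detector's best 'person' score falls below the threshold."""
    return {
        name: sum(s < conf_threshold for s in scores) / len(scores)
        for name, scores in per_detector_scores.items()
    }

# Hypothetical max person-confidence per rendered test image:
scores = {
    "ssd":         [0.3, 0.4, 0.7, 0.2, 0.6, 0.1, 0.8, 0.3, 0.9, 0.4],
    "faster_rcnn": [0.2, 0.6, 0.3, 0.4, 0.7, 0.1, 0.3, 0.8, 0.4, 0.9],
}
print(transfer_asr(scores))
```

Because the logo is trained only against YOLOv2, any success measured this way on SSD or Faster R-CNN is purely black-box transfer.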

Figure 13: Examples of our 3D adversarial logos against unseen detectors. Left: attacks on Faster R-CNN; right: attacks on SSD. The first eight images (in red boxes) are success cases, while the two right-end images (in green boxes) are failure cases.

5 Conclusion

We have presented a novel 3D adversarial logo attack on human meshes. We start from a logo texture with a designated shape and derive 2D adversarial logos that are naturally rendered from 3D adversarial logos on human meshes. Thanks to the differentiable renderer, the update back to the logo texture image is shape-free, mesh-free, and angle-free, leading to a stable attack success rate under different viewing angles, human models, and logo shapes. Our method opens the potential for fashion-inspired designs in realistic adversarial attacks. In the future, we hope to verify the feasibility of our method in the physical world by examining the printability of our adversarial logos.


  • [1] A. Azulay and Y. Weiss (2018) Why do deep convolutional networks generalize so poorly to small image transformations?. arXiv preprint arXiv:1805.12177. Cited by: §1, §4.3.2.
  • [2] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer (2017) Adversarial patch. arXiv preprint arXiv:1712.09665. Cited by: §1.
  • [3] S. Chen, C. Cornelius, J. Martin, and D. H. P. Chau (2018) Shapeshifter: robust physical adversarial attack on faster r-cnn object detector. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 52–68. Cited by: §2.2, §4.3.3.
  • [4] T. Chen, S. Liu, S. Chang, Y. Cheng, L. Amini, and Z. Wang (2020-06) Adversarial robustness: from self-supervised pre-training to fine-tuning. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §2.2.
  • [5] L. Engstrom, B. Tran, D. Tsipras, L. Schmidt, and A. Madry (2019) Exploring the landscape of spatial robustness. In International Conference on Machine Learning, pp. 1802–1811. Cited by: §1, §4.3.2.
  • [6] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, F. Tramer, A. Prakash, T. Kohno, and D. Song (2018) Physical adversarial examples for object detectors. arXiv preprint arXiv:1807.07769. Cited by: §3.3, §4.3.3.
  • [7] R. Girshick (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §2.2.
  • [8] I. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), Cited by: §2.2.
  • [9] S. Gui, H. Wang, H. Yang, C. Yu, Z. Wang, and J. Liu (2019) Model compression with adversarial robustness: a unified optimization framework. In Proceedings of the 33rd Conference on Neural Information Processing Systems, Cited by: §2.2.
  • [10] T. Hu, T. Chen, H. Wang, and Z. Wang (2020) Triple wins: boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. In ICLR, Cited by: §2.2.
  • [11] L. Huang, C. Gao, Y. Zhou, C. Xie, A. L. Yuille, C. Zou, and N. Liu (2020) Universal physical camouflage attacks on object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 720–729. Cited by: §2.2.
  • [12] H. Kato, Y. Ushiku, and T. Harada (2018) Neural 3d mesh renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916. Cited by: §2.1.
  • [13] H. Kato, Y. Ushiku, and T. Harada (2018) Neural 3d mesh renderer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.2, §3.2.
  • [14] H. D. Liu, M. Tao, C. Li, D. Nowrouzezahrai, and A. Jacobson (2018) Beyond pixel norm-balls: parametric adversaries using an analytically differentiable renderer. arXiv preprint arXiv:1808.02651. Cited by: §2.3.
  • [15] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 21–37. External Links: ISBN 978-3-319-46448-0 Cited by: §4.3.6.
  • [16] J. Lu, H. Sibai, E. Fabry, and D. Forsyth (2017) No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501. Cited by: §2.2.
  • [17] T. H. Nguyen-Phuoc, C. Li, S. Balaban, and Y. Yang (2018) Rendernet: a deep convolutional network for differentiable rendering from 3d shapes. In Advances in Neural Information Processing Systems, pp. 7891–7901. Cited by: §2.1.
  • [18] A. Raj, C. Ham, C. Barnes, V. Kim, J. Lu, and J. Hays (2019) Learning to generate textures on 3d meshes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 32–38. Cited by: §2.1.
  • [19] J. Redmon and A. Farhadi (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §2.2, §4.2.
  • [20] J. Redmon and A. Farhadi (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §4.2.
  • [21] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 91–99. Cited by: §4.3.6.
  • [22] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter (2016) Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540. Cited by: §3.3.
  • [23] H. Su, C. R. Qi, Y. Li, and L. J. Guibas (2015) Render for cnn: viewpoint estimation in images using cnns trained with rendered 3d model views. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2686–2694. Cited by: §2.1.
  • [24] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), Cited by: §2.2.
  • [25] S. Thys, W. Van Ranst, and T. Goedemé (2019) Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §1, §2.2, §4.3.3, §4.3.3, §4.3.6.
  • [26] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel (2017) Ensemble adversarial training: attacks and defenses. arXiv preprint arXiv:1705.07204. Cited by: §1.
  • [27] T. Tsai, K. Yang, T. Ho, and Y. Jin (2020) Robust adversarial objects against deep learning models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 954–962. Cited by: §2.3.
  • [28] R. R. Wiyatno and A. Xu (2019) Physical adversarial textures that fool visual object tracking. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4822–4831. Cited by: §2.2.
  • [29] Z. Wu, S. Lim, L. Davis, and T. Goldstein (2019) Making an invisibility cloak: real world adversarial attacks on object detectors. arXiv preprint arXiv:1910.14667. Cited by: §4.3.3.
  • [30] C. Xiao, D. Yang, B. Li, J. Deng, and M. Liu (2019) Meshadv: adversarial meshes for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6898–6907. Cited by: §1, §2.3.
  • [31] K. Xu, G. Zhang, S. Liu, Q. Fan, M. Sun, H. Chen, P. Chen, Y. Wang, and X. Lin (2019) Evading real-time person detectors by adversarial t-shirt. arXiv preprint arXiv:1910.11099. Cited by: §2.2.
  • [32] X. Zeng, C. Liu, Y. Wang, W. Qiu, L. Xie, Y. Tai, C. Tang, and A. L. Yuille (2019) Adversarial attacks beyond the image space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4302–4311. Cited by: §2.3.
  • [33] R. Zhang (2019-09–15 Jun) Making convolutional networks shift-invariant again. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 7324–7334. External Links: Link Cited by: §1, §4.3.2.
  • [34] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2017) Places: a 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §4.1.
  • [35] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §2.1.