Deep neural networks are notoriously vulnerable: human-imperceptible perturbations or doctoring of images can drastically change a trained model's recognitions and predictions. To probe this mis-recognition and mis-detection vulnerability, researchers propose 2D adversarial attacks that manipulate pixels on the image while maintaining overall visual fidelity. Such perturbations, negligible to human eyes, cause trained deep neural networks to draw drastically false conclusions with high confidence. Numerous adversarial attacks have been designed and tested on deep learning tasks such as image classification and object detection. Among these extensive efforts, the focus has recently shifted to structurally editing only certain local areas of an image, known as patch adversarial attacks. Thys et al. propose a pipeline to generate a 2D adversarial patch and attach it to the pixels of humans appearing in 2D images. In principle, a person with this 2D adversarial patch will fool, or become “invisible” to, deep-learned human image detectors. However, such 2D image adversarial patches are often not robust to image transformations, especially under multi-view 2D image synthesis in reconstructed 3D computer graphics settings. Examining 2D renderings of 3D scene models across various human postures and viewing angles, the 2D attack can easily lose its strength under such 3D viewing transformations. Moreover, while square or rectangular adversarial patches are typically considered, richer shape variations and their implications for attack performance have rarely been discussed before.
Can we naturally stitch a patch onto human clothes to make the adversarial attack more versatile and realistic? The defects of purely 2D scenarios lead us to consider 3D adversarial attacks, where we view a person as a 3D object instead of its 2D projection. As an example, the domain of mesh adversarial attacks refers to deformations of a mesh's shape and texture to fulfill the attack goal. However, these 3D adversarial attacks have not yet incorporated the concept of a patch adversarial attack: they treat the entire texture and geometric information of 3D meshes as attackable. Moreover, a notable branch of research shows that infinitesimal rotations and shifts of 2D images may cause huge perturbations in predictions [33, 1, 5], no matter how negligible to human eyes. What if the perturbation does not come from 2D conditions (e.g., 2D rotation and translation), but rather results from physical-world changes, like 3D view rotations and body posture changes? Furthermore, an effective attack on certain meshes does not imply generalized effectiveness on other meshes; e.g., the attack may fail when switching to a different clothes mesh. These downsides motivate us to develop more generalizable 3D adversarial patches.
The primary aim of this work is to generate a structured patch in an arbitrary shape (called a “logo” by us), termed a 3D adversarial logo, that, when appended to a 3D human mesh and then rendered into 2D images, can consistently fool the object detector under different human postures. A 3D adversarial logo is defined over a sub-region of a 3D mesh whose texture and position can be altered. The 3D human meshes, along with their 3D adversarial logos, are then rendered on top of real-life background images. The specific contributions of our work are highlighted as follows:
We propose a logo transformation pipeline that maps an arbitrary 2D shape (“logo”) onto a mesh to form the proposed 3D adversarial logo. Moreover, the 3D adversarial logo is updated as the loss is back-propagated from the 2D adversarial logo, and eventually to the texture image. The pipeline easily extends to multiple-mesh joint training.
We propose a general 3D-to-2D adversarial attack protocol via differentiable physical rendering. We render 3D meshes, with the 3D adversarial logo attached, into 2D scenarios and synthesize images that can fool the detector. The shape of our 3D adversarial logo comes from the selected logo texture in the 2D domain; hence, we can perform versatile adversarial training with shape and position controlled.
We justify that our model can adapt to multi-angle scenarios with much richer variations than 2D perturbations can depict, taking one important step towards studying the physical-world fragility of deep networks.
2 Related Work
2.1 Differentiable Mesh
Various tasks, including depth estimation and 3D reconstruction from 2D images, have been explored with deep neural networks and have witnessed success. Less considered is the reverse problem: how can we render a 3D model back to 2D images to fulfill desired tasks?
Discrete operations in the two most popular rendering methods (ray-tracing and rasterization) hamper differentiability. To fill this gap, numerous approaches have been proposed to edit mesh texture via gradient descent, providing the groundwork to combine traditional graphics renderers with neural networks. Nguyen-Phuoc et al. propose a CNN architecture leveraging a projection unit to render a voxel-based 3D object into 2D images. Unlike the voxel-based method, Kato et al. adopt linear-gradient interpolation to overcome vanishing gradients in rasterization-based rendering. Raj et al. generate textures for 3D meshes from photo-realistic pictures: they apply RenderForCNN to sample viewpoints that match those of the input images, adapt CycleGAN to generate textures for the 2.5D information rendered in the generated multi-viewpoints, and eventually merge these textures into a single texture to render the object into the 2D world.
2.2 Adversarial Patch in 2D Images
Adversarial attacks [24, 8, 10, 4, 9] are proposed to analyze the robustness of CNNs, and have recently been increasingly studied in object detection tasks in the form of adversarial patches. For example, one line of work provides a stop-sign attack against Faster-RCNN, and another fools the YOLOv2 object detector through pixel-wise patch optimization. The target patch, under simple 2D transformations (such as rotation and scaling), is applied to a near-human region in 2D real photos and then trained to fool the object detector. To demonstrate realistic adversarial attacks, they physically let a person hold the 3D-printed patch and verify that they “disappear” from the object detector. Nevertheless, such attacks are easily broken by real-world 3D variations, as pointed out in prior work. Wiyatno et al. propose to generate physical adversarial textures as patches in backgrounds; their method allows the patch to be “rotated” in 3D space and then added back to 2D space. Xu et al. discuss how to incorporate the physical deformation of T-shirts into patch adversarial attacks, taking a step forward, yet only under a fixed camera view. A recent work by Huang et al. attacks the region proposal network (RPN) by synthesizing semantic patches that are naturally anchored onto human clothes in the digital space. They test the garment in the physical world with motion and justify their results in digital space.
2.3 Mesh Adversarial Attack
A 2D object can be considered a projection of a 3D model. Therefore, attacking in 3D space and then mapping to 2D space can be seen as a way of augmenting the perturbation space. In the past two years, different adversarial attack schemes over 3D meshes have been proposed. For instance, Tsai et al. perturb the positions of point clouds to generate adversarial meshes that fool 3D shape classifiers. Liu et al. alter the lighting and geometry information of a physical model to generate adversarial attacks, modeling each pixel in a natural image as the interaction of lighting conditions and the physical scene, so that the pixel maintains its natural appearance. More recently, Xiao et al. and Zeng et al. generate adversarial samples by altering the physical parameters (e.g., illumination) of rendering results of target objects. They generate meshes with negligible perturbations to the texture and show that, under certain rendering assumptions (e.g., a fixed camera view), the adversarial mesh can still deceive state-of-the-art classifiers and detectors. Overall, most existing works perturb the global texture of the image, while the idea of generating an adversarial sub-region/patch remains unexplored in the 3D mesh domain.
3 The Proposed Framework
In this section, we seek a concrete solution to the 3D adversarial logo attack, with the following goals in mind:
The adversarial training is differentiable: we can modify the source logo via end-to-end loss back-propagation. The major challenge is to replace a traditional discrete renderer with a differentiable one and to update the corresponding texture map of each mesh.
The 3D adversarial logo is universal: for every distinct human mesh, we hope to stitch on a 3D adversarial logo generated from the identical 2D logo texture. During both training and testing, we only modify the rendered texture of that logo so that it changes concurrently with the human mesh textures.
Our 3D adversarial logo attack pipeline is outlined in Figure 2. In the training process, we first define a target logo texture with a given shape in the 2D domain (e.g., the character “H”), and perturb the logo texture with random noise, contrast, and brightness. We then map the logo texture to 3D surfaces to form 3D adversarial logos on different meshes. Each human mesh and its 3D adversarial logo are then rendered together into a 2D person image associated with its 2D adversarial logo (we refer to the 2D adversarial logo as the 2D image counterpart into which the 3D adversarial logo is rendered). These images of persons with logos are further synthesized with background images, after which we stream the synthesized images into the object detector for adversarial training.
Due to the end-to-end differentiability, the training process updates the 3D adversarial logo via back-propagation, and the updates are further back-propagated to the logo texture. Within one epoch, the above process is conducted on all training meshes until all background images have been trained with, thereby ensuring the logo's universal applicability to different meshes.
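One epoch of this joint training can be sketched as follows. This is a simplified illustration, not our exact implementation: `render` and `detect` are hypothetical stand-ins for the differentiable renderer plus compositing (Section 3.2) and for the disappearance loss on the detector's output, and the total variance term is applied directly to the shared texture here for brevity.

```python
import torch

def train_epoch(logo_texture, meshes, backgrounds, render, detect, optimizer,
                alpha=1.0, beta=0.1):
    """One epoch of joint multi-mesh training (simplified sketch).

    logo_texture: the shared (3, H, W) 2D logo image, the only trainable
                  tensor; every 3D adversarial logo is derived from it.
    render(mesh, tex, bg): stand-in for differentiable rendering plus
                  background compositing; returns the synthesized image.
    detect(img):  stand-in for the disappearance loss, i.e. the maximum
                  person confidence output by the object detector.
    """
    for mesh in meshes:                       # iterate over all training meshes
        for bg in backgrounds:                # ...and all background images
            img = render(mesh, logo_texture, bg)
            # total variance term on the shared logo texture
            tv = (logo_texture[:, 1:, :] - logo_texture[:, :-1, :]).abs().sum() \
               + (logo_texture[:, :, 1:] - logo_texture[:, :, :-1]).abs().sum()
            loss = alpha * detect(img) + beta * tv
            optimizer.zero_grad()
            loss.backward()                   # gradients flow back to logo_texture
            optimizer.step()
```

Because `logo_texture` is the only optimized tensor, the same epoch loop serves any number of meshes, which is what makes the learned logo universal.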
3.1 Logo Transformation
Different from existing 3D attacks, our network aims at a universal patch attack across different input human meshes. This is achieved by editing the 3D adversarial logo over multiple meshes concurrently. Due to the discrete polygonal mesh setting, as well as the high degrees of possible distortion and deformation in different 3D meshes, training one universal adversarial logo is highly challenging. We illustrate our logo transformation strategy below, which offers one explicit construction of the texture coordinate map for each 3D logo to generate our 3D adversarial logo. Detailed implementations are included in the supplementary materials.
Given the logo texture defined as an RGB image, our proposed logo transformation converts it into a 3D logo that can be edited on a single human mesh via two basic operations:
2D Mapping: Project a 3D logo surface onto the 2D domain to generate a texture coordinate mapping.
3D Mapping: Extract color information from the logo texture and map it onto each face of a 3D logo to composite a 3D adversarial logo.
The overall logo transformation is the composition of the 3D mapping applied after the 2D mapping.
With the logo transformation, the chosen 2D logo shape is mapped to a 3D logo on each distinct human mesh to form the 3D adversarial logo. By leveraging a differentiable renderer (to be discussed in Section 3.2), when rendering the 3D adversarial logos into adversarial logo images, the updates are back-propagated from those images to the 3D adversarial logos, and thereby all update information is aggregated into the logo texture. In our work, the logo transformation is constructed only once, and we fix the texture coordinate map for each human mesh. When forwarding to the detector, the colors of our 3D adversarial logos are obtained via the distinct texture coordinate maps, and hence we can update all 3D adversarial logos from the logo texture in synchronization.
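The 3D mapping step can be sketched as a differentiable texture lookup: given the fixed per-face texture coordinates (here the face centroids produced by the 2D mapping), per-face colors are sampled from the logo texture so that gradients reach the texture. The function name and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def map_logo_texture(logo_texture, uv_coords):
    """Sample per-face colors for the 3D logo from the 2D logo texture.

    logo_texture: (3, H, W) RGB logo image; set requires_grad=True so
                  updates flow back to it during training.
    uv_coords:    (F, 2) texture coordinates in [-1, 1], one per logo
                  face (fixed once the logo transformation is built).
    Returns a (F, 3) tensor of per-face colors.
    """
    grid = uv_coords.view(1, -1, 1, 2)                    # (1, F, 1, 2)
    tex = logo_texture.unsqueeze(0)                       # (1, 3, H, W)
    colors = F.grid_sample(tex, grid, align_corners=True)  # (1, 3, F, 1)
    return colors.squeeze(0).squeeze(-1).t()              # (F, 3)
```

Since `grid_sample` is bilinear and differentiable, updating the sampled face colors updates the shared logo texture, which in turn updates every mesh's 3D adversarial logo in synchronization.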
3.2 Differentiable Renderer
A differentiable renderer can take meshes as input and update the mesh’s texture via back-propagation for different purposes. Our work is built upon a specific renderer called Neural 3D Mesh Renderer , while any other differentiable renderer shall serve our goal here.
The Neural Renderer generates color cubes for each face of the mesh. By adopting an approximate rasterization gradient, where piece-wise constant functions are approximated via linear interpolation, together with centroid color sampling, the renderer is capable of editing mesh texture through back-propagation. We apply the Neural Renderer to render the 3D logo output by the logo transformation into various 2D adversarial logos and, meanwhile, to render the 3D human mesh. The last step is to attach an (augmented) 2D adversarial logo to the corresponding rendered 2D person image and then synthesize it with a real background image, yielding the final 2D adversarial image that aims to fool the object detector. During back-propagation, the updates in the 2D adversarial logo images are fed back to the 3D adversarial logo, and eventually back to the initial logo texture, thanks to the renderer's differentiability.
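The final synthesis step above amounts to differentiable alpha compositing of the rendered person (with the attached 2D adversarial logo) over the real background; a minimal sketch, with tensor shapes assumed:

```python
import torch

def composite(person_rgb, person_alpha, background):
    """Alpha-composite a rendered person over a real background image.

    person_rgb:   (3, H, W) rendered person with attached 2D adversarial logo.
    person_alpha: (1, H, W) rendered silhouette mask in [0, 1].
    background:   (3, H, W) real background photo.
    The operation is differentiable, so the detector's loss flows back
    through the composited image to the rendered logo pixels.
    """
    return person_alpha * person_rgb + (1.0 - person_alpha) * background
```

The composited image is what we stream into the object detector for adversarial training.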
3.3 Training Against 3D Adversarial Logo Attacks
The aim of our work is to generate a 3D adversarial logo on a human mesh such that the human mesh with this 3D adversarial logo fools the object detector when rendered into a 2D image. We next discuss how we compose the training loss to achieve this goal.
To fool an object detector is to diminish the confidence of the bounding boxes that contain the target object, so that it cannot be detected. We exploit the disappearance loss, which takes the maximum confidence over all bounding boxes that contain the target object as the loss:

L_dis = max_{b ∈ F(x)} Conf_b(y),

where x is the image streamed into the object detector, y is the object class label, F is the object detector that outputs bounding box predictions, and max_{b} Conf_b calculates the maximum confidence among all the bounding boxes.
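The disappearance loss can be sketched concretely; the detection format below is a hypothetical YOLO-style layout (rows of [x, y, w, h, objectness, class scores...]), not the exact interface of our detector.

```python
import torch

def disappearance_loss(detections, person_cls=0):
    """Maximum person confidence over all predicted boxes.

    detections: (N, 5 + num_classes) tensor, one row per predicted box,
                assumed YOLO-style: [x, y, w, h, objectness, class scores...].
    Minimizing this loss drives the strongest person detection toward
    zero confidence, i.e. the person "disappears" from the detector.
    """
    obj = detections[:, 4]                 # objectness per box
    cls = detections[:, 5 + person_cls]    # person class score per box
    conf = obj * cls                       # per-box person confidence
    return conf.max()
```

Because the maximum is taken over differentiable confidences, gradients flow from this loss through the detector and renderer back to the logo texture.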
Total Variance Loss
To further smooth the predictions over augmented 2D adversarial logos and avoid inconsistent predictions, a total variance loss is enforced:

L_tv = Σ_{i,j} sqrt( (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ),

where p denotes the pixel values of the 2D adversarial logos produced by the differentiable renderer of Section 3.2, and p_{i,j} is the pixel value at coordinate (i, j). This loss is added to improve physical realizability.
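A total variation term is straightforward to implement; the sketch below uses the anisotropic (absolute-difference) variant, a common simplification that may differ from the exact form used in our experiments.

```python
import torch

def total_variation(img):
    """Anisotropic total variation of an image tensor (C, H, W).

    Penalizes abrupt changes between neighboring pixels, encouraging
    smoother and more physically printable logo textures.
    """
    dh = (img[:, 1:, :] - img[:, :-1, :]).abs().sum()  # vertical differences
    dw = (img[:, :, 1:] - img[:, :, :-1]).abs().sum()  # horizontal differences
    return dh + dw
```

A perfectly flat image has zero total variation; high-frequency noise, which printers and cameras reproduce poorly, is penalized heavily.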
The overall training loss we minimize is composed of the above two losses:

L = α L_dis + β L_tv,    (4)

where α and β are the hyper-parameters.
4 Experiments and Results
4.1 Dataset Preparation
Generating Multiple Angle Views
We generate 2D adversarial images via the Neural Renderer under varying angle views. Specifically, we pick one angle view as our benchmark view of 0 degree. We fix the rendering camera view and rotate the 3D human model, using opposite signs to denote counterclockwise and clockwise rotations. Figure 5 presents an example of our angle view settings. When synthesizing with background images, we always assume the fixed camera view captures the background, so no further cropping, translation, or rotation is applied to the background images.
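Rotating the model while keeping the camera fixed amounts to applying a rotation about the vertical axis to the mesh vertices before rendering; a minimal sketch (the axis choice and sign convention here are assumptions):

```python
import math
import torch

def rotation_y(deg):
    """Rotation matrix about the vertical (y) axis for a model-angle view.

    Positive angles are taken as counterclockwise here; the camera stays
    fixed while the 3D human model rotates.
    """
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return torch.tensor([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])

# 21 discrete views: -10..10 degrees in one-degree increments,
# matching the multi-angle training setup of Section 4.3.1
views = [rotation_y(d) for d in range(-10, 11)]
```

Each background image is then paired with every sampled view, which is why the multi-angle training set grows by a factor of 21.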
For training backgrounds, we crawl a set of real photos from Google search using the keywords street, avenue, park, and lawn. We manually inspect all photos and discard those that contain a person or other distracting objects, ending up with 312 “clean background”-style images used for training. Multiple synthesized images are generated per background image, depending on how many angle views we sample. For most of our experiments (except single-angle training, see Section 4.3.1), we sample over 21 views, leading to a training set of size 6,741. We scale up the testing backgrounds by sampling images from the MIT Places dataset with the same criterion, generating 32,000 “clean background”-style images for testing.
Mesh Model Data
For the mesh data, we select three publicly available 3D human models with complete textures and coordinate maps (source meshes from https://www.turbosquid.com/3d-models/water-park-slides-3d-max/1093267 and https://renderpeople.com/free-3d-people/). We edit each mesh to select the faces that form our 3D logos under given 2D shape contours. Afterward, we process every 3D logo via OpenMesh 8.0 (https://www.openmesh.org/) to extract its centroid coordinates for our logo transformation step.
4.2 Implementation Details
All experiments are implemented in PyTorch 1.0.0, along with the Neural Renderer PyTorch implementation (https://github.com/daniilidis-group/neural_renderer). We choose the default hyper-parameters in the Neural Renderer: the elevation is 0, the camera distance is 2.0, the ambient light is 1.0, the light direction is 0.0, and the cubic size in Section 3.2 is 4. For data augmentation, we add contrast uniformly generated from 0 to 1, brightness uniformly generated from 0 to 1, and noise generated uniformly within a small symmetric range; all three are added pixelwise. Training is conducted on one Nvidia GTX 1080TI GPU. The default optimizer is Adam, with the learning rate initialized at 0.03 and decayed by a factor of 0.1 every 50 epochs.
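The pixelwise augmentation can be sketched as below; the contrast-as-multiplier interpretation and the `noise_range` value are assumptions, since the exact formulation and noise range are not fully specified here.

```python
import torch

def augment_logo(logo, noise_range=0.1):
    """Pixelwise augmentation of the rendered 2D adversarial logo.

    logo: (3, H, W) tensor in [0, 1].
    Contrast and brightness terms are drawn uniformly from [0, 1) per
    pixel, and uniform noise is added (noise_range is a hypothetical
    value). The result is clamped back to valid pixel values.
    """
    contrast = torch.rand_like(logo)                    # uniform in [0, 1)
    brightness = torch.rand_like(logo)                  # uniform in [0, 1)
    noise = (torch.rand_like(logo) * 2 - 1) * noise_range
    return (logo * contrast + brightness + noise).clamp(0.0, 1.0)
```

Randomizing contrast, brightness, and noise during training makes the learned logo robust to the lighting and sensor variations it would meet in rendered (and eventually physical) scenes.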
During all experiments, the hyper-parameters in (4) are held fixed. The batch size of background images is set to 8, while the batch size of rendered meshes for synthesis is set to 1. The total training epochs for single-angle training and multiple-angle training (explained in Section 4.3.1) are 100 and 20, respectively.
4.3 Experiment Results and Analysis
4.3.1 Single- and Multi-angle Evaluation
We first apply our 3D adversarial attack pipeline over single-angle rendered images as single-angle training; we illustrate the main idea using the 0-degree view. Given a mesh, we synthesize our 2D images with background images in the training set under the 0-degree angle view, then train over those images to obtain our adversarial logos and logo texture. The testing images are rendered under the same 0-degree view and synthesized with test backgrounds. The attack success rate denotes the ratio of testing samples on which the target detector misses the person. The first column of Table 1 compares the results: as a proof of concept, our proposed attack successfully compromises both YoloV2 and YoloV3 (which appears relatively more robust), under both the “H” and “G” logo shapes.
We then extend to a joint 3D-view training setting called multi-angle training. We uniformly sample degrees in [-10, 10] with one-degree increments, leading to 21 discrete rendering views. Under this setting, both the training set and test set are enlarged by a multiplier of 21. We compute the multi-angle success rate by averaging the success rates across all views. Results are summarized in the second column of Table 1. As can be seen, the lower success rate implies that the multi-angle attack is more challenging than the single-angle attack, where no view perturbation is involved.
The numbers reported in Table 1 are consistent with our visual results. Selected images from our multi-angle training are shown in Figure 4. As one can observe, our adversarial logo misleads the pre-trained object detector and makes the person unrecognizable under both logo “G” and logo “H”.
Table 1: Attack success rate for each object detector under a single angle view and multiple angle views.
4.3.2 Attack to unseen angle views
Single-angle training with Multi-angle attack
To prove our method is robust against 3D rotations, we conduct a multi-angle attack after single-angle training: we train at the 0-degree view, but use the surrounding views to attack the detector. The results shown in Figure 9(b) prove our method is stable against small model-angle perturbations. Figure 6 provides an example where our 3D adversarial logo hides the person from the detector. Though indistinguishable to human eyes, tiny perturbations in images cannot be underestimated in adversarial attacks [33, 1, 5]. Nevertheless, our method is not affected by the pixel-level changes that can collapse other 2D patch adversarial works.
Multi-angle training with unseen-angle attack
We extend our experiments to test robustness under more angle views. After multi-angle training, we attack the detector using a wider range of angle views, including unseen ones. Figure 7 plots the attack success rate over all test images, and we provide examples of unseen-view attacks in Figure 8. The plot in Figure 7 reveals a limitation of our work: as the rendering view moves away from the training views, we observe a decaying success-rate curve. One plausible explanation is that clipping of our adversarial logos in the rendering pipeline leads to a loss of information from the logo texture, thereby severely affecting performance.
4.3.3 Comparison with adversarial patch attacks
For fair comparison with previous adversarial patch attacks (cf. [25, 3, 6, 29]), we perform single-angle training in our pipeline and conduct a conventional 2D patch adversarial attack as follows: i) we apply masks to generate 2D patches that have an identical shape to our logo texture; ii) different from our 3D adversarial attack, we add 2D perturbations (translation and rotation) to optimize the performance of the 2D patch adversarial attack; iii) the comparison uses the same testing setup, but replaces the logo trained by our proposed method with the masked patch trained via the state-of-the-art. In other words, the major difference is whether 3D perturbations are considered during training.
We compare the performance of the two schemes under the multi-angle attack. We synthesize test images with test background images under different views, applying the identical rendering scheme for both the 2D and 3D methods (replacing the logo texture with the 2D adversarial patch when performing 2D patch multi-angle testing). Only by this procedure can we ensure the rendered logos share the same positions and the same perturbation under model-angle changes. We compare the attack success rate (defined in Section 4.3.1) between the two methods and report the result in Figure 9. We observe that our method outperforms the conventional 2D patch adversarial attack scheme. Even at the training view (0 degree), the 2D method fails, due to the distortion when our logo is draped onto clothes via rendering. This experiment indicates our adversarial logos can naturally shift as the physical scene changes: we do not require the patch to be center-aligned or at a fixed position, whereas the prior method obtains a position-sensitive adversarial patch.
4.3.4 Shape adaptivity
Although our 3D adversarial logo attack is not restricted to the shape of the logo, results across different shapes reveal that the numerical differences originating from shape are not negligible, as seen in Table 1 and Figure 10. We select six different shapes (the characters “G”, “O”, “C”, “X”, “T”, and “H”) as contours of our logo texture. The motivation is that the first three characters contain curves and complicated 2D contours, while the latter three consist of parallel and regular contours; we believe these characters cover the common cases when designing a new logo-texture shape. Figure 11 compares their performance in the multi-angle attack under single-angle training. It can be seen that the more regular and symmetric the shape, the higher the attack success rate. However, this does not hold for the character “T”. A possible explanation is that symmetry along a horizontal axis might be most crucial to deceiving the detector: both “C” and “G” outperform “T”, making the letter “T” the worst result among all logos.
4.3.5 Attack to unseen human mesh
One goal of our framework is to share one logo texture across all 3D logos on distinct meshes, with only the logo texture updated via back-propagation. This setting enables joint mesh training and promises one-for-all generalization to attack the detector on different meshes. In Figure 12, we show that our joint training maintains consistent performance over unseen meshes: we use two meshes for training and the third mesh (the woman) to test the generated adversarial logo. Although the attack success rate does not match that of same-mesh training, our method retains its potential to attack in more general cases where logos appear on different humans.
4.3.6 Attack to unseen detector
To test generalizability across detectors, we choose Faster-RCNN (Faster Region-based Convolutional Neural Networks) and SSD (Single Shot Detector) as our targets. We train our adversarial logos on YOLOV2 and feed test images rendered under the same angle views into the two unseen detectors. We achieve a 42% average attack success rate on SSD and 46% on Faster-RCNN. Successful and failed examples for the two detectors are shown in Figure 13. Note that the transferability of 2D adversarial patch attacks to unseen detectors is often in jeopardy; in comparison, our 3D adversarial logo appears to transfer better across unseen detectors, despite not being specifically optimized for them.
5 Conclusions
We have presented a novel 3D adversarial logo attack on human meshes. We start from a logo texture with a designated shape and diverge to 2D adversarial logos that are naturally rendered from 3D adversarial logos on human meshes. Thanks to the differentiable renderer, the update back to the logo texture image is shape-free, mesh-free, and angle-free, leading to a stable attack success rate under different angle views, human models, and logo shapes. Our method opens a fashion-design potential for realistic adversarial attacks. In the future, we hope to justify the feasibility of our method in the physical world by examining the printability of our adversarial logos.
-  (2018) Why do deep convolutional networks generalize so poorly to small image transformations?. arXiv preprint arXiv:1805.12177. Cited by: §1, §4.3.2.
-  (2017) Adversarial patch. arXiv preprint arXiv:1712.09665. Cited by: §1.
-  Shapeshifter: robust physical adversarial attack on faster r-cnn object detector. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 52–68. Cited by: §2.2, §4.3.3.
-  (2020) Adversarial robustness: from self-supervised pre-training to fine-tuning. Cited by: §2.2.
-  (2019) Exploring the landscape of spatial robustness. In International Conference on Machine Learning, pp. 1802–1811. Cited by: §1, §4.3.2.
-  (2018) Physical adversarial examples for object detectors. arXiv preprint arXiv:1807.07769. Cited by: §3.3, §4.3.3.
-  (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §2.2.
-  (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), Cited by: §2.2.
-  (2019) Model compression with adversarial robustness: a unified optimization framework. In Proceedings of the 33rd Conference on Neural Information Processing Systems, Cited by: §2.2.
-  (2020) Triple wins: boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. In ICLR, Cited by: §2.2.
-  (2020) Universal physical camouflage attacks on object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 720–729. Cited by: §2.2.
-  (2018) Neural 3d mesh renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916. Cited by: §2.1.
-  (2018) Neural 3d mesh renderer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.2, §3.2.
-  (2018) Beyond pixel norm-balls: parametric adversaries using an analytically differentiable renderer. arXiv preprint arXiv:1808.02651. Cited by: §2.3.
-  (2016) SSD: single shot multibox detector. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 21–37. Cited by: §4.3.6.
-  (2017) No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501. Cited by: §2.2.
-  (2018) Rendernet: a deep convolutional network for differentiable rendering from 3d shapes. In Advances in Neural Information Processing Systems, pp. 7891–7901. Cited by: §2.1.
-  (2019) Learning to generate textures on 3d meshes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 32–38. Cited by: §2.1.
-  (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §2.2, §4.2.
-  (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §4.2.
-  (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 91–99. Cited by: §4.3.6.
-  Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540. Cited by: §3.3.
-  (2015) Render for cnn: viewpoint estimation in images using cnns trained with rendered 3d model views. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2686–2694. Cited by: §2.1.
-  (2013) Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), Cited by: §2.2.
-  (2019) Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §1, §2.2, §4.3.3, §4.3.3, §4.3.6.
-  (2017) Ensemble adversarial training: attacks and defenses. arXiv preprint arXiv:1705.07204. Cited by: §1.
-  Robust adversarial objects against deep learning models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 954–962. Cited by: §2.3.
-  (2019) Physical adversarial textures that fool visual object tracking. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4822–4831. Cited by: §2.2.
-  (2019) Making an invisibility cloak: real world adversarial attacks on object detectors. arXiv preprint arXiv:1910.14667. Cited by: §4.3.3.
-  (2019) Meshadv: adversarial meshes for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6898–6907. Cited by: §1, §2.3.
-  (2019) Evading real-time person detectors by adversarial t-shirt. arXiv preprint arXiv:1910.11099. Cited by: §2.2.
-  (2019) Adversarial attacks beyond the image space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4302–4311. Cited by: §2.3.
-  (2019) Making convolutional networks shift-invariant again. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 7324–7334. Cited by: §1, §4.3.2.
-  Places: a 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §4.1.
-  Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232. Cited by: §2.1.