On Physical Adversarial Patches for Object Detection

06/20/2019 ∙ by Mark Lee, et al.

In this paper, we demonstrate a physical adversarial patch attack against object detectors, notably the YOLOv3 detector. Unlike previous work on physical object detection attacks, which required the patch to overlap with the objects it is meant to misclassify or hide from detection, we show that a properly designed patch can suppress virtually all the detected objects in the image. That is, we can place the patch anywhere in the image, causing all existing objects in the image to be missed entirely by the detector, even those far away from the patch itself. This in turn opens up new lines of physical attacks against object detection systems, which require no modification of the objects in a scene. A demo of the system can be found at https://youtu.be/WXnQjbZ1e7Y.


1 Introduction

This paper considers the creation of adversarial patches against object detection systems. Broadly, adversarial patch attacks refer to a class of attacks on machine learning systems that add some “patch” or perturbation to an image, causing the system to mislabel it. Unlike traditional adversarial examples, they are not imperceptible, but modify the image in a way that should not change the underlying output according to human intuition. Past work has demonstrated the feasibility of these attacks (including in physical settings) in the context of classification (Brown et al., 2017; Kurakin et al., 2018) and object detection (Eykholt et al., 2018; Xie et al., 2017; Thys et al., 2019). However, in the object detection setting, these attacks have required the attacker to manipulate the attacked object itself, i.e., by placing the patch over the object.

We present an alternative (and we believe, stronger) adversarial patch attack against object detection. Specifically, we construct a physical adversarial patch that, when placed in an image, suppresses all objects previously detected in the image, even those that are relatively far away from the patch. The techniques we use to design the patch are relatively straightforward applications of existing techniques: projected gradient descent approaches (Kurakin et al., 2017; Madry et al., 2017) combined with expectation over transformations (Athalye et al., 2018), specifically optimizing a loss that we believe to be well-suited to object detection systems.

We demonstrate our attack on the YOLOv3 architecture, robustly suppressing detections over a wide range of positions for the object. We illustrate the power of the method both on the COCO dataset, where we evaluate the mAP of the system after the attack, and with a physical attack against YOLOv3 running in real time on webcam input. The possibility of such attacks opens up new threat vectors for many machine learning systems. For example, it suggests it would be possible to suppress the detection of all objects relevant to an autonomous car’s vision system (e.g., pedestrians, other cars, street signs), not by requiring us to manipulate each object, but just by placing a well-crafted sign on the sidewalk.

2 Related Work

The field of adversarial attacks against machine learning systems is broad enough at this point that we focus here only on the work most closely related to our approach.

2.1 Adversarial Patch for Classification

Adversarial patch attacks were first introduced by Brown et al. (2017) for image classifiers. The goal is to produce localized, robust, and universal perturbations that are applied to an image by masking instead of adding pixels. The patch found by Brown et al. (2017) is able to fool multiple ImageNet models into predicting “toaster” whenever the patch is in view, even in physical space as a printed sticker. However, because classification systems only classify each image as a single class, to some extent this attack relies on the fact that it can simply place a high-confidence “deep net toaster” into an image (even if it does not look like a toaster to humans) and override other classes in the image.

2.2 Adversarial Patches for Object Detection

Because of the limitations of the classification setting, several other works have investigated the use of adversarial patches in the object detection setting (Sharif et al., 2016; Eykholt et al., 2018; Sharif et al., 2018; Thys et al., 2019; Chen et al., 2018; Bose & Aarabi, 2018). However, the few works in this domain that deal with physical adversarial examples virtually all focus on the creation of an object that overlaps the object of interest, either to change its class or to suppress detection. In contrast, our approach looks specifically at adversarial patches that do not overlap the objects of interest in the scene.

The work that bears the most similarity to our own is the DPatch method (Liu et al., 2018), which explicitly creates patches that do not overlap with the objects of interest. However, the DPatch method was only tested on digital images, and contains a substantial flaw that makes it unsuitable for real experiments: the patches produced in the DPatch work are never clipped to the allowable image range (i.e., clipping pixel values to the valid range) and thus do not correspond to actual perturbed images. Furthermore, it is not trivial to use the DPatch loss to obtain valid adversarial images: we compare this approach to our own and show that we are able to generate substantially stronger attacks.

2.3 YOLO

YOLO is a “one-shot” object detector with state-of-the-art performance on certain metrics while running considerably faster than other models (Redmon & Farhadi, 2018). It divides the input image into a grid, with each cell predicting a fixed number of bounding boxes and their confidence scores, and each box predicting class probabilities conditioned on there being an object in the box. We specifically use the YOLOv3 model as the object detection system for our demonstrations, though other object detectors would be possible as well.
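To make the grid structure concrete, the following is a minimal sketch (not the actual YOLOv3 implementation; the grid size, number of boxes per cell, and tensor layout are assumptions for illustration) of how a one-shot detector's raw output decomposes into per-cell box coordinates, objectness scores, and class scores:

```python
import torch

# Hypothetical shapes for one YOLO-style output scale: an S x S grid with B boxes
# per cell and C classes (the exact numbers here are illustrative, not YOLOv3's).
S, B, C = 13, 3, 80
raw = torch.randn(1, B * (5 + C), S, S)        # raw network output for one image

# Reshape to (batch, B, 5 + C, S, S): 4 box coordinates, 1 objectness score, C class scores.
pred = raw.view(1, B, 5 + C, S, S)
box_xywh   = pred[:, :, 0:4]                   # bounding-box offsets per cell/anchor
objectness = torch.sigmoid(pred[:, :, 4])      # confidence that the box contains an object
class_prob = torch.sigmoid(pred[:, :, 5:])     # class scores, conditioned on an object being present

# A detection's final confidence is objectness times its class score; non-max
# suppression is then applied over all S*S*B candidate boxes.
scores = objectness.unsqueeze(2) * class_prob
print(scores.shape)                            # torch.Size([1, 3, 80, 13, 13])
```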

3 Methodology

3.1 Notation

Let $h_\theta$ denote a hypothesis function with parameters $\theta$ defining the model (layers, weights, etc.); let $x$ denote some input to $h_\theta$ with a corresponding target $y$; and let $\ell(h_\theta(x), y)$ denote a loss function mapping the predictions made by the hypothesis $h_\theta$ on input $x$ and the target $y$ to some real-valued number.

3.2 Attack Formulation

Here we present our methodology for creating adversarial patches for object detection. Note that the methods here are based upon existing work, specifically untargeted PGD with expectation over transformation, but the results suggest that these attacks are substantially stronger than previously appreciated. We consider the following mathematical formulation of finding an adversarial patch $\delta$:

$$\max_{\delta}\; \mathbb{E}_{(x, y) \sim \mathcal{D},\; t \sim \mathcal{T}} \Big[ \ell\big(h_\theta(A(\delta, x, t)),\, y\big) \Big]$$

where $\mathcal{D}$ is a distribution over samples, $\mathcal{T}$ is a distribution over patch transformations (to be discussed shortly), and $A(\delta, x, t)$ is a “patch application function” that transforms the patch $\delta$ with $t$ and applies the result to the image $x$ by masking the appropriate pixels. Note that the maximization over $\delta$ is done outside the expectation, i.e., we are considering a class of “universal” adversarial perturbations.
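As a rough illustration of the patch application function $A(\delta, x, t)$, the following sketch (the tensor shapes, sizes, and helper name are assumptions, and only a translation is modeled as the transformation $t$) pastes a patch over an image by masking the covered pixels:

```python
import torch

def apply_patch(delta, x, top, left):
    """Paste patch `delta` onto image `x` by masking (overwriting) the covered pixels.

    delta: (3, ph, pw) patch with values in [0, 1]; x: (3, H, W) image in [0, 1].
    `top`/`left` stand in for the sampled transformation t (here just a translation;
    rotation/scaling/brightness would be applied to `delta` before this step).
    """
    x_adv = x.clone()
    _, ph, pw = delta.shape
    x_adv[:, top:top + ph, left:left + pw] = delta
    return x_adv

# Example: place a random 100x100 patch at a random location of a 416x416 image
# (both sizes are illustrative, not the paper's settings).
x = torch.rand(3, 416, 416)
delta = torch.rand(3, 100, 100)
top, left = torch.randint(0, 416 - 100, (2,)).tolist()
x_adv = apply_patch(delta, x, top, left)
```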

The DPatch method attempts to solve a similar objective by minimizing the loss for a carefully crafted target $\tilde{y}$, as described in (Liu et al., 2018), performing the update

$$\delta := \delta - \alpha\, \nabla_\delta\, \ell\big(h_\theta(A(\delta, x, t)),\, \tilde{y}\big) \qquad (1)$$

While this update works fairly well for fitting patches in the digital space, our experiments show that patches found in this way are only weakly adversarial when a box constraint is applied, requiring many update iterations and consistently plateauing at a relatively high mAP (see Figure 3). The reasons we believe DPatch fails are elaborated in subsection 3.7.

Instead, we adopt a simpler approach: we take the optimization problem at face value and maximize the loss for the original targets directly, for samples and transformations drawn from $\mathcal{D}$ and $\mathcal{T}$ respectively. This is essentially the standard untargeted PGD approach (Madry et al., 2017), originally introduced as the Basic Iterative Method (Kurakin et al., 2017), with expectation over transformation (Athalye et al., 2018) applied to the patch itself. The update does not push the patch towards any particular target label or bounding box; this contrasts with the DPatch update in Equation 1, which requires a target label $\tilde{y}$ in both the untargeted and targeted cases. This is generally a non-issue, as our goal is to suppress detections. Also following past work, we consider a normalized steepest ascent method under the $\ell_\infty$ norm, which results in the update

$$\delta := \mathrm{clip}_{[0,1]}\Big( \delta + \alpha \cdot \mathrm{sign}\big( \nabla_\delta\, \ell\big(h_\theta(A(\delta, x, t)),\, y\big) \big) \Big) \qquad (2)$$

for a sample $(x, y) \sim \mathcal{D}$ and transformation $t \sim \mathcal{T}$.
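A hedged sketch of the resulting update loop is shown below; the `detector`, `detection_loss`, and `apply_patch` arguments are stand-ins (e.g., the helper sketched earlier), not the paper's code, and the step size is illustrative. The sign step followed by clamping corresponds to the normalized ascent with the box constraint in Equation 2:

```python
import torch

def patch_attack_step(delta, detector, detection_loss, apply_patch, batch, alpha=1/255):
    """One untargeted PGD step on the patch, approximating the expectation over
    images and transformations with the sampled `batch` of (image, target, placement).

    `detector`, `detection_loss`, and `apply_patch` are stand-ins for the model,
    its training-style loss, and a patch application function A(delta, x, t).
    """
    delta = delta.detach().requires_grad_(True)
    total = 0.0
    for x, y, t in batch:
        x_adv = apply_patch(delta, x, *t)                   # A(delta, x, t)
        total = total + detection_loss(detector(x_adv), y)  # maximize the original loss
    total.backward()
    with torch.no_grad():
        stepped = delta + alpha * delta.grad.sign()         # l_inf normalized ascent step
        stepped = stepped.clamp(0.0, 1.0)                   # keep the patch a valid image
    return stepped.detach()
```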

3.3 Experimental Setup

We evaluate on YOLOv3 pretrained for COCO 2014 (Lin et al., 2014). The implementation of YOLOv3 achieves around 55.4% mAP-50 (mAP at the 0.5 IOU metric) using an object-confidence threshold of 0.001 for non-max suppression. Because mAP is considerably influenced by this threshold, we also evaluate at the 0.1 confidence threshold used during validation, as well as the 0.5 confidence threshold used by default for real-time detection. The implementation achieves 50.3% mAP-50 at the 0.1 confidence threshold and 40.9% at 0.5.

We define a “step” as a fixed number of update iterations. The following experiments were run for a fixed budget of steps, with the initial learning rate and momentum chosen heuristically. The learning rate was decayed periodically; at each decay point we also run one validation step for the mAP-50 plots. Because the loss functions are highly non-convex, we take the best of several random restarts to mitigate the effects of local optima. Where applicable, patch transformations involved randomly rotating the patch about each axis, randomly scaling and translating it, and randomly adjusting its brightness (converting to HSV and scaling the V channel).
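As one example of these transformations, the brightness adjustment described above might be implemented roughly as follows (a sketch using matplotlib's color-space helpers; the scaling range and patch size are illustrative, not the paper's settings):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def random_brightness(patch_rgb, low=0.7, high=1.3, rng=None):
    """Randomly adjust patch brightness by converting to HSV and scaling the V channel.

    patch_rgb: (H, W, 3) array with values in [0, 1]. The (low, high) range is illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    hsv = rgb_to_hsv(patch_rgb)
    hsv[..., 2] = np.clip(hsv[..., 2] * rng.uniform(low, high), 0.0, 1.0)
    return hsv_to_rgb(hsv)

# Example: brighten or darken a random 100x100 patch.
patch = np.random.rand(100, 100, 3)
patch_aug = random_brightness(patch)
```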

3.4 Unclipped Attack

For the unclipped attack, our method performs the update in Equation 2, except without clipping. The purpose is to benchmark against DPatch which uses Equation 1. For both methods, scales the patch to a fixed pixels and positions at top-left of the image (as in (Liu et al., 2018)).

Figure 1: Our method vs. DPatch for the unclipped case. (a) Training loss (ours); (b) mAP-50 (ours); (c) training loss (DPatch); (d) mAP-50 (DPatch).

Figure 1 shows that our method drives the mAP to near zero within relatively few steps, whereas DPatch converges to a noticeably higher mAP even after far more steps. From our experiments, lowering the learning rate or decaying it more aggressively does not help to decrease the DPatch mAP, perhaps indicating a limitation in the loss function itself.

Method     Conf.    mAP (%)    Smallest AP (%)         Largest AP (%)
Baseline   0.001    55.4       10.23 (Hair Drier)      87.61 (Giraffe)
Baseline   0.1      50.3       3.03 (Toaster)          82.13 (Giraffe)
Baseline   0.5      40.9       0 (Toaster)             79.53 (Giraffe)
DPatch     0.001    9.21       0.17 (Traffic Light)    26.07 (Clock)
DPatch     0.1      7.23       0 (Toaster)             19.83 (Mouse)
DPatch     0.5      4.88       0 (Toaster)             15.83 (Microwave)
Ours       0.001    0.25       0 (Aeroplane)           2.2 (Sports Ball)
Ours       0.1      0.1        0 (Aeroplane)           1.2 (Knife)
Ours       0.5      0.05       0 (Bicycle)             0.76 (Aeroplane)

Table 1: Summary for the unclipped case. “Baseline” is no patch.

Table 1 shows the overall mAP as well as the smallest and largest per-class APs for various confidence thresholds. These values were obtained by evaluating on the entire validation set instead of just one “step”. Our DPatch results are broadly consistent with the mAP reported in (Liu et al., 2018) for the untargeted attack on YOLOv2 and Pascal VOC 2007 – deviations are expected due to differences in implementation, model architecture and dataset.

Figure 2: ROI plots for the unclipped case. Top row shows original.

To verify that our patch attacks at the bounding box proposal level, we plot the pre-non-max suppression bounding box confidence scores for a random image, shown in Figure 2.

3.5 Clipped Attack

For the clipped attack, our method performs the update from Equation 2. We compare with DPatch, which uses Equation 1 but modified to clip the patch to the valid pixel range.

Figure 3 shows the loss and mAP plots for a clipped patch with all transforms described in subsection 3.3: the patch was randomly rotated about each axis, scaled over a range of sizes, and adjusted in brightness by a random factor. Translations were sampled post-scaling such that the patch could appear at any location in the image, and the scale was adjusted to ensure the patch is not “cut off” after rotation.

Figure 3: Our method vs. DPatch for the clipped case. (a) Training loss (ours); (b) mAP-50 (ours); (c) training loss (DPatch); (d) mAP-50 (DPatch).

The DPatch method quickly converges to a patch that is weakly adversarial, whereas our method achieves single-digit mAP values. As in the unclipped case, random restarts and hyperparameter tuning do not appear to help DPatch improve significantly.

Table 2 shows the AP breakdown as evaluated on the entire validation set, this time applying the patch at random locations in the image.

Our patch achieves as low as 7.2% mAP, almost comparable to an unclipped DPatch. The clipped DPatch is only marginally better than a random image. Our patch also uniquely captures semantically meaningful patterns (zebra stripes) that are most salient to the detector. As in the unclipped case, Figure 5 shows that our patch successfully attracts most of the region proposals.

Method     Conf.    mAP (%)    Smallest AP (%)         Largest AP (%)
Baseline   0.001    55.4       10.23 (Hair Drier)      87.61 (Giraffe)
Baseline   0.1      50.3       3.03 (Toaster)          82.13 (Giraffe)
Baseline   0.5      40.9       0 (Toaster)             79.53 (Giraffe)
DPatch     0.001    39.6       9.09 (Hair Drier)       69.03 (Train)
DPatch     0.1      34.7       0 (Toaster)             66.3 (Bus)
DPatch     0.5      26.8       0 (Toaster)             58.1 (Bus)
Ours       0.001    13.8       1.27 (Zebra)            34.69 (Car)
Ours       0.1      10.4       0 (Toaster)             30.18 (Car)
Ours       0.5      7.2        0 (Hot Dog)             23.06 (Person)

Table 2: Summary for the clipped case. “Baseline” is no patch.
Figure 4: Comparison of patches: (a) ours; (b) DPatch.
Figure 5: ROI plots for the clipped case. Top row shows original.

3.6 Physical Attack

Figure 6 shows a printed version of our patch attacking YOLOv3 running in real time on a standard webcam. The patch was printed on regular printer paper and recorded under natural lighting. While the patch is somewhat invariant to location, it generally has weaker influence on objects that are farther away, as seen in Figure 7: when positioned at the sides, the patch needs to be enlarged to successfully disable distant detections, and it fails to disable sufficiently confident ones. However, the patch is able to disable detections of moving objects, so long as the patch itself is stable, as shown in Figure 8. This shows that our patch works on a data distribution different from the training distribution, and is generally adversarial over different lighting conditions, positions and orientations.

Figure 6: Physical attack using our patch.
Figure 7: Location invariance of our patch in physical space.
Figure 8: Moving object suppression in physical space.

3.7 Discussion

We suspect DPatch struggles because it centralizes all ground-truth boxes around the patch: the patch ultimately resides in a single grid cell, so the loss is dominated by the proposal “responsible” for that cell. As long as the patch is recognized, the model incurs little penalty for predicting all the other objects, perhaps suffering some penalty on the objectness scores but not on bounding boxes or class labels. The loss can therefore be reduced even if the model’s behavior does not change much; in practice, the patch is often detected with high confidence without suppressing other detections. In our method, every grid cell overlapped by a ground-truth box contributes to the loss, which increases the most when the model fails to predict any ground-truth box.
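As a purely illustrative toy example (not the paper's loss code), the following counts how many grid cells are overlapped by a set of ground-truth boxes versus by a single small patch-sized box; this is the difference in how many terms can contribute to the two losses:

```python
def cells_overlapped(boxes, img_size=416, grid=13):
    """Return the set of (row, col) grid cells overlapped by boxes given as (x1, y1, x2, y2).

    The image size and grid size are illustrative values, not the paper's settings.
    """
    cell = img_size / grid
    cells = set()
    for x1, y1, x2, y2 in boxes:
        for gy in range(int(y1 // cell), int(min(y2, img_size - 1) // cell) + 1):
            for gx in range(int(x1 // cell), int(min(x2, img_size - 1) // cell) + 1):
                cells.add((gy, gx))
    return cells

# With the original targets, every one of these cells contributes to the loss ...
print(len(cells_overlapped([(20, 30, 200, 220), (250, 260, 400, 410)])))  # many cells
# ... whereas a DPatch-style relabeled target collapses to the cell holding the patch.
print(len(cells_overlapped([(0, 0, 31, 31)])))                            # a single cell
```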

4 Conclusion

We introduce a patch attack causing YOLOv3 to drop from 55.4% to single-digit mAP. We show that this method outperforms the existing DPatch method in the untargeted case, which generally has implications as significant as those of a targeted attack. Finally, we demonstrate that our attack extends to the physical space by printing our patch and fooling YOLOv3 running in real time on a webcam feed. To our knowledge, this is the first demonstration of a patch attack on object detectors that successfully suppresses detections without requiring the patch to overlap the target objects.

References