Autonomous vehicles are expected to reduce accidents significantly, and visual perception systems are at the heart of these vehicles. Despite the notable achievements of DNNs in visual perception, these networks can easily be fooled by adversarial examples that are imperceptible to the human eye but cause the network to fail. Adversarial examples are usually created by deliberately applying imperceptibly small perturbations to benign inputs so as to produce incorrect model outputs. Such a small perturbation is progressively amplified by a deep network and usually yields wrong predictions. Generally speaking, attacks can be white box or black box depending on the knowledge of the adversary (the agent who creates the adversarial example). White-box attacks presume full knowledge of the targeted model's design, parameters, and, in some cases, training data; gradients can thus be calculated efficiently using the back-propagation algorithm. In contrast, in black-box attacks, the adversary is unaware of the model parameters and has no access to the gradients. Furthermore, attacks can be targeted or un-targeted based on the intention of the adversary. Targeted attacks try to fool the model into a specific predicted output, whereas in un-targeted attacks the predicted output itself is irrelevant, and the main goal is to fool the model into any incorrect output.
The fast gradient sign method (FGSM)  is an example of a simple yet effective attack for generating adversarial instances. FGSM aims to fool the classification of an image by adding a small vector obtained by taking the sign of the gradient of the loss function. Moreover, it was shown that robust 3D adversarial objects can fool deep network classifiers in the physical world, despite a combination of viewpoint shifts, camera noise, and other natural transformations. Fooling surveillance cameras was introduced in , where adversarial patches are designed to attack person detection. The Dense Adversary Generation (DAG) algorithm  is an example of generating adversarial attacks for semantic segmentation and object detection tasks. It was discovered that the perturbations are transferable across different networks, even when those networks were trained differently, since they share an intrinsic structure that makes them susceptible to a common source of perturbations. In addition to camera sensors, potential vulnerabilities of LiDAR-based autonomous driving detection systems are explored in . Moreover, 3D-printed adversarial objects have enabled effective physical attacks on LiDAR-equipped vehicles, raising concerns about autonomous vehicles' safety. Robust Physical Perturbations (RP)  is another example that generates robust visual adversarial perturbations on road sign classification under different physical conditions.
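As a concrete sketch of the FGSM idea, the update below perturbs an input by epsilon times the sign of the loss gradient. For self-containment, the gradient is derived analytically for a toy linear softmax classifier rather than a deep network; the model, weights, and epsilon value are illustrative assumptions, not the setup of any cited work.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_attack(x, y, W, b, epsilon):
    """One-step FGSM: x_adv = x + epsilon * sign(dL/dx).

    For a linear softmax classifier (logits = W @ x + b) with
    cross-entropy loss L, the input gradient is analytically
    W.T @ (p - onehot(y)), so no autodiff framework is needed.
    """
    p = softmax(W @ x + b)
    grad_x = W.T @ (p - np.eye(W.shape[0])[y])
    return x + epsilon * np.sign(grad_x)

# Toy usage: attack a random 3-class classifier on a 4-dim input.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
x = rng.normal(size=4)
y = int(np.argmax(W @ x + b))      # take the current prediction as the label
x_adv = fgsm_attack(x, y, W, b, epsilon=0.25)
```

Because the cross-entropy loss of a linear model is convex in the input, a step along the gradient sign can only keep the loss equal or increase it, which is the intuition behind the single-step attack.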
On the other hand, adversarial robustness and defense methods for neural networks have been studied to improve these networks' resistance to different adversarial attacks. One defense method is adversarial training, where adversarial examples are used alongside the clean examples to train the model. Adversarial training can be seen as a simple form of data augmentation; despite being simple, it cannot cover all attack cases. In , it is demonstrated that JPEG compression can undo the small adversarial perturbations created by FGSM; however, this method is not effective for large perturbations. Xu et al.  proposed feature squeezing for detecting adversarial examples, in which the model is tested on both the original input and the input after being pre-processed by feature squeezers such as spatial smoothing. If the difference between the two outputs exceeds a certain threshold, the input is identified as an adversarial example. Defense-GAN  is another defense technique that employs generative adversarial networks (GANs): it seeks an output similar to a given image while ignoring adversarial perturbations. It is shown to be a feasible defense that relies on the GAN's expressiveness and generative power; however, training GANs remains a challenging task.
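The feature-squeezing detection described above can be sketched as follows, with median-filter spatial smoothing as the squeezer. The threshold value, the toy `model`, and the L1 comparison are illustrative assumptions rather than the exact configuration of the cited work.

```python
import numpy as np

def spatial_smooth(img, k=3):
    """Median-filter each pixel over a k x k neighborhood
    (a simple spatial-smoothing feature squeezer)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def is_adversarial(model, img, threshold):
    """Flag the input if the model's output changes too much after
    squeezing; `model` maps an image to a probability vector."""
    diff = np.abs(model(img) - model(spatial_smooth(img))).sum()  # L1 distance
    return diff > threshold

# Toy usage: a "model" whose output depends only on mean intensity.
model = lambda im: np.array([im.mean(), 1.0 - im.mean()])
clean = np.full((8, 8), 0.5)
spiky = np.zeros((8, 8)); spiky[4, 4] = 1.0   # single-pixel "perturbation"
```

The single-pixel spike is removed by the median filter, so the model's outputs before and after squeezing disagree and the input is flagged, while the clean image passes unchanged.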
Robust attacks and defenses remain challenging tasks and an active area of research. Most previous works on adversarial attacks focus on single-task scenarios. However, in real-life situations, multi-task learning is adopted to solve several tasks at once . Accordingly, multi-task networks are used to leverage the shared knowledge among tasks, leading to better performance, reduced storage, and faster inference . Moreover, it has been shown that when models are trained on multiple tasks at once, they become more robust to adversarial attacks on the individual tasks . However, defense remains an open challenge. In this work, as shown in Figure 1, white-box and black-box attacks are applied to a multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection, considering both targeted and un-targeted scenarios. In the experiments, while attacking one of the tasks, the attacking curve is plotted to inspect the performance across all tasks over the attacking steps. Additionally, a simple defense method is applied across all experiments. Finally, in addition to visual samples of the perturbations and of the performance before and after the attacks, detailed results and comparisons are presented and discussed.
II Multi-Task Adversarial Attacks
In this section, the target multi-task network is presented in terms of architecture, data, tasks, and training. Then the attacks are detailed for each task, including white-box and black-box attacks for both targeted and un-targeted cases.
II-A Baseline Multi-Task Model
We derive the baseline model from our recent work OmniDet , a six-task complete perception model for surround-view fisheye cameras. We focus on the four main perception tasks and skip the visual odometry and soiling detection tasks. We provide a short overview of the baseline model used and refer to  for more details. A high-level architecture of the model is shown in Figure 2. It comprises a shared ResNet18 encoder and one decoder per task. The motion decoder additionally uses the previous frame's encoder features in a Siamese encoder style. The 2D box detection task covers five important object classes, namely pedestrians, vehicles, riders, traffic signs, and traffic lights. The segmentation task has vehicle, pedestrian, cyclist, road, lane, and curb categories. The motion task is a binary segmentation of static versus moving, i.e., dynamic, pixels. The distance task provides scale-aware distance in 3D space, validated against occlusion-corrected LiDAR depth . The model is trained jointly on the public WoodScape  dataset, comprising 8k training samples, and evaluated on 2k samples.
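The shared-encoder, per-task-decoder layout is the structural property later exploited by the attacks: a perturbation that corrupts the shared features affects every decoder at once. A toy sketch of this layout, with linear placeholder layers standing in for the ResNet18 encoder and the four task decoders (all dimensions are illustrative assumptions):

```python
import numpy as np

class MultiTaskNet:
    """Toy stand-in for a shared-encoder, multi-decoder network."""

    TASKS = ("distance", "segmentation", "motion", "detection")

    def __init__(self, in_dim=16, feat_dim=8, out_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        # One shared "encoder" and one "decoder" head per task.
        self.encoder = rng.normal(size=(feat_dim, in_dim))
        self.decoders = {t: rng.normal(size=(out_dim, feat_dim))
                         for t in self.TASKS}

    def forward(self, x):
        feat = np.maximum(self.encoder @ x, 0.0)  # shared features (ReLU)
        return {t: d @ feat for t, d in self.decoders.items()}

net = MultiTaskNet()
outputs = net.forward(np.ones(16))
```

Since every head reads the same shared features, perturbing the input moves all four outputs simultaneously, which is why attacking one task is later observed to affect the others.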
In this paragraph, we briefly summarize the loss functions used for training. We construct a self-supervised monocular structure-from-motion (SfM) system for distance and pose estimation. The total loss consists of a photometric term, a smoothness term that enforces edge-aware smoothness within the distance map , a cross-sequence distance consistency loss , and feature-metric losses from . The final loss function for distance estimation is a weighted average of all these losses.
The segmentation task contains seven classes on WoodScape and employs the Lovasz-Softmax  loss. Motion segmentation takes two frames as input, predicts a binary moving/static mask, and employs the Lovasz-Softmax  and Focal  losses to manage class imbalance, instead of the cross-entropy loss. For object detection, we make use of the YOLOv3 loss and add an IoU loss based on the segmentation mask .
II-B Experimental Setup for Attacks
In this section, the details of the experiments are described. We conduct the experiments across the four visual perception tasks on a test set of 100 images randomly sampled from the original test set of the target network. We generate an adversarial example for each image in the test set while attacking one task at a time. For white-box attacks, given the available gradients, we perform an iterative optimization process that adds a perturbation to the input image in a direction that harms the original predictions. For the black-box attacks, we follow a protocol similar to the white-box one; however, the gradients are not given but estimated. We show that Evolution Strategies (ES), a generic black-box optimization algorithm, can be adopted to generate adversarial examples. Specifically, the ES algorithm is used to update the adversarial example over the attacking steps. At each step, we take the adversarial example vector, i.e., the image, and generate a population of 25 slightly different vectors by adding noise sampled from a normal distribution. Then, we evaluate each individual by feeding it through the network. Finally, the updated vector is the weighted sum of the population vectors, where each weight is proportional to the desired effect on the task's performance, and the process continues until convergence or a stopping criterion is met.
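One ES update of the kind described above could be sketched as follows. The population size of 25 matches the text; the Gaussian noise scale, learning rate, and normalized-fitness weighting are standard ES choices assumed here, since the experiments describe the weighting only as proportional to the desired task performance.

```python
import numpy as np

def es_attack_step(x, fitness, pop_size=25, sigma=0.1, lr=0.05, rng=None):
    """One Evolution Strategies update of the adversarial example `x`.

    `fitness(v)` scores a candidate by querying the target network
    (black box: no gradients); higher means a stronger attack. The
    update moves `x` along the population noise directions weighted
    by their normalized fitness, a standard ES gradient estimator.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=(pop_size,) + x.shape)
    scores = np.array([fitness(x + sigma * n) for n in noise])
    weights = (scores - scores.mean()) / (scores.std() + 1e-8)
    step = (weights[:, None] * noise.reshape(pop_size, -1)).mean(axis=0)
    return x + lr * step.reshape(x.shape)

# Toy usage: "attack" a quadratic fitness whose optimum sits at 3,
# standing in for the network-query-based attack objective.
fitness = lambda v: -np.sum((v - 3.0) ** 2)
x = np.zeros(2)
rng = np.random.default_rng(1)
for _ in range(300):
    x = es_attack_step(x, fitness, rng=rng)
```

Each iteration costs `pop_size` forward passes of the target network, which is why black-box attacks are more expensive and produce larger perturbations than gradient-based white-box ones.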
In the un-targeted case, the aim is to harm the predictions as much as possible without considering a particular target prediction, whereas in the targeted case, the aim is to harm the predictions in a specific way, towards a certain target. The attack loss depends on the task: mean squared error (MSE) is used for the distance task, while cross-entropy loss is used for the motion and semantic segmentation tasks. For the object detection task, only the object confidence is attacked, and hence cross-entropy loss is adopted there as well. For un-targeted attacks across all tasks, the goal is to maximize the distance between the original output of the network and the adversarial example's output. Accordingly, we add the perturbations to achieve this simple goal, where the output can be anything but the correct one. This is formulated as a gradient-based update of the image parameters, i.e., the pixels, driven by the attack loss and scaled by the learning rate. For the targeted attacks, the target output is defined, and the aim is to minimize the distance between the adversarial example's output and this target output.
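The update equations themselves did not survive extraction; under the definitions in the text (with $x$ the image pixels, $J$ the task's attack loss, $\alpha$ the learning rate, $f$ the network, $y$ the original output, and $y_t$ the target output), a plausible reconstruction of the standard iterative form is:

```latex
% Un-targeted: ascend the attack loss away from the original output y
x^{(i+1)} = x^{(i)} + \alpha \, \nabla_{x} J\big(f(x^{(i)}),\, y\big)

% Targeted: descend the attack loss towards the target output y_t
x^{(i+1)} = x^{(i)} - \alpha \, \nabla_{x} J\big(f(x^{(i)}),\, y_t\big)
```

The sign of the step is the only difference between the two cases: the un-targeted attack maximizes the loss with respect to the original predictions, while the targeted attack minimizes it with respect to the chosen target.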
For each perception task, the targets are as follows. The targeted distance attack tries to make pixels predicted as near be predicted as far. The targeted segmentation attack tries to convert the predicted vehicle pixels to void for a random 50% of the test set; for the other 50%, it tries to convert the predicted road pixels to void. The targeted motion attack tries to make the pixels of dynamic objects be predicted as static. Finally, similar to semantic segmentation, the targeted object detection attack randomly tries to either increase or decrease the predicted confidence. In addition to the attacks, we apply a simple blurring defense approach across all of them. Similar to , the intuition is to remove the adversarial perturbations and restore the original output as much as possible. The hyperparameters of the attacks are defined empirically on a very small validation set of three samples. All white-box attacks are conducted with the same learning rate. In the black-box attacks, the hyperparameters are chosen to balance the attack effect against the severity of the perturbations, with learning rates ranging from to , and for ES population generation.
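The blurring defense amounts to low-pass filtering the adversarial example before feeding it back to the network. A minimal separable Gaussian blur for a single-channel image is sketched below; the radius value and edge padding are illustrative placeholders, since the radius used in the experiments is not reproduced here.

```python
import numpy as np

def gaussian_blur(img, radius=1.0):
    """Blur a 2-D image with a separable Gaussian kernel, the simple
    defense applied to adversarial examples before inference."""
    half = max(1, int(3 * radius))
    xs = np.arange(-half, half + 1)
    kernel = np.exp(-(xs ** 2) / (2 * radius ** 2))
    kernel /= kernel.sum()                    # normalize to sum 1
    padded = np.pad(img, half, mode="edge")   # avoid darkened borders
    # Separable convolution: filter rows, then columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

# Toy usage: a single-pixel "perturbation" is strongly attenuated.
spiky = np.zeros((8, 8)); spiky[4, 4] = 1.0
blurred = gaussian_blur(spiky)
```

Such smoothing removes high-frequency perturbations but, as the results later show, also degrades tasks that rely on fine detail, such as detection and distance estimation.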
In this section, we present and discuss the details of the results. As expected, white-box attacks, where the gradients are accessible, were easier to find than black-box ones. White-box attacks can generate adversarial examples with minimal and localized perturbations across all tasks. On the other hand, ES black-box attacks produce more significant perturbations and require more hyperparameters to optimize.
The attacking curves for white-box and black-box attacks are shown in Figures 3 and 4, respectively. Each plot shows each perception task's performance over the 50 attacking steps, where the first step represents the actual performance of the target network before any attack is applied. Each curve shows the mean performance of a task over the test set, and the shaded area indicates the standard deviation around the mean. Generally, the motion and detection tasks exhibit performance with a large standard deviation, indicating the diversity of the test set, which contains both easy and hard examples. Across all curves, it is clear that performance decreases along the attacking steps.
Moreover, attacking one task by generating an adversarial example affects the other tasks' performance in different ways along the attacking curve. These curves enable the adversary to decide at which step the adversarial example is generated, according to the required effect on the target task and on the other tasks. As shown in Figures 3 and 4, in most cases, attacking other tasks has only a marginal negative effect on the motion task. The main reason is that the motion task takes two frames as input, and only one of them is attacked. Moreover, attacking the distance task is shown to affect both the segmentation and detection tasks, and attacking segmentation or detection affects the other tasks as well. As mentioned, the attack effect depends on the parameters selected for the attack. Moreover, targeted attacks optimize the adversarial example to produce the required target prediction, whereas un-targeted attacks keep applying perturbations to produce predictions as different as possible from the original ones.
To understand the effect of applying a defense method to the attacks, Gaussian blurring with radius is applied to the final adversarial examples, which are then fed into the target network, and the performance is reported. As shown in Table I, this simple defense method has a positive effect on both the segmentation and motion tasks in most cases, compared to the depth and detection tasks. Furthermore, the effect of blurring on the network's performance is inspected without applying any attacks, as shown in Table II; the detection and distance tasks are affected the most. This explains why this defense method is more effective for the segmentation and motion tasks. Figure 5 shows different visual samples of the attacks organized into four groups. Each group has three images: the original output, the adversarial perturbations magnified 10x, and the impacted results overlaid on the adversarial examples. As expected, the perturbations for the white-box attacks are much smaller and more localized than in the black-box case. Moreover, for the un-targeted attacks, the performance is harmed without a specific goal, leading to arbitrary predictions. On the other hand, for the targeted attacks, vehicles or roads are removed for the semantic segmentation task, false objects are added or true objects removed for the detection task, near pixels are converted to far for the distance task, and dynamic objects are converted to static for the motion task.
Task | Metric | Original | Blurred | Effect (%)
In this work, various adversarial attacks were applied to a multi-task target network with a shared encoder and different decoders for autonomous driving visual perception. For each perception task, white-box and black-box attacks were conducted for targeted and un-targeted scenarios. Moreover, attacking curves show the interactions between attacks on the different tasks: attacking one task has an effect not only on that task but also on the others. Furthermore, applying blurring to the adversarial examples as a defense method was found to have a positive effect on the segmentation and motion tasks, in contrast to the object detection and distance tasks, for the considered target network. In the future, we plan to conduct physical attacks, try other sensors such as LiDAR, and attack multiple tasks jointly. Attacks and defenses remain challenging tasks and an active area of research, especially for autonomous driving applications with multi-task deep networks.
-  WHO, “Global status report on road safety 2018,” World Health Organization, https://apps.who.int/iris/bitstream/handle/10665/276462/9789241565684-eng.pdf, [Accessed October-2020].
-  V. R. Kumar, S. Yogamani, H. Rashed, G. Sitsu, C. Witt, I. Leang, S. Milz, and P. Mäder, “Omnidet: Surround view cameras based multi-task visual perception network for autonomous driving,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2830–2837, 2021.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in 3rd International Conference on Learning Representations, ICLR 2015, Proceedings, 2015.
-  A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” in International Conference on Machine Learning. PMLR, 2018, pp. 284–293.
-  S. Thys, W. Van Ranst, and T. Goedemé, “Fooling automated surveillance cameras: adversarial patches to attack person detection,” in , 2019, pp. 49–55.
-  C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and object detection,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1369–1378.
-  Y. Cao, C. Xiao, D. Yang, J. Fang, R. Yang, M. Liu, and B. Li, “Adversarial objects against lidar-based autonomous driving systems,” arXiv preprint arXiv:1907.05418, 2019.
-  K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
-  G. K. Dziugaite, Z. Ghahramani, and D. M. Roy, “A study of the effect of jpg compression on adversarial images,” arXiv preprint arXiv:1608.00853, 2016.
-  W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” in 25th Annual Network and Distributed System Security Symposium. The Internet Society, 2018.
-  P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models,” in International Conference on Learning Representations, 2018.
-  I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” arXiv preprint arXiv:1406.2661, 2014.
-  S. Chennupati, G. Sistu, S. Yogamani, and S. A Rawashdeh, “Multinet++: Multi-stream feature aggregation and geometric loss strategy for multi-task learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
-  G. Sistu, I. Leang, S. Chennupati, S. Yogamani, C. Hughes, S. Milz, and S. Rawashdeh, “Neurall: Towards a unified visual perception model for automated driving,” in 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, 2019, pp. 796–803.
-  C. Mao, A. Gupta, V. Nitin, B. Ray, S. Song, J. Yang, and C. Vondrick, “Multitask Learning Strengthens Adversarial Robustness,” in ECCV 2020. Springer International Publishing, 2020, pp. 158–174.
-  V. R. Kumar, S. Milz, C. Witt, M. Simon, K. Amende, J. Petzold, S. Yogamani, and T. Pech, “Monocular fisheye camera depth estimation using sparse lidar supervision,” in 21st International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2018.
-  S. Yogamani, C. Hughes, J. Horgan, G. Sistu, P. Varley, D. O’Dea, et al., “Woodscape: A multi-task, multi-camera fisheye dataset for autonomous driving,” in IEEE International Conference on Computer Vision, 2019, pp. 9308–9318.
-  C. Shu, K. Yu, Z. Duan, and K. Yang, “Feature-metric loss for self-supervised learning of depth and egomotion,” in European Conference on Computer Vision. Springer, 2020, pp. 572–588.
-  M. Berman, A. R. Triki, and M. B. Blaschko, “The lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
-  T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.
-  H. Rashed, E. Mohamed, G. Sistu, V. R. Kumar, C. Eising, A. El-Sallab, and S. Yogamani, “Generalized object detection on fisheye cameras for autonomous driving: Dataset, representations and baseline,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 2272–2280.