I Introduction & Related Work
Autonomous vehicles (AVs) have received much attention in recent years. One pillar of AV perception systems is the RGB data captured by the camera. Through the RGB data, the system can understand its surrounding environment, including the locations of vehicles, pedestrians, and other crucial information. Deep convolutional neural networks (CNNs) are the widely accepted, cutting-edge computer vision algorithms for processing this RGB data, e.g., for detecting objects and segmenting urban scenes. Despite the tremendous success of deep CNN models, adversarial examples show that reliability and robustness issues remain (see Fig. 1). Adversarial images are visually indistinguishable from clean ones for a human viewer, but state-of-the-art classifiers make wrong predictions on them with high confidence. Since adversarial attacks were first reported, numerous studies have appeared on this topic [8, 16, 25, 29, 20, 2, 21, 19, 1], mainly considering pixel-level perturbations. Such adversarial attacks on CNNs raise the concern that the perception system of an autonomous vehicle can be maliciously hacked.
It has also been shown that simple affine transformations (e.g., rotations) can cause deep CNNs to misclassify images. The images captured by the perception system could experience similar affine distortions during normal driving scenarios, for example when vehicles are passing water puddles (see Fig. 2) or driving on rural roads. Both adversarial attacks and affine distortions need to be addressed before deep CNN vision systems can be integrated into safety-related applications such as autonomous vehicles at large scale.
The Generative Adversarial Network (GAN) has been widely studied and utilized since its invention. It is a generative model that captures a high-dimensional data distribution through an adversarial process. It can generate images that simulate the training images, such as handwritten digits, animals, vehicles, etc. The Deep Convolutional GAN introduces a convolution mechanism into the GAN structure by inserting deconvolution layers in the generator network. The Bi-directional GAN further provides a pathway to convert data from image space back to latent space with an additional encoder network. InfoGAN utilizes a disentangled representation that separates features and noise in the latent space; the separated features can represent categorical and continuous attributes of the training images. Other work discusses the issue of inductive bias in disentangled representations, and introduces symmetry groups to define disentanglement behaviour. Defense-GAN uses a GAN as a defence method against adversarial attacks.
Several studies address the invariance of deep CNNs to affine transformations. One approach achieves anti-distortion classification by inserting a Spatial Transformer layer into the given network; however, the affine parameters are not presented in a disentangled manner, which makes the approach less interpretable. The Transforming Autoencoder uses an auto-encoder to model 2D affine transformations applied to images; the trained generative model learns to generate transformed images in a disentangled way, but it does not tackle improving classification accuracy. Other studies analyze identity-preserving transformations in terms of symmetry groups, or design filter banks to make the classifier transformation invariant.
We introduce the Affine Disentangled GAN (ADIS-GAN), which is robust against both affine transformations and adversarial attacks. Although it is an unsupervised algorithm, it achieves classification accuracies comparable to those of state-of-the-art supervised learning algorithms.
We show that affine-transformation-augmented training and adversarial-augmented training are orthogonal: each can only defend against the type of attack it has been trained on.
The Affine Disentangled GAN is more interpretable, providing information that helps to understand potential misclassifications. On the MNIST dataset, it achieves over 98 percent classification accuracy within ±30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks.
II-A Generative Adversarial Network
GAN is a generative model that captures a high-dimensional data distribution through an adversarial process: a minimax game between the generator and the discriminator. The generator tries to produce images that are similar to real ones, while the discriminator judges whether the images are generated or real. During the training process, the generator will create images that do not belong to the original dataset. Those images may prevent the model from overfitting, and the model is more likely to learn a smoother data distribution that covers the adversarial samples. The vanilla GAN formulation is:
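In the standard notation of Goodfellow et al. (generator G, discriminator D, data distribution p_data, latent prior p_z), this minimax objective can be sketched as:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```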
A standard distribution in the latent space is transferred to data space through the generator G. The discriminator D judges whether samples come from the training dataset or the generated dataset.
The Bi-directional GAN adds an encoder to the vanilla GAN, which makes both image-to-latent and latent-to-image transformations possible. The encoder and generator together can be treated as a filter in which the reconstructed images keep only meaningful information and discard noise such as adversarial perturbations. The Bi-directional GAN formulation is:
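Following the notation of Donahue et al. (encoder E added alongside generator G, with the discriminator D now scoring joint image-latent pairs), the objective can be sketched as:

```latex
\min_{G, E} \max_D V(D, E, G) =
  \mathbb{E}_{x \sim p_X}\!\left[\log D(x, E(x))\right]
  + \mathbb{E}_{z \sim p_Z}\!\left[\log\left(1 - D(G(z), z)\right)\right]
```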
InfoGAN can assign the latent vectors semantic meanings, such as categorical and continuous information (e.g., the skew of an image), by maximizing the mutual information between the generated latent space and the reconstructed latent space. The InfoGAN formulation is:
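In the notation of Chen et al., with latent code c, noise z, mutual information I, and regularization weight λ, the objective can be sketched as follows (in practice, I(c; G(z, c)) is replaced by a variational lower bound computed with an auxiliary distribution Q(c | x)):

```latex
\min_G \max_D V_I(D, G) = V(D, G) - \lambda\, I\big(c;\, G(z, c)\big)
```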
The Bi-directional InfoGAN uses the encoder, instead of an auxiliary network, to reconstruct the latent vectors. The Bi-directional InfoGAN formulation is:
II-B Affine Transformation Matrix
Inspired by this line of work, we utilize the affine matrix as a regularizer in our model. The conventional affine matrix is a 2 by 3 matrix, defined as:
The two entries in the last column represent the horizontal and vertical translation parameters, respectively. These two parameters can be removed from the affine matrix without affecting the other affine properties. After removing the translation parameters, the affine matrix becomes a 2 by 2 matrix, which can be decomposed into rotation, skew, and zoom matrices, respectively (see Appendix for an alternative formulation):
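As a sketch (the symbols t_x, t_y, θ, k, z_x, z_y are illustrative, and the skew is assumed to be a horizontal shear), the 2 by 3 affine matrix and the decomposition of its 2 by 2 part read:

```latex
A = \begin{pmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \end{pmatrix},
\qquad
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
= \underbrace{\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}}_{\text{rotation}}
  \underbrace{\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}}_{\text{skew}}
  \underbrace{\begin{pmatrix} z_x & 0 \\ 0 & z_y \end{pmatrix}}_{\text{zoom}}
```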
III System Description
III-A Affine Regularizer
Since the images captured by the camera usually will not be skewed during normal driving scenarios, we focus only on the rotation and zoom attributes in this paper. If we discard the skew matrix, the affine matrix can be simplified as follows:
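With the skew matrix dropped, the remaining rotation-zoom product takes the form (θ denoting the rotation angle and z_x, z_y the zoom factors; symbols are illustrative):

```latex
A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    \begin{pmatrix} z_x & 0 \\ 0 & z_y \end{pmatrix}
  = \begin{pmatrix} z_x\cos\theta & -z_y\sin\theta \\ z_x\sin\theta & z_y\cos\theta \end{pmatrix}
```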
Assume each image is composed of a base image transformed by an affine matrix. The input image from the training dataset can then be expressed as:
The scaled image, obtained by transforming the input image with a predefined affine matrix, can be expressed as:
Under this assumption, the scaled image can also be expressed as:
Through simple matrix manipulation we can obtain the affine regularizer:
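The derivation above can be sketched with illustrative symbols (A_T for the input image's affine matrix, x_b for the base image, A_P for the predefined transform, and A_S for the scaled image's overall matrix; these names are our assumption, not the paper's notation):

```latex
x = A_T\, x_b, \qquad x' = A_P\, x = A_P A_T\, x_b, \qquad x' = A_S\, x_b
\;\Longrightarrow\; A_S = A_P A_T \;\Longleftrightarrow\; A_P = A_S A_T^{-1}
```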
III-B Model Architecture
The Affine Disentangled GAN (ADIS-GAN) maximizes the mutual information between the generated affine matrix and the reconstructed affine matrix under the affine-regularizer assumption (see Fig. 4). Three continuous latent vectors are assigned to the rotation, horizontal zoom, and vertical zoom attributes, respectively. These continuous latent vectors are sampled from a random uniform distribution and can be converted to affine matrices through (8) – (11). Similarly, training images and transformed images are encoded into continuous latent vectors through the encoder, which can be further converted to affine matrices through (8) – (11). Finally, the mutual information between the generated and reconstructed affine matrices is maximized via (15). The updated loss function with the affine regularizer is:
(Network architecture summary: 28x28 input image; real/fake discriminator output; 3 continuous latent outputs via fully connected layers.)
IV Experimental Results
As a proof-of-concept experiment, we test our algorithm on the MNIST dataset. In Section A, we consider experiments with rotated images. In Section B, we explore adversarial attacks. In Section C, we elaborate on the interpretability of the proposed algorithm.
IV-A Classification Accuracy on Rotated Images
To test the robustness of the model against rotated images, we purposely rotate the input images from -30 to +30 degrees. Six models are tested: (i) a model trained on the original dataset; (ii) a model trained on a rotation-augmented dataset; (iii) a model trained on an FGSM adversarial-sample-augmented dataset; (iv) a model trained on a PGD adversarial-sample-augmented dataset; (v) a Bi-directional InfoGAN trained on the original dataset; and (vi) the proposed ADIS-GAN trained on the original dataset. The FGSM- and PGD-augmented models are included to probe the robustness of adversarial training against rotated images, and the Bi-directional InfoGAN is included to probe the robustness of a generative model without an affine inductive bias against rotated images.
As we can observe from Fig. 5, the models trained on the clean dataset and on the adversarially augmented datasets suffer under rotation. The PGD-Aug and FGSM-Aug results show that models trained on adversarial-sample-augmented data are not robust against rotation, and the Bi-directional InfoGAN result shows that a generative model without an affine inductive bias is not robust against rotation either. ADIS-GAN achieves over 98 percent accuracy across all rotation angles, within 1 percent of the model trained on the rotation-augmented dataset. This demonstrates the effectiveness of ADIS-GAN against rotation transformations.
IV-B Classification Accuracy on Adversarial Images
To test the robustness of the model against adversarial attacks, we create two kinds of adversarial samples, FGSM and PGD, with Foolbox.
(Table II: classification accuracy per model under no attack, FGSM (ε = 0.3), and PGD (ε = 0.3).)
From Table II we can observe that the models trained on the clean dataset and on the rotation-augmented dataset are vulnerable to adversarial attacks, which shows that affine-transformation-augmented training is orthogonal to adversarial attacks. PGD is a stronger attack than FGSM and has a higher attack success rate, which exposes a limitation of adversarial-sample-augmented training: a stronger attack can defeat a model trained with a weaker one. ADIS-GAN performs consistently well under both attacks, which suggests that it captures a smoother data distribution covering larger adversarial manifolds.
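As a sketch of how FGSM constructs such samples, the single gradient-sign step can be written in NumPy for a hypothetical linear softmax classifier (the weights, shapes, label, and ε value here are illustrative, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))   # hypothetical linear classifier: logits = W @ x
x = rng.uniform(size=784)        # a flattened 28x28 "image" with pixels in [0, 1]
y = 3                            # assumed true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Gradient of the cross-entropy loss w.r.t. the input x for logits = W @ x:
# dL/dx = W^T (softmax(W @ x) - onehot(y))
p = softmax(W @ x)
onehot = np.zeros(10)
onehot[y] = 1.0
grad_x = W.T @ (p - onehot)

# FGSM step: move each pixel by eps in the direction of the loss gradient's
# sign, then clip back to the valid pixel range.
eps = 0.3
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

PGD iterates essentially this step with a smaller step size, re-projecting onto the ε-ball around the original image after each iteration, which is why it is the stronger attack.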
IV-C Interpretability
The Affine Disentangled GAN (ADIS-GAN) can express the data distribution in a more interpretable way, which mitigates the black-box problem of deep learning to some degree. In this section, the mapping between rotation angle and latent vectors is shown to explain how the algorithm understands rotational knowledge, and generated images are shown to demonstrate the relationship between latent space and data space.
As we can observe from Fig. 6, the latent vector values have a linear relationship with the rotation angle. This explains why ADIS-GAN is robust against image rotation: it can interpret the rotation angle of a given image, which provides information that helps to understand potential misclassifications.
Fig. 7 and Fig. 8 show how the generated images change with latent vectors. These figures illustrate how the algorithm represents information.
V Conclusion and Future Work
Deep CNN based vision systems play a major role in the autonomous vehicle (AV) perception system. However, deep CNNs are not robust against affine transformations and adversarial attacks. The former can occur during normal driving scenarios, when the vehicle passes water puddles or drives on rural roads, while the latter can occur when a malicious attack is mounted. It is necessary to overcome these challenges before integrating deep CNN based vision systems into safety-related applications such as autonomous vehicles.
In this paper, we present the Affine Disentangled GAN (ADIS-GAN), which is robust to both rotation transformations and adversarial attacks, and we introduce the development of the affine regularizer. We show that affine-transformation-augmented training and adversarial-augmented training are orthogonal: each can only defend against the type of attack it has been trained on.
The affine regularizer captures the symmetry transformations between latent space and image space under affine transformation. We believe there are many such symmetries in the physical world. By successfully mapping those symmetries, we can make deep learning algorithms more robust and interpretable.
-  Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. CoRR, abs/1707.07397, 2017.
-  Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. CoRR, abs/1608.04644, 2016.
-  Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.
-  Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. CoRR, abs/1605.09782, 2016.
-  Logan Engstrom, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. A rotation and a translation suffice: Fooling cnns with simple transformations. CoRR, abs/1712.02779, 2017.
-  Robert Gens and Pedro M Domingos. Deep symmetry networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2537–2545. Curran Associates, Inc., 2014.
-  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
-  Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
-  Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loïc Matthey, Danilo J. Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. CoRR, abs/1812.02230, 2018.
-  Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. Transforming auto-encoders. In Proceedings of the 21th International Conference on Artificial Neural Networks - Volume Part I, ICANN’11, pages 44–51, Berlin, Heidelberg, 2011. Springer-Verlag.
-  Tobias Hinz and Stefan Wermter. Inferencing based on unsupervised learning of disentangled representations. CoRR, abs/1803.02627, 2018.
-  Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. CoRR, abs/1506.02025, 2015.
-  Angjoo Kanazawa, Abhishek Sharma, and David W. Jacobs. Locally scale-invariant convolutional neural networks. CoRR, abs/1412.5104, 2014.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran Associates Inc.
-  Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2016.
-  Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.
-  Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. CoRR, abs/1811.12359, 2018.
-  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. CoRR, abs/1706.06083, 2017.
-  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. CoRR, abs/1511.04599, 2015.
-  Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against deep learning systems using adversarial examples. CoRR, abs/1602.02697, 2016.
-  Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
-  Jonas Rauber, Wieland Brendel, and Matthias Bethge. Foolbox v0.8.0: A python toolbox to benchmark the robustness of machine learning models. CoRR, abs/1707.04131, 2017.
-  Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models. CoRR, abs/1805.06605, 2018.
-  Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In ACM Conference on Computer and Communications Security, pages 1528–1540. ACM, 2016.
-  Kihyuk Sohn and Honglak Lee. Learning invariant representations with local transformations. In Proceedings of the 29th International Conference on Machine Learning, ICML'12, pages 1339–1346, USA, 2012. Omnipress.
-  Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
-  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013.
-  Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick D. McDaniel. Ensemble adversarial training: Attacks and defenses. CoRR, abs/1705.07204, 2017.
Appendix A Two Affine Transformation Orders
In principle, there are 6 sequences of affine transformations (R for rotation, K for skew, Z for zoom):
RKZ - 1,
RZK - 1,
KRZ - 2,
KZR - 2,
ZKR - 2,
ZRK - 1.
The zoom operation can be inserted arbitrarily in the sequence, since it is a commutative operation. On the other hand, rotation and skew are non-commutative, therefore, their order in the sequence is essential. We can categorize the sequences according to whether the rotation operator is applied before skew or vice versa, leading to two different categories.
The affine transformation RKZ, which is an example of the first category, can be written as:
The affine transformation KRZ, which is an example of the second category, can be written as:
The matrix elements are computed as follows (left: category 1, right: category 2):
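Assuming R(θ) is the rotation, K(k) a horizontal shear, and Z = diag(z_x, z_y) the zoom (this parameterization is our assumption), multiplying out the two orders gives:

```latex
\text{Category 1 (RKZ):}\quad
A = \begin{pmatrix}
 z_x\cos\theta & z_y(k\cos\theta - \sin\theta) \\
 z_x\sin\theta & z_y(k\sin\theta + \cos\theta)
\end{pmatrix},
\qquad
\text{Category 2 (KRZ):}\quad
A = \begin{pmatrix}
 z_x(\cos\theta + k\sin\theta) & z_y(k\cos\theta - \sin\theta) \\
 z_x\sin\theta & z_y\cos\theta
\end{pmatrix}
```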
We can observe that A12 and A21 are the same for both categories, while A11 and A22 are different.
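This observation can be checked numerically, again assuming a horizontal-shear skew and diagonal zoom (the parameter values below are arbitrary):

```python
import numpy as np

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def skew(k):
    # Horizontal shear; this parameterization is our assumption.
    return np.array([[1.0, k], [0.0, 1.0]])

def zoom(zx, zy):
    return np.diag([zx, zy])

t, k, zx, zy = 0.4, 0.3, 1.2, 0.8
cat1 = rot(t) @ skew(k) @ zoom(zx, zy)   # RKZ (category 1)
cat2 = skew(k) @ rot(t) @ zoom(zx, zy)   # KRZ (category 2)

# A12 and A21 agree across the two categories, A11 and A22 do not.
same = (np.isclose(cat1[0, 1], cat2[0, 1]), np.isclose(cat1[1, 0], cat2[1, 0]))
```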