Affine Disentangled GAN for Interpretable and Robust AV Perception

07/06/2019
by   Letao Liu, et al.

Autonomous vehicles (AV) have progressed rapidly with the advancements in computer vision algorithms. The deep convolutional neural network, as the main contributor to this advancement, has boosted classification accuracy dramatically. However, the discovery of adversarial examples reveals a generalization gap between the dataset and the real world. Furthermore, affine transformations may also confuse computer vision based object detectors. Such degradation of the perception system is undesirable for safety-critical systems such as autonomous vehicles. In this paper, a deep learning system is proposed: Affine Disentangled GAN (ADIS-GAN), which is robust against both affine transformations and adversarial attacks. It is demonstrated that conventional data augmentation for affine transformations and adversarial attacks are orthogonal, each defending only against the perturbations it has been trained on, while ADIS-GAN can handle both attacks at the same time. Useful information such as the image rotation angle and scaling factor is also recovered by ADIS-GAN. On the MNIST dataset, ADIS-GAN achieves over 98 percent classification accuracy within 30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks.


I Introduction & Related Work

Autonomous vehicles (AV) have received much attention in recent years. One pillar of AV perception systems is the RGB data captured by the camera. Through the RGB data, the system can understand its surrounding environment, including the location of vehicles, pedestrians and other crucial information. The deep convolutional neural network (CNN) is a widely accepted cutting-edge computer vision algorithm [15, 9, 27] for processing the RGB data, e.g., detecting objects and segmenting urban scenes. Despite the tremendous success accomplished by deep CNN models, adversarial examples show that there are still reliability and robustness issues (see Fig. 1). Adversarial images are visually indistinguishable from clean images for a human viewer, yet state-of-the-art classifiers make wrong predictions with high confidence on them. Since the first publication on adversarial attacks [28], numerous studies have appeared on this topic [8, 16, 25, 29, 20, 2, 21, 19, 1], mainly considering pixel-level perturbations. Such adversarial attacks on CNNs raise concerns that the perception system of an autonomous vehicle can be maliciously hacked.

Fig. 1: An adversarial attack on a car image. Left: without the adversarial attack, the classification result is “minivan”. Right: with an FGSM adversarial attack, the classifier labels the image as “car wheel” instead of “minivan”. Testing algorithm: ResNet-50 [9].

In [5], it was shown that simple affine transformations (e.g., rotations) can cause deep CNNs to misclassify images. The images captured by the perception system could experience similar affine distortions during normal driving scenarios, e.g., when the vehicle passes water puddles (see Fig. 2) or drives on rural roads. Both adversarial attacks and affine distortions need to be addressed before deep CNN vision systems can be integrated into safety-critical applications like autonomous vehicles at large scale.

Fig. 2: When the vehicle hits a water puddle, the images captured by the camera will be tilted. As a result, the RGB-based object detector may misclassify objects in scenes.

Fig. 3: Rotation of vehicle images. The vehicle tends to be detected incorrectly when the image is tilted. Testing algorithm: Inception v3 [27].

GAN [7] has been widely studied and utilized since its invention. It is a generative model that captures a high-dimensional data distribution through an adversarial process. It can generate images that mimic the training images, such as handwritten digits, animals, vehicles, etc. Deep Convolutional GAN [22] introduces the convolution mechanism into the GAN structure by using deconvolution layers in the generator network. Bi-directional GAN [4] further provides a pathway to convert data from image space back to latent space with an additional encoder network. InfoGAN [3] utilizes a disentangled representation that separates features and noise in the latent space; the separated features can represent categorical and continuous attributes of the training images. In [18], the issue of inductive bias in disentangled representations is discussed. In [10], the concept of symmetry groups is introduced to define disentanglement behaviour. DefenceGAN [24] uses a GAN as a defence method against adversarial attacks.

Several studies address the invariance of deep CNNs to affine transformations. In [13], distortion-robust classification is achieved by inserting a Spatial Transformer layer into the given network. However, the affine parameters are not presented in a disentangled manner, which makes the approach less interpretable. Transforming Autoencoder [11] uses an auto-encoder to model 2D affine transformations applied to images; the trained generative model can learn to generate transformed images in a disentangled way, but it does not tackle the improvement of classification accuracy. In [6], transformations that preserve object identity are analyzed in the symmetry-group framework. In [14, 26], filter banks are designed to make the classifier transformation invariant.

Our Contributions

  1. We introduce Affine Disentangled GAN (ADIS-GAN), which is robust against both affine transformations and adversarial attacks. Although it is an unsupervised algorithm, it achieves classification accuracies comparable to those of state-of-the-art supervised learning algorithms.

  2. We show that affine transformation augmented training and adversarial augmented training are orthogonal: each can only defend against the type of attack it has been trained on.

  3. Affine Disentangled GAN is more interpretable, providing information that helps to understand potential misclassifications. On the MNIST dataset, it achieves over 98 percent classification accuracy within 30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks.

II Preliminaries

II-A Generative Adversarial Network

GAN [7] is a generative model that captures a high-dimensional data distribution through an adversarial process: a mini-max game between the generator and the discriminator. The generator tries to produce images that are similar to real ones, while the discriminator judges whether the images are generated or real. During the training process, the generator will create images that do not belong to the original dataset. Those images may prevent the model from overfitting, so the model is more likely to learn a smoother data distribution that covers adversarial samples. The vanilla GAN formulation is:

$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ (1)

A sample $z$ from a standard distribution $p_z$ in the latent space is transferred to data space through the generator $G$. The discriminator $D$ judges whether samples come from the training dataset $p_{data}$ or the generated distribution $p_g$.
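
As an illustration, here is a minimal PyTorch sketch of the mini-max objective (1). The discriminator D (image to probability) and generator G (latent to image) are placeholder networks, not the authors' implementation.

```python
# Minimal sketch of the vanilla GAN objective (1) in PyTorch.
# D maps an image batch to probabilities in (0, 1); G maps latent
# codes to images. Both are placeholders.
import torch

def gan_losses(D, G, real, z):
    fake = G(z)
    # Discriminator ascends log D(x) + log(1 - D(G(z))),
    # so we minimize the negative.
    d_loss = -(torch.log(D(real)).mean()
               + torch.log(1.0 - D(fake.detach())).mean())
    # Generator: the common non-saturating variant maximizes log D(G(z)).
    g_loss = -torch.log(D(fake)).mean()
    return d_loss, g_loss
```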

Bi-directional GAN [4] adds an encoder $E$ to the vanilla GAN, which makes both image-to-latent and latent-to-image transformations possible. The encoder and generator together can be treated as a filter: the reconstructed images keep only meaningful information and discard noise such as adversarial perturbations. The Bi-directional GAN formulation is:

$\min_{G,E} \max_D V(D,E,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x, E(x))] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z), z))]$ (2)

InfoGAN [3] can assign the latent vectors semantic meanings, such as categorical and continuous information (e.g., the skew of an image), by maximizing the mutual information between the generated latent code and the reconstructed latent code. The InfoGAN formulation is:

$\min_G \max_D V_I(D,G) = V(D,G) - \lambda I(c; G(z,c))$ (3)

Bi-directional InfoGAN [12] uses the encoder $E$ instead of an auxiliary network to reconstruct the latent vectors. The Bi-directional InfoGAN formulation is:

$\min_{G,E} \max_D V_I(D,E,G) = V(D,E,G) - \lambda I(c; E(G(z,c)))$ (4)
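
In practice the mutual-information terms in (3) and (4) are maximized through a variational lower bound; for continuous codes with a fixed-variance Gaussian posterior this reduces to a mean-squared reconstruction error. A sketch under that assumption, with E and G as placeholder networks:

```python
# Sketch: variational lower bound on I(c; E(G(z, c))) from (4).
# Assuming a fixed-variance Gaussian posterior over continuous codes,
# maximizing the bound amounts to minimizing an MSE between the sampled
# code c and the code the encoder recovers from the generated image.
import torch

def mi_lower_bound_loss(E, G, z, c):
    x_fake = G(torch.cat([z, c], dim=1))  # generate from noise + code
    c_rec = E(x_fake)                     # encoder recovers the code
    return ((c_rec - c) ** 2).mean()      # minimize -> maximize the MI bound
```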

II-B Affine Transformation Matrix

Inspired by [13], we utilize the affine matrix as a regularizer in our model. The conventional affine matrix is a 2-by-3 matrix, defined as:

$A = \begin{bmatrix} A_{11} & A_{12} & t_x \\ A_{21} & A_{22} & t_y \end{bmatrix}$ (5)

$t_x$ and $t_y$ represent the horizontal and vertical translation parameters, respectively. These two parameters can be removed from the affine matrix without affecting the other affine properties. After removing the translation parameters, the affine matrix becomes a 2-by-2 matrix. It can be decomposed into rotation, skew, and zoom matrices respectively (see Appendix for an alternative ordering):

$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix} \begin{bmatrix} z_1 & 0 \\ 0 & z_2 \end{bmatrix}$ (6)
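
For concreteness, a small NumPy helper that composes the 2-by-2 matrix of (6) from its rotation, skew, and zoom factors (an illustrative sketch, not the authors' code):

```python
# Compose the 2x2 affine matrix of (6): rotation x skew x zoom.
import numpy as np

def affine_matrix(theta, k, z1, z2):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # rotation by theta
    K = np.array([[1.0, k],
                  [0.0, 1.0]])                       # skew by k
    Z = np.diag([z1, z2])                            # horizontal/vertical zoom
    return R @ K @ Z                                 # RKZ order (category 1)
```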

III System Description

III-A Affine Regularizer

Since the images captured by the camera will usually not be skewed during normal driving scenarios, we focus only on the rotation and zoom attributes in this paper. Discarding the skew matrix, the affine matrix simplifies as follows:

$A = \begin{bmatrix} z_1 \cos\theta & -z_2 \sin\theta \\ z_1 \sin\theta & z_2 \cos\theta \end{bmatrix}$ (7)

where:

$A_{11} = z_1 \cos\theta$ (8)
$A_{12} = -z_2 \sin\theta$ (9)
$A_{21} = z_1 \sin\theta$ (10)
$A_{22} = z_2 \cos\theta$ (11)

Assume each image is composed of an affine matrix $A^{x}$ and a base image $x_b$. The input image $x$ from the training dataset can then be expressed as:

$x = A^{x} x_b$ (12)

The scaled image $x_s$, transformed from $x$ with a predefined affine matrix $A^{s}$, can be expressed as:

$x_s = A^{s} x = A^{s} A^{x} x_b$ (13)

With the same assumption, the scaled image can also be expressed in terms of its own affine matrix $A^{x_s}$:

$x_s = A^{x_s} x_b$ (14)

Through simple matrix manipulation we obtain the affine regularizer:

$A^{x_s} = A^{s} A^{x}$ (15)
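
A short NumPy sketch that instantiates the regularizer (15) with the rotation-and-zoom parameterization (8)-(11); the helper name and sample parameters are illustrative assumptions:

```python
# Numerically instantiate the affine regularizer (15): A^{x_s} = A^s A^x.
import numpy as np

def affine_matrix_rz(theta, z1, z2):
    # Rotation + zoom matrix of (7), entries as in (8)-(11).
    return np.array([[z1 * np.cos(theta), -z2 * np.sin(theta)],
                     [z1 * np.sin(theta),  z2 * np.cos(theta)]])

A_x = affine_matrix_rz(np.deg2rad(10), 1.1, 0.9)  # matrix of the input image
A_s = affine_matrix_rz(np.deg2rad(20), 1.0, 1.0)  # predefined transform (pure rotation)
A_xs = A_s @ A_x  # matrix of the transformed image, as required by (15)
```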

III-B Model Architecture

The Affine Disentangled GAN (ADIS-GAN) maximizes the mutual information between the generated affine matrix $A^{g}$ and the reconstructed affine matrix $A^{e}$ under the assumption of the affine regularizer (see Fig. 4). Three continuous latent vectors are assigned to $\theta$, $z_1$, and $z_2$ respectively. These continuous latent vectors are sampled from a random uniform distribution and can be converted to $(A_{11}, A_{12}, A_{21}, A_{22})$ through (8)-(11). Similarly, training images and transformed images are encoded to continuous latent vectors through the encoder; these can be further converted to $A^{e}$ and $A^{e_s}$ through (8)-(11). Finally, the mutual information between $A^{g}$ and $A^{e}$ is maximized via (15). The updated loss function with the affine regularizer is:

$\min_{G,E} \max_D V(D,E,G) - \lambda I(A^{g}; A^{e})$

where $V(D,E,G)$ is the Bi-directional GAN objective (2) and the reconstructed affine matrices are related through the regularizer (15).

Fig. 4: Model architecture. Diamond boxes are the affine regularizers derived in Section III-A. Rectangular boxes are variables. Ellipse boxes are neural networks.
Layer   | Encoder                          | Generator                 | Discriminator
Input   | 28x28                            | 72                        | 28x28, 72
NN      | 3x3x16 conv, LReLU, dropout      | 128x7x7 fc, ReLU          | 3x3x16 conv, LReLU, dropout, BN
NN      | 3x3x32 conv, LReLU, dropout      | 3x3x16 deconv, ReLU, BN   | 3x3x32 conv, LReLU, dropout, BN
NN      | 3x3x64 conv, LReLU, dropout      | 3x3x32 deconv, ReLU, BN   | 3x3x64 conv, LReLU, dropout, BN
NN      | 3x3x128 conv, LReLU, dropout, BN | 3x3x64 deconv, ReLU, BN   | 3x3x128 conv, LReLU, dropout, BN
NN      | 1024 fc, LReLU                   | 3x3x128 deconv, ReLU, BN  | 1024 fc, LReLU
NN      | -                                | 3x3x1 conv, tanh          | 1024 fc, LReLU
NN      | -                                | -                         | 1 fc, sigmoid
Output  | categorical: 10 fc, softmax      | image: 28x28              | real/fake: 1
Output  | continuous: 3 fc                 | -                         | -
Output  | noise: 59 fc, tanh               | -                         | -

NN for neural network, fc for fully connected, conv for convolution, deconv for deconvolution, BN for batch normalization.

TABLE I: Layer Information
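
As a sketch, the encoder column of Table I expressed as a PyTorch module. Kernel sizes and widths follow the table; strides, padding, dropout rates, and the LeakyReLU slope are unspecified in the paper and therefore assumptions:

```python
# Sketch of the Table I encoder: four conv blocks, a 1024-unit fc trunk,
# and three output heads (categorical 10, continuous 3, noise 59).
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        def block(c_in, c_out, bn=False):
            layers = [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                      nn.LeakyReLU(0.1), nn.Dropout(0.25)]
            if bn:
                layers.append(nn.BatchNorm2d(c_out))
            return layers
        self.conv = nn.Sequential(*block(1, 16), *block(16, 32),
                                  *block(32, 64), *block(64, 128, bn=True))
        self.fc = nn.Sequential(nn.Flatten(), nn.LazyLinear(1024),
                                nn.LeakyReLU(0.1))
        self.categorical = nn.Sequential(nn.Linear(1024, 10), nn.Softmax(dim=1))
        self.continuous = nn.Linear(1024, 3)                 # theta, z1, z2
        self.noise = nn.Sequential(nn.Linear(1024, 59), nn.Tanh())

    def forward(self, x):
        h = self.fc(self.conv(x))
        return self.categorical(h), self.continuous(h), self.noise(h)
```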

IV Experimental Results

As a proof-of-concept experiment, we test our algorithm on the MNIST dataset [17]. In Section IV-A, we consider experiments with rotated images. In Section IV-B, we explore adversarial attacks. In Section IV-C, we elaborate on the interpretability of the proposed algorithm.

IV-A Classification Accuracy on Rotated Images

To test the robustness of the model against rotated images, we purposely rotate the input images from -30 to +30 degrees. Six models are tested: a model trained on the original dataset, a model trained on a rotation-augmented dataset, models trained on FGSM and PGD adversarial-sample-augmented datasets, a Bi-directional InfoGAN trained on the original dataset, and the proposed ADIS-GAN trained on the original dataset. The FGSM- and PGD-augmented models are included to assess the robustness of adversarial training against rotated images; the Bi-directional InfoGAN is included to assess the robustness of a generative model without an affine inductive bias.
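
A minimal sketch of how such rotated test batches can be produced with torchvision; the -30 to +30 degree range follows the text, while the 5-degree step is an assumption:

```python
# Build rotated copies of an MNIST test batch for the sweep in Fig. 5.
import torchvision.transforms.functional as TF

def rotated_copies(images, degrees=range(-30, 31, 5)):
    """images: tensor of shape (N, 1, 28, 28); returns one rotated batch per angle."""
    return {d: TF.rotate(images, angle=float(d)) for d in degrees}
```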

Fig. 5: We purposely rotate the images from the MNIST test dataset from -30 degrees to +30 degrees as the input images. Affine Disentangled GAN (ADIS-GAN) achieves over 98 percent accuracy across all rotation angles, differing by less than 1 percent from the model trained on the rotation-augmented dataset.

As we can observe from Fig. 5, the models trained on the clean dataset and on the adversarially augmented datasets suffer from rotation transformations. The PGD Aug and FGSM Aug curves show that models trained on adversarial-sample-augmented datasets are not robust against rotations. The Bi-directional InfoGAN curve shows that a generative model without an affine inductive bias is not robust against rotations either. ADIS-GAN achieves over 98 percent accuracy across all rotation angles, differing by less than 1 percent from the model trained on the rotation-augmented dataset. This demonstrates the effectiveness of ADIS-GAN against rotation transformations.

IV-B Classification Accuracy on Adversarial Images

To test the robustness of the model against adversarial attacks, we create two kinds of adversarial samples with FoolBox [23].
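
For reference, the FGSM perturbation of Table II written directly in PyTorch rather than through FoolBox; a sketch assuming pixel values in [0, 1] and a cross-entropy classifier:

```python
# One-step FGSM attack: perturb along the sign of the input gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.3):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Ascend the loss, then clamp back to the valid pixel range [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```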

Model        | No Attack | FGSM (ε = 0.3) | PGD (ε = 0.3)
Original     | 99.13     | 25.88          | 0.10
Rotation Aug | 98.98     | 11.37          | 0
FGSM Aug     | 98.60     | 86.57          | 61.01
PGD Aug      | 98.93     | 91.55          | 85.88
ADIS-GAN     | 98.22     | 93.10          | 96.53

All values are classification accuracies in percent. For the PGD attack, binary search is set to False for speed. The defence method for ADIS-GAN is similar to [24]: it acts as a filter before the given classifier.

TABLE II: Classification Accuracy against Adversarial Attacks

From Table II we can observe that the models trained on the clean dataset and on the rotation-augmented dataset are vulnerable to adversarial attacks, which shows that affine-transformation-augmented training is orthogonal to adversarial robustness. PGD is a relatively stronger attack than FGSM and has a higher attack success rate, which shows a limitation of adversarial-sample-augmented training: a stronger attack can defeat a model trained on a weaker one. ADIS-GAN performs consistently well under both attacks, which suggests it captures a smoother data distribution that covers a larger portion of the adversarial manifold.
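
A sketch of the filter-style defence noted under Table II: the encoder-generator pair reconstructs the input before classification, so pixel-level perturbations that fall outside the learned manifold are discarded. E, G, and classifier are placeholder networks:

```python
# Reconstruct-then-classify defence, similar in spirit to [24].
import torch

@torch.no_grad()
def defended_predict(E, G, classifier, x):
    z = E(x)        # project the (possibly attacked) image to latent space
    x_clean = G(z)  # reconstruct; adversarial noise is filtered out
    return classifier(x_clean).argmax(dim=1)
```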

IV-C Interpretability

Affine Disentangled GAN (ADIS-GAN) can express the data distribution in a more interpretable way, which mitigates the black-box problem of deep learning to some degree. In this section, the mapping between rotation angle and latent vector is shown to explain how the algorithm acquires rotational knowledge. Generated images are shown to demonstrate the relationship between the latent space and the data space.

Fig. 6: The mapping between rotation angle and latent vector value. The latent vector value has a linear relationship with the rotation angle. The small disturbance around 0 degrees is due to the varying writing styles of handwritten digits.

As we can observe from Fig. 6, the latent vector values have a linear relationship with the rotation angle. This explains why ADIS-GAN is robust against image rotation: it can interpret the rotation angle of a given image, which provides information that helps to understand potential misclassifications.
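
A brief illustration of why the code is interpretable: under (8) and (10), the rotation angle can be read directly off the reconstructed matrix entries (assuming $z_1 > 0$):

```python
# Recover the rotation angle from the affine-matrix entries of (8) and (10):
# A11 = z1*cos(theta), A21 = z1*sin(theta)  =>  theta = atan2(A21, A11).
import numpy as np

def rotation_angle_deg(A11, A21):
    return np.degrees(np.arctan2(A21, A11))
```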

Fig. 7: Generated images as the rotation latent vector varies.

Fig. 8: Generated images as the horizontal zoom latent vector varies.

Fig. 7 and Fig. 8 show how the generated images change with latent vectors. These figures illustrate how the algorithm represents information.

V Conclusion and Future Work

Deep CNN based vision systems play a major role in the autonomous vehicle (AV) perception system. However, deep CNNs are not robust against affine transformations and adversarial attacks. The former can occur during normal driving scenarios, when the vehicle hits water puddles or drives on rural roads, while the latter can occur when a malicious attack is mounted. It is necessary to overcome these challenges before integrating deep CNN based vision systems into safety-critical applications such as autonomous vehicles.

In this paper, we present the Affine Disentangled GAN (ADIS-GAN), which is robust to both rotation transformations and adversarial attacks, and we introduce the derivation of the affine regularizer. We show that affine transformation augmented training and adversarial augmented training are orthogonal: each can only defend against the type of attack it has been trained on.

The affine regularizer captures the symmetry transformations between latent space and image space under affine transformation. We believe many such symmetries exist in the physical world. By successfully modelling those symmetries, we can make deep learning algorithms more robust and interpretable.

References

Appendix A Two Affine Transformation Orders

In principle, there are 6 possible sequences of affine transformations (R for rotation, K for skew, Z for zoom), each labelled below with its category:

  • RKZ - category 1,

  • RZK - category 1,

  • KRZ - category 2,

  • KZR - category 2,

  • ZKR - category 2,

  • ZRK - category 1.

The zoom operation can be inserted anywhere in the sequence, since it commutes with the other operations. Rotation and skew, on the other hand, do not commute; therefore, their relative order is essential. We can categorize the sequences according to whether rotation is applied before skew or vice versa, leading to the two categories above.

The affine transformation RKZ, an example of the first category, can be written as:

$A_{RKZ} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix} \begin{bmatrix} z_1 & 0 \\ 0 & z_2 \end{bmatrix}$ (16)

The affine transformation KRZ, an example of the second category, can be written as:

$A_{KRZ} = \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} z_1 & 0 \\ 0 & z_2 \end{bmatrix}$ (17)

The matrix elements are computed as follows (left: category 1, right: category 2):

$A_{11} = z_1 \cos\theta$ | $A_{11} = z_1(\cos\theta + k \sin\theta)$ (18)
$A_{12} = z_2(k \cos\theta - \sin\theta)$ | $A_{12} = z_2(k \cos\theta - \sin\theta)$ (19)
$A_{21} = z_1 \sin\theta$ | $A_{21} = z_1 \sin\theta$ (20)
$A_{22} = z_2(k \sin\theta + \cos\theta)$ | $A_{22} = z_2 \cos\theta$ (21)

We can observe that $A_{12}$ and $A_{21}$ are the same for both categories, while $A_{11}$ and $A_{22}$ are different.
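
This can be checked numerically; a small NumPy sketch with arbitrary sample parameters:

```python
# Verify that the RKZ and KRZ orderings share A12 and A21 but not A11, A22.
import numpy as np

theta, k, z1, z2 = 0.3, 0.2, 1.1, 0.9  # arbitrary sample parameters
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
K = np.array([[1.0, k], [0.0, 1.0]])
Z = np.diag([z1, z2])

A_cat1 = R @ K @ Z  # category 1: rotation before skew, eq. (16)
A_cat2 = K @ R @ Z  # category 2: skew before rotation, eq. (17)
assert np.allclose(A_cat1[0, 1], A_cat2[0, 1])     # A12 agrees
assert np.allclose(A_cat1[1, 0], A_cat2[1, 0])     # A21 agrees
assert not np.isclose(A_cat1[0, 0], A_cat2[0, 0])  # A11 differs
assert not np.isclose(A_cat1[1, 1], A_cat2[1, 1])  # A22 differs
```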