Counterstrike: Defending Deep Learning Architectures Against Adversarial Samples by Langevin Dynamics with Supervised Denoising Autoencoder

05/30/2018 ∙ Vignesh Srinivasan et al. ∙ Berlin Institute of Technology (Technische Universität Berlin) and Fraunhofer

Adversarial attacks on deep learning models have been demonstrated to be imperceptible to a human, while decreasing the model performance considerably. Attempts to provide invariance against such attacks have denoised adversarial samples so that only cleaned samples are sent to the classifier. In a similar spirit, this paper proposes a novel effective strategy that allows relaxing adversarial samples onto the underlying manifold of the (unknown) target class distribution. Specifically, given an off-manifold adversarial example, our Metropolis-adjusted Langevin algorithm (Mala) guided through a supervised denoising autoencoder network (sDAE) allows us to drive the adversarial samples towards high density regions of the data generating distribution. So, in a nutshell, the adversarial example is transformed back from off-manifold onto the data manifold for which the learning model was originally trained and where it can perform well and robustly. Experiments on various benchmark datasets show that our novel Malade method exhibits a high robustness against blackbox and whitebox attacks and outperforms state-of-the-art defense algorithms.


1 Introduction

Deep neural networks (DNNs) [1, 2, 3, 4, 5] are known to be susceptible to adversarial attacks, i.e., examples crafted intentionally by adding slight noise to the input [6, 7, 8, 9]. The interesting aspect hereby is that such adversarial noise cannot be recognized by humans, but considerably decreases the accuracy of a well-trained DNN [10, 11].

Adversarial attacks can be broadly classified into blackbox and whitebox attacks [7, 12, 13, 14]. In blackbox attacks, the attacker is assumed to have access to the classifier outputs only. Here, the attacker can first train a substitute network from the data and the classifier outputs, and then modify the original data samples by gradient descent in order to maximize the classifier output of the substitute network for a wrong label. In whitebox attacks, the attacker is assumed to have full knowledge of the target classification system, including the architecture and the weights of the DNNs and the respective defense strategy. In this scenario, attackers can omit the training procedure of the substitute network and directly craft the adversarial noise. For defense, a number of strategies have been proposed: (i) incorporation of adversarial examples in the training phase [15, 16, 17, 18, 19], (ii) preprocessing such as compression and decompression that helps destroy the elaborate spatial coherence hidden in the adversarial samples [20, 21, 22], and (iii) projection of the adversarial examples onto the estimated data manifold [23, 24, 19].

In this paper, we propose a novel defense strategy: adversarial examples are relaxed towards the estimated high density area of the data manifold. Figure 1 gives an overview of the approach. The adversarial sample (red circle) is created by moving the original sample (black circle) away from the data manifold for which the learning machine to be attacked was trained. It is assumed that this sample is placed into a low density area of the training data, where the DNN is not well trained, but still close to the original high density area. Under this assumption, a counterstrike to the adversarial attack is to relax the sample from the low density area (high energy state) to the closest high density area (low energy state), as this makes the classifier more confident and helps to remove the adversarial noise. To efficiently relax adversarial samples, we use the Metropolis-adjusted Langevin algorithm (Mala) [25, 26], an efficient Markov chain Monte Carlo (MCMC) sampling method. Mala requires the gradient of the energy function, which corresponds to the gradient of the (negative) log probability, i.e., the score function. One option for estimating this gradient of the log probability of the input is to use a denoising autoencoder (DAE) [27, 28]. Naively applying Mala with a DAE would, however, have an apparent drawback: if there exist high density regions (clusters) where the labels of samples are mixed, then Mala with a DAE could drive the input sample to an area with a wrong label (see green line in Fig. 1), which degrades the classification accuracy. To overcome this drawback, we propose to replace the DAE with a supervised variant, called the supervised DAE (sDAE). The sDAE can be trained by minimizing the sum of the reconstruction loss and the cross entropy loss of the classifier outputs. Mala with sDAE, which we call Malade, applied on the input/feature space of the classifier, thus drives the adversarial samples towards high density regions of the data generating distribution (see blue line in Fig. 1), where the classifier is well trained to predict the correct label. Figure 2 compares Mala and Malade on a real image from the Mnist dataset. Since Malade uses prior knowledge on the target labels, it provides a much better projection onto the manifold than Mala (with an unsupervised DAE).

Conceptually, our novel strategy is inspired by methods where the adversarial samples are projected to the closest point on the data manifold [23, 24, 18, 19] through an optimization procedure. In contrast, our method does not require test-time optimization and is therefore much faster than existing state-of-the-art methods. We demonstrate the high robustness of Malade against blackbox and whitebox attacks on various benchmark datasets and find better or comparable performance to state-of-the-art defense methods at significantly lower computational cost.

Figure 1: An adversarial sample (red circle) is created by moving the data point (black circle) away from the data manifold, here the manifold of images of digit "9". In this low density area, the DNN is not well trained and thus misclassifies the adversarial sample. Unsupervised sampling techniques such as Mala (green line) project the data point back to high density areas, however, not necessarily to the manifold of the original class. The proposed Malade algorithm (blue line) takes into account class information through the sDAE and thus projects the adversarial sample back to the manifold of images of digit "9".
Figure 2: The top-left image is the original image, and the next image is the adversarial image crafted for it. The top row shows the steps N = 1, 10, 25 of Malade (25 steps are shown here for visualization purposes), and the bottom row shows sampling using Mala. Since Malade has prior knowledge of the target labels, the gradient flow drives it towards the right class. Although Mala provides a good defense, the flow of the gradients can lead it to a cluster of a neighboring class. Further images can be found in Appendix C.

2 Attack and Defense Strategies

2.1 Attacking Strategies

We introduce the three most popular attacking strategies for whitebox attacks. For blackbox attacks, the same strategies can be applied after a substitute network is trained to mimic the classifier outputs.

Fast gradient sign method

Fast gradient sign method (FGSM) [6] is one of the simplest and fastest attack algorithms. Given an original sample $x$ and its corresponding label $y$ in the 1-of-$K$ expression, FGSM performs a gradient step to move away from the true label:

$$x_{\mathrm{adv}} = x + \varepsilon \cdot \mathrm{sign}\big(\nabla_x J(x, y)\big), \qquad (1)$$

where $J(x, y)$ is the cross entropy loss of the classifier output for the true label $y$, and $\varepsilon$ is the step size controlling the distance from the original sample. Eq.(1) corresponds to the untargeted attack, where the goal of the attacker is to make the classifier give a wrong label. By replacing the second term with $-\varepsilon \cdot \mathrm{sign}\big(\nabla_x J(x, y_{\mathrm{target}})\big)$, Eq.(1) gives the targeted attack, where the attacker tries to make the classifier give a specific target label $y_{\mathrm{target}}$. The gradient step of FGSM can be applied iteratively, which naturally strengthens the attack, while the adversarial noise becomes easier for a human to detect.
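For concreteness, the sketch below shows one way the FGSM update of Eq.(1) could be implemented in PyTorch. The classifier `model`, the pixel range [0, 1], and the default `eps=0.3` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.3, y_target=None):
    """One-step FGSM (Eq. 1). If y_target is given, the targeted variant is
    used (move towards the target label instead of away from the true one)."""
    x_adv = x.clone().detach().requires_grad_(True)
    label = y if y_target is None else y_target
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    sign = 1.0 if y_target is None else -1.0  # ascend true-label loss / descend target-label loss
    x_adv = x_adv + sign * eps * x_adv.grad.sign()
    # pixel values are assumed to lie in [0, 1]
    return x_adv.detach().clamp(0.0, 1.0)
```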

R+FGSM

In R+FGSM, the gradient step (1) of FGSM is applied after a random perturbation [15], i.e.,

$$x_{\mathrm{adv}} = x' + (\varepsilon - \alpha) \cdot \mathrm{sign}\big(\nabla_{x'} J(x', y)\big), \quad \text{where } x' = x + \alpha \cdot \mathrm{sign}\big(\mathcal{N}(0, I)\big). \qquad (2)$$

The random noise added before applying FGSM helps to escape the non-smooth vicinity of the data point. The noise strength is controlled by the parameter $\alpha$.
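A corresponding sketch of the R+FGSM step of Eq.(2), reusing `fgsm_attack` and the imports from the previous snippet; the default values of `eps` and `alpha` are assumptions for illustration.

```python
def r_fgsm_attack(model, x, y, eps=0.3, alpha=0.05):
    """R+FGSM (Eq. 2): a random sign perturbation of strength alpha,
    followed by an FGSM step of size (eps - alpha) from the perturbed point."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0.0, 1.0)
    return fgsm_attack(model, x_rand, y, eps=eps - alpha)
```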

Carlini-Wagner

The Carlini-Wagner attack (CW) optimizes the noise $\delta$ by solving

$$\min_{\delta} \ \|\delta\|_p + c \cdot f(x + \delta), \qquad (3)$$

where $\|\cdot\|_p$ denotes the $\ell_p$-norm, $f(\cdot)$ is an objective function that causes the sample $x + \delta$ to be misclassified, and $c$ is a trade-off parameter balancing the distance from the original sample and the strength of the attack. This method provides crafted adversarial examples with minimal distance from the original sample, while making the classification accuracy low.
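The following is a simplified sketch of a CW-style attack in the spirit of Eq.(3), using an ℓ2 penalty and a margin-based objective `f`. The original CW attack additionally uses a change of variables for the box constraint and a binary search over `c`; those refinements are omitted here, so this is an illustrative approximation rather than the exact attack used in the experiments.

```python
import torch

def cw_attack(model, x, y, c=1.0, steps=100, lr=0.01, kappa=0.0):
    """Simplified CW-style attack: minimise ||delta||_2^2 + c * f(x + delta),
    where f is a margin loss that reaches zero once the sample is misclassified."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0.0, 1.0))
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # largest logit among the wrong classes
        wrong_logit = logits.scatter(1, y.unsqueeze(1), float('-inf')).max(dim=1).values
        f = torch.clamp(true_logit - wrong_logit + kappa, min=0.0)
        loss = (delta.flatten(1).norm(dim=1) ** 2 + c * f).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0.0, 1.0).detach()
```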

2.2 Defense Strategies

Adversarial training

In this strategy, adversarial samples are generated by a few known attacking strategies and added to the training data to make the classifier robust against those attacks [7, 18, 29, 19]. A natural drawback is that one cannot foresee and provide adversarial examples of all kinds of attacks before training the classifier, which leaves the classifier open to new or unknown attacking strategies. A unique strategy is to distill the knowledge from a classifier to a student network [16]. It was reported that the student network trained on the classifier outputs (probabilities) tends to generalize well to samples outside but close to the data manifold, and is therefore more robust against adversarial attacks than the original classifier.

Input preprocessing methods

A simple but effective method to defend against adversarial noise is to pre-process the input. It was reported that image transformations (e.g., bit depth reduction, JPEG compression and decompression) can destroy the elaborate spatial coherence hidden in the adversarial noise [20]. Denoising autoencoders (DAE) have been used for the same purpose [21, 22]. [18] pointed out that a DAE trained with the reconstruction error along with the classification error can be considered as a stack of networks, and adversarial examples crafted for such networks tend to contain less adversarial noise.

Generative methods

Fortified networks [19] reconstruct the feature space using a DAE. While the DAE is effectively used to model the data distribution, this method is orthogonal to the method proposed here: the authors consider the DAE to be part of the classifier model, and the DAE is applied in the feature space. The classifier and DAE parameters are trained jointly using the classification loss, the mean squared error of the DAE's reconstruction, and additionally an adversarial loss. [24] proposed an untrained generative network as a deep image prior to reconstruct the image, thereby removing the adversarial noise. A generator pre-trained in an adversarial fashion is used to reconstruct the image in [23, 24]. At test time, the generator input is optimized with a reconstruction error. This corresponds to searching the latent space for an image close to the learned image manifold.

3 Sampling and Denoising

Metropolis-adjusted Langevin Algorithm

The Metropolis-adjusted Langevin algorithm (Mala) is an efficient Markov chain Monte Carlo (MCMC) sampling method which uses the gradient of the energy (the negative log-probability $-\log p(x)$). Sampling is performed sequentially by

$$x_{t+1} = x_t + \alpha \, \nabla_x \log p(x_t) + \nu_t, \qquad (4)$$

where $\alpha$ is the step size, and $\nu_t$ is a random perturbation subject to $\mathcal{N}(0, \delta^2 I)$. By appropriately controlling the step size $\alpha$ and the noise variance $\delta^2$, the sequence $\{x_t\}$ is known to converge to the distribution $p(x)$. (For convergence, a rejection step after Eq.(4) is required. However, it was observed that a variant, called Mala-approx [30], without the rejection step gives reasonable sequences for moderate step sizes. We use Mala-approx in our proposed method.) [30] successfully generated realistic artificial images that follow the natural image distribution with the gradient estimated by denoising autoencoders.
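A minimal sketch of the Mala-approx update in Eq.(4), assuming a `score_fn` that returns an estimate of the score $\nabla_x \log p(x)$ (e.g., from a DAE residual as discussed in the next subsection); the step size and noise level are placeholders, not tuned values.

```python
import torch

def mala_approx(score_fn, x, n_steps=10, step_size=0.01, noise_std=0.01):
    """Mala-approx (Eq. 4): x_{t+1} = x_t + step_size * score(x_t) + noise,
    i.e. Langevin dynamics without the Metropolis rejection step.
    step_size and noise_std are illustrative values only."""
    for _ in range(n_steps):
        x = x + step_size * score_fn(x) + noise_std * torch.randn_like(x)
    return x
```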

Denoising Autoencoders

A denoising autoencoder (DAE) [27, 28] is trained so that data samples contaminated with artificial noise are cleaned. More specifically, it minimizes the reconstruction error

$$\min_{r} \ \mathbb{E}_{p'(x)\,\nu}\Big[\big\| r(x + \nu) - x \big\|^2\Big], \qquad (5)$$

where $\mathbb{E}_{p'(x)\nu}$ denotes the expectation over the empirical distribution and the noise, $x$ is a training sample subject to a distribution $p(x)$, and $\nu$ is artificial Gaussian noise with mean zero and variance $\sigma^2$. $p'(x)$ denotes the empirical (training) distribution of $p(x)$, namely $p'(x) = \frac{1}{N}\sum_{n=1}^{N} \delta(x - x_n)$, where $\{x_n\}_{n=1}^{N}$ are the training samples. [31] discussed the relation between DAEs and contractive autoencoders (CAEs), and proved the following useful property of DAEs:

[31] Under a mild assumption on $r$ (this assumption is not essential, as we show in the proof in Appendix A), the minimizer of the DAE objective (5) satisfies

$$r^*(x) - x = \sigma^2 \, \nabla_x \log p(x) + o(\sigma^2) \qquad (6)$$

as $\sigma^2 \to 0$. The proposition states that a DAE trained with a small $\sigma^2$ can be used to estimate the gradient of the log probability. In a blog [32], it was proved that the residual is proportional to the score function of the noisy input distribution for any $\sigma^2$, i.e.,

$$r^*(x) - x = \sigma^2 \, \nabla_x \log p_{\sigma}(x), \qquad (7)$$

where $p_\sigma$ denotes the distribution of the noisy input $x + \nu$.
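As an illustration of Eqs.(5) and (7), a minimal PyTorch-style sketch of the DAE training loss and of the residual-based score estimate is given below; the network `dae` and the value of `sigma` are assumptions for illustration.

```python
import torch

def dae_loss(dae, x, sigma=0.3):
    """DAE objective (Eq. 5): reconstruct x from x + Gaussian noise."""
    x_noisy = x + sigma * torch.randn_like(x)
    return ((dae(x_noisy) - x) ** 2).flatten(1).sum(dim=1).mean()

def dae_score(dae, x, sigma=0.3):
    """Score estimate from Eq. (7): (r(x) - x) / sigma^2 approximates the
    gradient of the log density of the (noisy) input distribution."""
    return (dae(x) - x) / sigma ** 2
```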

4 Proposed Method

In this section, we propose our novel strategy for defense against adversarial attacks. We first introduce a supervised variant of DAE, and then propose our defense strategy.

4.1 Supervised Denoising Autoencoders (sDAE)

We propose a supervised variant of the DAE, called the supervised denoising autoencoder (sDAE), which is trained by minimizing the following functional with respect to the function $r$:

$$\min_{r} \ \mathbb{E}_{p'(x,y)\,\nu}\Big[\big\| r(x+\nu) - x \big\|^2 - 2\sigma^2 \log \hat{p}\big(y \mid r(x+\nu)\big)\Big]. \qquad (8)$$

The difference from the DAE objective (5) is in the second term, which is proportional to the cross entropy loss of the classifier output. With this additional term, the sDAE provides a gradient estimator of the log-joint-probability averaged over the training (conditional) distribution. Assume that the classifier output accurately reflects the conditional probability of the training data, i.e., $\hat{p}(y \mid x) = p(y \mid x)$; then the minimizer of the sDAE objective (8) satisfies

$$r^*(x) - x = \sigma^2\, \mathbb{E}_{p(y|x)}\big[\nabla_x \log p(x, y)\big] + o(\sigma^2). \qquad (9)$$

(Sketch of proof) Similarly to the analysis in [31], we first Taylor expand $r(x + \nu)$ around $x$, and write the sDAE objective in a form similar to the CAE objective (the objective contains higher order terms than in [31] since we do not make the corresponding simplifying assumption on $r$). After that, applying the second order Euler-Lagrange equation gives Eq.(9) as a stationary condition. The complete proof is given in Appendix A.

Since $\nabla_x \log p(x, y) = \nabla_x \log p(x \mid y) + \nabla_x \log p(y)$, if the label distribution is flat (or, equivalently, the number of training samples is the same for all classes), i.e., $p(y) = 1/K$, the residual of the sDAE gives

$$r^*(x) - x = \sigma^2\, \mathbb{E}_{p(y|x)}\big[\nabla_x \log p(x \mid y)\big] + o(\sigma^2).$$

This is the gradient of the log-conditional-distribution on the label, where the label is estimated from the prior knowledge (the expectation is taken over the training distribution of the label, given $x$). If the number of training samples is non-uniform over the classes, the weights (or the step sizes) should be adjusted so that all classes contribute equally to the sDAE training.
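A hedged sketch of how the sDAE objective (8) could be implemented: the denoising reconstruction loss plus a cross-entropy term of a frozen, pre-trained classifier evaluated on the reconstruction, weighted here by 2σ² to match Eq.(8) as reconstructed above. The exact weighting, network names, and architecture of the authors' implementation may differ.

```python
import torch
import torch.nn.functional as F

def sdae_loss(sdae, classifier, x, y, sigma=0.3):
    """sDAE objective (Eq. 8, sketch): denoising reconstruction loss plus a term
    proportional to the cross entropy of the (frozen) pre-trained classifier
    applied to the reconstruction. The classifier is not updated here."""
    x_noisy = x + sigma * torch.randn_like(x)
    recon = sdae(x_noisy)
    rec_loss = ((recon - x) ** 2).flatten(1).sum(dim=1).mean()
    ce_loss = F.cross_entropy(classifier(recon), y)
    return rec_loss + 2.0 * sigma ** 2 * ce_loss  # 2*sigma^2 weighting is an assumption
```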

4.2 Mala with sDAE (Malade)

By using the sDAE, we perform Mala on the joint distribution to relax the input samples:

$$x_{t+1} = x_t + \alpha_t\, \frac{r^*(x_t) - x_t}{\sigma^2} + \nu_t, \qquad (10)$$

where $(r^*(x) - x)/\sigma^2$ is the score function provided by the sDAE. Malade generates samples at every step using this score function; $\alpha_t$ is the step size, which describes the stride to be taken at every step, and $\nu_t$ is the noise term.

While [19] use a DAE to denoise the features based on the result of [31], their DAE and classifier models are not independently trained. The sDAE in Malade, on the other hand, can be trained under supervision from any pre-trained classifier model and learns the clustering accordingly. Defense-Gan [23] and Invert-and-Classify [24] perform steps of optimization to reconstruct the input; a larger number of steps can prove detrimental, as the generator can then reconstruct the adversarial example as well. Malade is based on sampling using Langevin dynamics: the gradient flow driving the samples becomes close to zero as they approach the data manifold, effectively assuring data fidelity. MagNet [22] uses autoencoders to reconstruct the image as a preprocessing step. Hence, it can be considered a special case of Mala sampling (guided by an unsupervised DAE) with a single step whose step size is chosen such that the update returns the DAE reconstruction, and without the noise term.
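Putting the pieces together, the following sketch illustrates the Malade defense of Eq.(10): N Langevin-style relaxation steps using the sDAE residual as score estimate, followed by classification of the relaxed sample. The constant step-size schedule and the noise level are placeholders, not the authors' settings; Section 5.3 discusses annealing.

```python
import torch

def malade_defend(classifier, sdae, x, n_steps=10, sigma=0.3,
                  step_sizes=None, noise_std=0.01):
    """Malade (Eq. 10): relax the input towards high-density regions of the joint
    distribution using the sDAE residual as score estimate, then classify.
    step_sizes and noise_std are placeholders; see Section 5.3 on annealing."""
    if step_sizes is None:
        step_sizes = [sigma ** 2] * n_steps          # assumption: constant schedule
    for alpha in step_sizes:
        score = (sdae(x) - x) / sigma ** 2           # score estimate from the sDAE residual
        x = x + alpha * score + noise_std * torch.randn_like(x)
    return classifier(x).argmax(dim=1)               # predict on the relaxed sample
```

In this parameterization, a single step with step size σ² and zero noise returns the (s)DAE reconstruction, which corresponds to the MagNet-style preprocessing mentioned above.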

5 Experiments

In this section, we report on the empirical performance of the proposed method in comparison with state-of-the-art baselines. We conducted experiments on the following datasets:

  • Mnist dataset [2] consists of handwritten digits from 0 to 9. The dataset is split into training, validation, and test sets.

  • Cifar-10 dataset [33] consists of natural images with 5,000 training images and 1,000 test images per class and 10 different classes in total. Each image is of dimension 32×32×3.

5.1 Blackbox Attacks

We first evaluate the robustness against blackbox attacks. We trained four classifier models (A–D) with different architectures on the Mnist dataset; the list of models and their architectures can be found in Appendix B. Table 1 summarizes the results. As in previous work on simulating blackbox attacks [7], we allow the attacker to train a substitute model to imitate the classifier's output with samples kept aside from the test set. The adversarial examples crafted by the substitute are then used to attack the classifier. In the blackbox setting, the defense is also considered to be part of the blackbox; so, in effect, the substitute is trained from the output of the defense and the classifier.

For the FGSM attack with ε = 0.3, the attacker is successful in bringing down the accuracy of the classifier: for classifier model D and an attacker with the architecture of A, the accuracy of the classifier on the test samples goes down to 27.74%. For Malade, an sDAE was trained for each classifier model under supervision. For Mala, on the other hand, one unsupervised DAE was trained and applied to all the classifiers. For a given classifier, the number of steps to be taken and the step sizes are fixed for defense. The selection of the step size is crucial for defense and is discussed in Section 5.3.

We present the results of Mala and Malade and compare them with two state-of-the-art defenses, Defense-Gan [23] and MagNet [22], in Table 1. We reproduced the Defense-Gan results using the publicly available code (https://github.com/kabkabm/defensegan). For MagNet, we cleaned samples with the same unsupervised DAE used for Mala. In addition to our experiments on the Mnist dataset, we evaluate the performance of Malade on Cifar-10 and report the results in Table 4. In Appendix D.1 and D.2, success and failure cases of Malade are displayed using images from the Cifar-10 dataset.

Classifier/Substitute Accuracy No Defense Mala Malade Defense-Gan MagNet
A/B 99.17 61.52 93.66 95.27 85.56 94.75
A/C 99.17 54.38 90.55 94.11 86.43 97.63
B/A 99.26 68.77 93.15 95.21 89.72 93.77
B/D 99.26 45.04 92.23 94.16 88.75 97.93
C/B 99.27 61.54 93.81 95.75 86.32 94.41
C/D 99.27 59.30 95.64 97.18 89.62 96.72
D/A 98.34 27.74 89.78 91.32 84.21 90.31
D/D 98.34 25.23 86.74 92.82 87.83 92.59
Table 1: Classification accuracy on the Mnist dataset for blackbox attacks using FGSM with an ε of 0.3. Malade here represents sampling in the image space of the classifier. Mala and Malade sample iteratively for N = 10 steps in all the experimental results listed below.

5.2 Whitebox Attacks

In whitebox settings, the attacker is assumed to have knowledge of the classifier and the defense in addition to the model parameters. We evaluate Mala and Malade on FGSM, R+FGSM and CW attacks. While the perturbations caused by the FGSM and R+FGSM attacks visually affect the image, the adversarial examples crafted by the CW attack are visually as good as real images.

Table 2 provides our results for the whitebox setting along with the baseline methods. MagNet, which performed well in the blackbox setting, suffers severely in the whitebox setting. Defense-Gan, on the other hand, performs very well against the whitebox attacks due to the optimization procedure for each input.

Since there is randomness at each step of the sampling due to the noise term $\nu_t$ in Eq.(10), Malade is robust in the whitebox setting as well. Although Malade performs slightly worse than Defense-Gan under whitebox attacks, it is worth noting that Defense-Gan performs 200 steps of optimization with 10 different random initial seeds to obtain these results. The score function for Malade is obtained from a pretrained sDAE and is hence computationally much less expensive. To compute the results from all our experiments, Malade required only 10 steps. Computation time is compared in Section 5.6.

More importantly, for real world systems the attacker is at liberty to choose the attacking strategy. Hence, it is vital that a defense mechanism be robust to both blackbox and whitebox attacks. Assuming that the attacker can choose the best strategy (taking the minimum accuracy over the blackbox and whitebox strategies), our proposed Malade outperforms both baseline methods.

Attack Classifier Accuracy No Defense Mala Malade Defense-Gan MagNet
FGSM A 99.17 18.36 81.77 86.16 97.03 82.17
FGSM B 99.26 06.07 86.96 95.50 97.14 85.35
FGSM C 99.21 09.54 81.71 96.86 97.07 79.01
FGSM D 98.34 22.24 81.45 94.74 96.43 78.83
R+FGSM A 99.17 12.90 85.09 88.30 97.18 86.19
R+FGSM B 99.26 06.03 88.59 96.27 97.29 88.50
R+FGSM C 99.21 05.14 84.22 97.26 97.32 82.66
R+FGSM D 98.34 21.91 83.57 95.50 94.98 81.69
CW (ℓ2-norm) A 99.17 00.00 67.57 71.60 98.90 00.00
CW (ℓ2-norm) B 99.26 00.00 67.54 91.86 91.60 00.00
CW (ℓ2-norm) C 99.21 28.04 69.49 96.15 98.90 00.00
CW (ℓ2-norm) D 98.34 00.00 67.53 90.25 98.30 00.03
Table 2: Classification accuracy on Mnist dataset for whitebox attacks using FGSM, R+FGSM and CW methods.
ε Accuracy No Defense Malade
0.10 99.26 98.94 98.98
0.15 99.26 97.97 98.71
0.20 99.26 93.90 98.59
0.25 99.26 82.42 97.95
0.30 99.26 61.54 97.18
0.35 99.26 42.45 94.61
0.40 99.26 30.33 86.16
Table 3: For FGSM attacks in the blackbox setting, ε is varied and the classification accuracy on the Mnist dataset is reported for Malade with classifier model C and substitute model D. The step sizes for the defense are fixed for the experiments below.
Attack Method Accuracy No Defense Mala Malade
Blackbox FGSM 83.89 54.32 66.47 68.21
Table 4: Classification accuracy on the Cifar-10 dataset for the blackbox FGSM attack.

5.3 Selection of step size

The score function used by Malade drives the generated sample towards high density regions of the data generating distribution. With the direction provided by the score function, the step size $\alpha_t$ controls the distance to move. With a large $\alpha_t$, there is a possibility of jumping out of the data manifold. Empirically, we found that annealing $\alpha_t$ and the noise level with an offset provided the best results.
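As an illustration only, an annealed schedule with an additive offset could look as follows; the concrete values are hypothetical and not reported in the paper. Such a list could be passed as `step_sizes` to the `malade_defend` sketch in Section 4.2.

```python
def annealed_schedule(n_steps=10, start=0.1, end=0.01, offset=0.005):
    """Geometrically annealed step sizes with an additive offset
    (illustrative values; the exact schedule is not reported in the paper)."""
    ratio = (end / start) ** (1.0 / max(n_steps - 1, 1))
    return [start * ratio ** t + offset for t in range(n_steps)]
```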

5.4 Effect of ε for FGSM attacks

Table 3 provides further results on the blackbox FGSM attacks for classifier model C with ε varied from 0.10 to 0.40. An attacker has several strategies at their disposal to weaken the classifier, and a good defense should be robust to different attacks as well as to different parameters of the attacks. While Malade is robust to varying ε in FGSM attacks, higher values of ε also distort the image in a way that is visually perceptible to the human eye.

5.5 Effect of the noise σ while training the sDAE

The score function provided by a DAE depends on the noise added to the input while training the DAE. While too small values of σ make the score function highly unstable, too large values blur the score. The same is true for the score function used by Malade. In all our experiments, we trained the DAE as well as the sDAE with σ = 0.3. Such a large noise is beneficial for a reliable estimation of the score function. We report in Table 5 our findings on the classification accuracy under a blackbox FGSM attack when varying the amount of noise with which the sDAE was trained.

5.6 Time complexity

In Table 6, we report the time taken by Malade as a defense, which includes computing the score function and generating a sample after N steps. Even as the number of steps N increases, the computational demand remains minimal (measured as the elapsed time in seconds). A single NVIDIA GeForce GTX TITAN V GPU was used to perform this analysis. For Defense-Gan, we show the results corresponding to a single random start (referred to as R=1 in the authors' work). Note that Defense-Gan requires 10 random starts (and hence 10 times more computation) to achieve the accuracy reported in Tables 1 and 2.

σ 0.01 0.1 0.2 0.3 0.4
Accuracy 87.08 91.98 96.91 97.18 96.30
Table 5: The sDAE is trained with varying σ. Classification accuracy on the Mnist dataset for the blackbox FGSM attack with ε = 0.3 is reported.
N Malade Defense-Gan
10 0.008 ± 0.003 0.043 ± 0.027
25 0.016 ± 0.007 0.070 ± 0.003
50 0.028 ± 0.013 0.137 ± 0.004
Table 6: Time, in seconds, required to compute a reconstructed image for MNIST dataset.

6 Conclusion

Adversarial attacks on deep learning models essentially change a sample such that the change cannot be detected by human perception, yet the classifier is compromised and yields a false prediction. Common practice for defense uses a denoising step [22, 20, 23, 24] to alleviate this effect. In this work we have proposed to use the Metropolis-adjusted Langevin algorithm guided by a supervised DAE (Malade). This framework allows us to drive the adversarial samples towards the underlying data manifold and thus towards the high density regions of the data generating distribution which were originally used for training the nonlinear learning machine. This two-step procedure of (1) relaxing and (2) classification gives rise to a high generalization performance that significantly reduces the effect of adversarial attacks. We empirically show that Malade is robust against different attacks in blackbox and whitebox settings; Malade outperforms the state-of-the-art method (Defense-Gan) under the assumption that the attacker can choose among several strategies to weaken the classifier. In addition, Malade is computationally more efficient than the other methods. Future work includes fine tuning of our strategy, e.g., a majority vote over the classifier outputs for a collection of the generated samples after burn-in, analyzing the attacks and defenses using interpretation methods [34], and applying the supervised DAE to other applications such as federated or distributed learning [35, 36].

Acknowledgments

This work was supported by the Fraunhofer Society under the MPI-FhG collaboration project "Theory & Practice for Reduced Learning Machines". This work was also supported by the German Research Foundation (GRK 1589/1) and by the Federal Ministry of Education and Research (BMBF) under the project Berlin Big Data Center (FKZ 01IS14013A).

References

Appendix A Proof of Theorem 4.1

The sDAE is trained so that the following functional is minimized with respect to the function $r$:

$$\min_{r} \ \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}_{\nu}\Big[\big\| r(x_n+\nu) - x_n \big\|^2 - 2\sigma^2 \log \hat{p}\big(y_n \mid r(x_n+\nu)\big)\Big], \qquad (11)$$

which is a finite sample approximation to the true objective

$$\min_{r} \ \mathbb{E}_{p(x,y)\,\nu}\Big[\big\| r(x+\nu) - x \big\|^2 - 2\sigma^2 \log \hat{p}\big(y \mid r(x+\nu)\big)\Big]. \qquad (12)$$

We assume that the functions involved are analytic with respect to $x$. For small $\|\nu\|$, the Taylor expansion of the $j$-th component of $r(x+\nu)$ around $x$ gives

$$r_j(x+\nu) = r_j(x) + \nu^{\top} \nabla r_j(x) + \tfrac{1}{2}\, \nu^{\top} H_{r_j}(x)\, \nu + O(\|\nu\|^3),$$

where $H_f$ denotes the Hessian of a function $f$. Substituting this into Eq.(12), we have

(13)

Thus, the objective functional (12) can be written as

(14)

where

(15)

We can find the optimal function minimizing the functional (14) by using calculus of variations. The optimal function satisfies the following Euler-Lagrange equation: for each $j$,

(16)

where $\nabla r_j$ denotes the gradient (of $r_j$ with respect to $x$) and $H_{r_j}$ denotes its Hessian.

We have

and therefore

where $\delta_{jj'}$ is the Kronecker delta. Substituting the above into Eq.(16), we have

(17)

and therefore

Since

we have

and therefore

Taking the asymptotic term in Eq.(14) into account, we have

which implies that . Thus, we conclude that

$$r^*(x) - x = \sigma^2\, \mathbb{E}_{p(y|x)}\big[\nabla_x \log p(x, y)\big] + o(\sigma^2), \qquad (18)$$

which completes the proof of Theorem 4.1.

Appendix B Model Architecture

In this appendix, we summarize the architectures of the deep neural networks we used in all experiments. Appendix B.1 gives the architectures of the classifier models while Appendix B.2 gives the architecture of the DAE and sDAE models (both have the same architecture).

Conv represents a convolution layer, with the format Conv(number of output filter maps, kernel size, stride size). Linear represents a fully connected layer, with the format Linear(number of output neurons). Relu is a rectified linear unit, and Tanh is a hyperbolic tangent function. Softmax squashes the input tensor to real values in the range [0, 1] that add up to 1. Conv_Transpose is the transpose of the convolution operation, sometimes called deconvolution, with the format Conv_Transpose(number of output filter maps, kernel size, stride size).

b.1 Classifier Architecture

A B C D
Conv() Conv() Conv() Linear()
Relu() Relu() Relu() Relu()
Conv() Conv() Conv() Linear()
Relu() Relu() Relu() Relu()
Linear() Conv() Linear() Linear()
Relu() Relu() Relu() Softmax()
Linear() Linear() Linear()
Softmax() Softmax() Softmax()
Table 7: Architectures of Classifiers A–D.

b.2 DAE (sDAE) Architecture

DAE, sDAE
Conv()
Encoder Tanh()
Conv()
Tanh()
Conv_Transpose()
Tanh()
Decoder Conv_Transpose()
Tanh()
Conv()
Tanh()
Table 8: Architecture of DAE and sDAE.

Appendix C Additional Results on Mnist

Figure 3: Successfully defended images for the Malade algorithm are shown here. The top row shows the steps taken by Malade, while the bottom row represents the steps taken by Mala. The classifier output (prediction probability) for the correct label and that for the wrong label (with the highest probability) are shown below each image.

Appendix D Additional Results on Cifar-10

d.1 Successfully defended images by Malade

Figure 4: The images are ordered from left to right as the original image, the adversarially perturbed image, and the defended image. Given below each image are the predicted class label and the corresponding prediction probability for that class.

d.2 Failure cases of Malade

Figure 5: The images shown here are failure cases of Malade.