VerifiablyRobustNN
Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers have observed an intriguing phenomenon on these IBP-trained networks: CROWN, a bounding method based on tight linear relaxation, often gives very loose bounds on them. We also observe that most neurons become dead during the IBP training process, which could hurt the representation capability of the network. In this paper, we study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when appropriate bounding lines are chosen. We further propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks and obtain lower verified errors than IBP. We also design a new activation function, the parameterized ramp function (ParamRamp), which allows more diversity of neuron status than ReLU. We conduct extensive experiments on MNIST, CIFAR-10 and TinyImageNet with the ParamRamp activation and achieve state-of-the-art verified robustness. Code and the appendix are available at https://github.com/ZhaoyangLyu/VerifiablyRobustNN.
Deep neural networks achieve state-of-the-art performance in many tasks, e.g., image classification, object detection, and instance segmentation, but they are vulnerable to adversarial attacks. A small perturbation that is imperceptible to humans can mislead a neural network’s prediction [szegedy2013intriguing, carlini2017towards, athalye2018obfuscated, kurakin2016adversarial, carlini2017adversarial]. To mitigate this problem, Madry et al. [madry2018towards] develop an effective framework to train robust neural networks. They formulate adversarial training as a robust optimization problem. Specifically, they use projected gradient descent (PGD) to find the worst-case adversarial example near the original image and then minimize the loss at this point during training. Networks trained under this framework achieve state-of-the-art robustness under many attacks [zhang2019theoretically, Wang2020Improving, rice2020overfitting]. However, these networks are only empirically robust, not verifiably robust: they become vulnerable when stronger attacks are applied [wang2018mixtrain, croce2020reliable, tjeng2019evaluating].
This leads to the development of robustness verification, which aims to provide a certificate that a neural network gives consistent predictions for all inputs in some set, usually an $\ell_\infty$ ball around a clean image. The key of robustness verification is to compute the lower and upper bounds of the output logits when the input can take any value in this ball. The exact bounds can be computed through Satisfiability Modulo Theory [katz2017reluplex] or by solving a Mixed Integer Linear Programming (MILP) problem
[tjeng2019evaluating, cheng2017maximum]. Relaxed bounds can be obtained by reducing the bound computation problem to a linear programming (LP) problem [wong2018provable] or a semidefinite programming (SDP) problem [dathathri2020enabling]. However, these programming-based methods are expensive and difficult to scale to large networks. To this end, another line of work makes linear relaxations of the nonlinear activation functions in a network [singh2018fast, singh2019abstract, wang2018efficient, weng2018towards, zhang2018crown, ko2019popqorn]. Figure 1 illustrates different strategies for making linear relaxations of a ReLU neuron. These methods can compute bounds analytically and efficiently. In this paper, we focus on the study of CROWN [zhang2018crown], which computes relatively tight bounds while being fast. Other similar approaches [singh2018fast, singh2019abstract, wang2018efficient, weng2018towards] are either special cases of CROWN or different views of it, as demonstrated by Salman et al. [salman2019convex]. Wong et al. [wong2018provable]
propose to incorporate bounds computed by the aforementioned linear relaxation based methods in the loss function to train verifiably robust networks. Similar approaches are proposed in several other works
[mirman2018differentiable, dvijotham2018training, raghunathan2018certified, wang2018mixtrain]. However, these methods generally bring heavy computational overhead to the original training process. Gowal et al. [gowal2019scalable] propose to use a simple technique, interval bound propagation (IBP), to compute bounds. IBP is fast and can scale to large networks. Despite being loose, IBP outperforms previous linear relaxation based methods in terms of training verifiably robust networks. Zhang et al. [zhang2020towards] further improve this method by combining IBP with the tighter linear relaxation based method, CROWN. The resulting method is named CROWN-IBP. They use CROWN-IBP to compute bounds in the initial training phase and achieve the lowest verified errors.

We notice that both IBP-trained networks [gowal2019scalable] and CROWN-IBP-trained networks [zhang2020towards] are verified by IBP after training. One natural question is whether we can use tighter linear relaxation based methods to verify these networks and achieve lower verified errors. Surprisingly, Zhang et al. [zhang2020towards] find that the typically much tighter method, CROWN, gives very loose bounds on IBP-trained networks. It seems that IBP-trained networks have very different verification properties from normally trained networks. We also find that CROWN cannot verify large networks due to its high memory cost. Another phenomenon we observe on IBP- and CROWN-IBP-trained networks is that most neurons become dead during training. We believe this could restrict the representation capability of the network and thus hurt its performance. In this paper, we make the following contributions to tackle these problems:
We develop a relaxed version of CROWN, linear bound propagation (LBP), which has better scalability. We demonstrate that LBP obtains tighter bounds than IBP on both normally trained and IBP-trained networks.
We prove that IBP is a special case of CROWN and LBP. The reason CROWN gives looser bounds than IBP on IBP-trained networks is that CROWN chooses bad bounding lines when making linear relaxations of the nonlinear activation functions. We prove that CROWN and LBP are always tighter than IBP if they adopt the tight strategy for choosing bounding lines shown in Figure 1.
We propose a new activation function, the parameterized ramp function (ParamRamp), for training verifiably robust networks. Compared with ReLU, where most neurons become dead during training, ParamRamp brings more diversity of neuron status. Our experiments demonstrate that networks with ParamRamp activation achieve state-of-the-art verified robustness on MNIST, CIFAR-10 and TinyImageNet.
In this section, we first define an $m$-layer feedforward neural network and briefly introduce the concept of robustness verification. Next, we present interval bound propagation, which is used to train the networks with the best verified errors. Finally, we review two state-of-the-art verifiable adversarial training methods [gowal2019scalable, zhang2020towards] that are most related to our work.

An $m$-layer feedforward neural network is defined as

(1)  $z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}, \qquad a^{(l)} = \sigma\big(z^{(l)}\big), \qquad l = 1, 2, \ldots, m,$
where $W^{(l)}$, $b^{(l)}$, $a^{(l)}$ and $z^{(l)}$ are the weight matrix, bias, activation and pre-activation of the $l$-th layer, respectively, and $\sigma$ is the elementwise activation function. Note that we always assume $\sigma$ is monotonically increasing in the rest of the paper. $a^{(0)} = x$ and $z^{(m)}$ are the input and output of the network. We use $n_l$ to denote the number of neurons in the $l$-th layer, so $n_0$ is the dimension of the input. Although this network only contains fully connected layers, our discussions readily generalize to convolutional layers, as they are essentially linear transformations as well
[boopathy2019cnn].

Robustness verification aims to guarantee that a neural network gives consistent predictions for all inputs in some set, typically an $\ell_p$ ball around the original input: $B_{p,\epsilon}(x_0) = \{x \mid \|x - x_0\|_p \le \epsilon\}$, where $x_0$ is the clean image. The key step is to compute the lower and upper bounds of the output logits (or the lower bound of the margin between the ground-truth class and the other classes, as defined in (4)) when the input can take any value in $B_{p,\epsilon}(x_0)$. We can guarantee that the network gives correct predictions for all inputs in $B_{p,\epsilon}(x_0)$ if the lower bound of the ground-truth class logit is larger than the upper bounds of all the other class logits (or the lower bound of the margin is greater than $0$). The verified robustness of a network is usually measured by the verified error: the percentage of images for which we cannot guarantee that the network always gives correct predictions for inputs in $B_{p,\epsilon}(x_0)$. Note that the verified error depends not only on the network and the allowed perturbation of the input, but also on the method used to compute bounds for the output. CROWN and IBP are the two bounding techniques most related to our work. We briefly walk through CROWN in Section 3 and introduce IBP right below.
Assume we know the lower and upper bounds of the activation of the $(l-1)$-th layer: $\underline{a}^{(l-1)} \le a^{(l-1)} \le \overline{a}^{(l-1)}$. Then IBP computes the bounds of $z^{(l)}$, $\underline{z}^{(l)}$ and $\overline{z}^{(l)}$, in the following way:

(2)  $\underline{z}^{(l)} = \mathrm{ReLU}\big(W^{(l)}\big)\, \underline{a}^{(l-1)} + \mathrm{neg}\big(W^{(l)}\big)\, \overline{a}^{(l-1)} + b^{(l)}, \qquad \overline{z}^{(l)} = \mathrm{ReLU}\big(W^{(l)}\big)\, \overline{a}^{(l-1)} + \mathrm{neg}\big(W^{(l)}\big)\, \underline{a}^{(l-1)} + b^{(l)},$

where $\mathrm{ReLU}$ is the elementwise ReLU function and $\mathrm{neg}$ is the elementwise version of the function $\mathrm{neg}(x) = x$ if $x \le 0$, and $\mathrm{neg}(x) = 0$ otherwise. Next, the bounds of $a^{(l)}$, $\underline{a}^{(l)}$ and $\overline{a}^{(l)}$, can be computed by

(3)  $\underline{a}^{(l)} = \sigma\big(\underline{z}^{(l)}\big), \qquad \overline{a}^{(l)} = \sigma\big(\overline{z}^{(l)}\big),$

since $\sigma$ is monotonically increasing.
IBP repeats the above procedure from the first layer and computes bounds layer by layer until the final output, as shown in Figure 2(b). The bounds of $a^{(0)} = x$ are known directly if the allowed perturbation is in an $\ell_\infty$ ball; closed-form bounds of $z^{(1)}$ can be computed using Hölder’s inequality, as shown in (13), if the allowed perturbation is in a general $\ell_p$ ball.
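As a concrete illustration of (2) and (3), interval bounds can be propagated through a fully connected network in a few lines of NumPy. This is a sketch of the technique, not the authors' implementation; the function names are ours, and the activation is assumed monotone so that (3) applies.

```python
import numpy as np

def ibp_layer(W, b, a_lo, a_hi):
    """Propagate interval bounds through one affine layer, as in Eq. (2)."""
    W_pos = np.maximum(W, 0.0)   # ReLU(W): keeps positive entries of W
    W_neg = np.minimum(W, 0.0)   # neg(W): keeps negative entries of W
    z_lo = W_pos @ a_lo + W_neg @ a_hi + b
    z_hi = W_pos @ a_hi + W_neg @ a_lo + b
    return z_lo, z_hi

def ibp_forward(weights, biases, x0, eps, act=lambda z: np.maximum(z, 0.0)):
    """Bounds of the output logits for all x with ||x - x0||_inf <= eps.

    A monotone activation is applied elementwise to the bounds, Eq. (3)."""
    a_lo, a_hi = x0 - eps, x0 + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        z_lo, z_hi = ibp_layer(W, b, a_lo, a_hi)
        if i < len(weights) - 1:
            a_lo, a_hi = act(z_lo), act(z_hi)
    return z_lo, z_hi
```

Sound but loose: every layer relaxes the exact reachable set to an axis-aligned box, which is why the bounds can blow up with depth.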
Verifiable adversarial training first uses some robustness verification method to compute a lower bound of the margin between the ground-truth class $y$ and the other classes:

(4)  $\underline{m}(x_0, \epsilon) \ \le\ m(x), \qquad m_i(x) = z^{(m)}_y(x) - z^{(m)}_i(x), \qquad \forall\, x \in B_{p,\epsilon}(x_0).$

Here we use "$\le$" to denote elementwise less than or equal to. For simplicity, we do not differentiate operators between vectors and scalars in the rest of the paper when no ambiguity is caused. Gowal et al.
[gowal2019scalable] propose to use IBP to compute the lower bound $\underline{m}$ and to minimize the following loss during training:

(5)  $\mathbb{E}_{(x_0, y) \sim \mathcal{D}} \Big[ \kappa\, \ell\big(z^{(m)}(x_0), y\big) + (1 - \kappa)\, \ell\big(-\underline{m}(x_0, \epsilon), y\big) \Big],$
where $\mathcal{D}$ is the underlying data distribution, $\kappa$ is a hyperparameter that balances the two terms of the loss, and $\ell$ is the normal cross-entropy loss. This loss encourages the network to maximize the margin between the ground-truth class and the other classes. Zhang et al. [zhang2020towards] argue that the IBP bound is loose during the initial phase of training, which makes training unstable and hard to tune. They propose to use a convex combination of the IBP bound and the CROWN-IBP bound as the lower bound to provide supervision in the initial phase of training:
(6)  $\underline{m} = \beta\, \underline{m}_{\mathrm{CROWN\text{-}IBP}} + (1 - \beta)\, \underline{m}_{\mathrm{IBP}}.$
The loss they use is the same as (5) except that $\underline{m}$ is replaced with the combination defined in (6). They design a schedule for $\beta$: it starts at $1$ and decreases to $0$ during training. Their approach achieves state-of-the-art verified errors on the MNIST and CIFAR-10 datasets. Xu et al. [xu2020automatic] propose a loss-fusion technique that speeds up the training process of CROWN-IBP, which enables them to train large networks on large datasets such as TinyImageNet and Downscaled ImageNet.
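A minimal sketch of the objective in (5) with the combined bound of (6). It assumes the common formulation in which the negated margin lower bound (with entry $y$ equal to 0) is fed to cross-entropy as worst-case pseudo-logits; the function names and interface are ours, not the papers'.

```python
import numpy as np

def softmax_ce(logits, y):
    """Cross-entropy of a softmax over `logits` against label index y."""
    z = logits - logits.max()          # stabilize the exponentials
    return -(z[y] - np.log(np.exp(z).sum()))

def verifiable_loss(clean_logits, m_ibp, m_crown_ibp, y, kappa, beta):
    """Loss of Eq. (5) using the convex combination of Eq. (6).

    m_ibp / m_crown_ibp: lower bounds of the margin z_y - z_i (entry y is 0).
    kappa balances clean vs. robust terms; beta is annealed from 1 to 0."""
    m_lower = beta * m_crown_ibp + (1 - beta) * m_ibp
    robust_term = softmax_ce(-m_lower, y)      # worst-case pseudo-logits
    return kappa * softmax_ce(clean_logits, y) + (1 - kappa) * robust_term
```

A tighter (larger) margin lower bound yields a smaller robust term, so tightening the verifier directly eases the training objective.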
CROWN is considered an efficient robustness verification method compared with LP-based methods [weng2018towards, zhang2018crown, lyu2020fastened], but these works only test CROWN on small multi-layer perceptrons with at most several thousand neurons in each hidden layer. Our experiments suggest that CROWN scales badly to large convolutional neural networks (CNNs): it consumes a prohibitive amount of memory when verifying even a single image from CIFAR-10 on a small CNN (see its detailed structure in Appendix B.1), which prevents it from utilizing modern GPUs to speed up computation. Therefore, it is crucial to improve CROWN's scalability before employing it on large networks. To this end, we develop a relaxed version of CROWN, named Linear Bound Propagation (LBP), whose computation complexity and memory cost grow linearly with the size of the network. We first walk through the deduction process of the original CROWN.

Suppose we want to compute a lower bound for the quantity $q = W\,\sigma(z^{(l)}) + b$, where $W$ and $b$ are the weight and bias that connect $a^{(l)} = \sigma(z^{(l)})$ to the quantity of interest. For example, $q$ becomes the margin $m$ in (4) if we choose appropriate $W$ and $b$ and set $l = m - 1$. Assume we already know the bounds of the pre-activations of the first $l$ layers:
(7)  $\underline{z}^{(k)} \ \le\ z^{(k)} \ \le\ \overline{z}^{(k)}, \qquad k = 1, 2, \ldots, l.$
Next, CROWN finds two linear functions of $z^{(l)}$ to bound $\sigma(z^{(l)})$ in the intervals determined by $\underline{z}^{(l)}$ and $\overline{z}^{(l)}$:
(8)  $h^{(l),L}\big(z^{(l)}\big) \ \le\ \sigma\big(z^{(l)}\big) \ \le\ h^{(l),U}\big(z^{(l)}\big), \qquad \forall\, \underline{z}^{(l)} \le z^{(l)} \le \overline{z}^{(l)},$
where
(9)  $h^{(l),L}\big(z^{(l)}\big) = \alpha^{(l),L} \odot z^{(l)} + \beta^{(l),L}, \qquad h^{(l),U}\big(z^{(l)}\big) = \alpha^{(l),U} \odot z^{(l)} + \beta^{(l),U}.$
Here we use "$\odot$" to denote elementwise product. $\alpha^{(l),L}$, $\beta^{(l),L}$, $\alpha^{(l),U}$ and $\beta^{(l),U}$ are constant vectors of the same dimension as $z^{(l)}$. We use $L$ and $U$ in the superscripts to denote quantities related to lower bounds and upper bounds, respectively, and $L/U$ to denote "lower bounds or upper bounds". The linear functions $h^{(l),L}$ and $h^{(l),U}$ are also called bounding lines, as they bound the nonlinear function in the intervals determined by $\underline{z}^{(l)}$ and $\overline{z}^{(l)}$. See Figure 1 for a visualization of different strategies to choose bounding lines.
Next, CROWN utilizes these bounding lines to build a linear function of $z^{(l)}$ that lower bounds $q$:
(10)  $q \ \ge\ \Lambda^{(l)} z^{(l)} + \lambda^{(l)}.$
See the detailed formulas of $\Lambda^{(l)}$ and $\lambda^{(l)}$ in Appendix A.1. In the same manner, CROWN builds a linear function of $z^{(l-1)}$ to lower bound $q$ if bounds of $z^{(l-1)}$ are known. CROWN repeats this procedure: it backpropagates layer by layer until the first layer, as shown in Figure 2(a):
(11)  $q \ \ge\ \Lambda^{(l)} z^{(l)} + \lambda^{(l)} \ \ge\ \Lambda^{(l-1)} z^{(l-1)} + \lambda^{(l-1)} \ \ge\ \cdots \ \ge\ \Lambda^{(1)} z^{(1)} + \lambda^{(1)}.$
Notice $z^{(1)} = W^{(1)} x + b^{(1)}$. We plug it into the last term of (11) and obtain a linear function of $x$:
(12)  $q \ \ge\ \Lambda^{(1)} \big(W^{(1)} x + b^{(1)}\big) + \lambda^{(1)} \ =\ \Omega\, x + \omega,$
where $\Omega = \Lambda^{(1)} W^{(1)}$ and $\omega = \Lambda^{(1)} b^{(1)} + \lambda^{(1)}$. Now we can compute the closed-form lower bound of $q$ through Hölder’s inequality:
(13)  $q \ \ge\ \Omega\, x_0 + \omega - \epsilon\, \|\Omega\|_{p^*}, \qquad \forall\, x \in B_{p,\epsilon}(x_0),$
where $1/p + 1/p^* = 1$ and $\|\Omega\|_{p^*}$ denotes a column vector composed of the $\ell_{p^*}$ norm of every row in $\Omega$. We can compute a linear function of $x$ that upper bounds $q$ in the same manner and then compute its closed-form upper bound. See details in Appendix A.1.
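The concretization step of (13) is a one-liner once the linear function is known. The following NumPy sketch is illustrative (the name `concretize_lower` is ours); it computes the row-wise dual norm described above.

```python
import numpy as np

def concretize_lower(Omega, omega, x0, eps, p=np.inf):
    """Closed-form lower bound of q >= Omega @ x + omega over
    the ball ||x - x0||_p <= eps, as in Eq. (13).

    By Holder's inequality, the worst case of Omega @ (x - x0) is
    -eps times the dual-norm (1/p + 1/p* = 1) of every row of Omega."""
    if np.isinf(p):
        p_dual = 1.0
    elif p == 1:
        p_dual = np.inf
    else:
        p_dual = p / (p - 1.0)
    row_dual_norms = np.linalg.norm(Omega, ord=p_dual, axis=1)
    return Omega @ x0 + omega - eps * row_dual_norms
```

For $p = \infty$ the dual norm is the $\ell_1$ norm of each row, which recovers the familiar interval-arithmetic worst case.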
Let us review the process of computing bounds for the quantity of interest at layer $l$: it requires knowing the bounds of all previous layers, $\underline{z}^{(k)} \le z^{(k)} \le \overline{z}^{(k)}$ for $k = 1, \ldots, l$. We can fulfill this requirement by starting from the first layer $z^{(1)}$ and computing bounds layer by layer in a forward manner until the $l$-th layer. Therefore, the computation cost of CROWN grows quadratically with the number of layers, since bounding the $l$-th layer backpropagates through all preceding layers; and its memory cost scales with the products of layer widths $n_k n_l$, because a weight matrix must be recorded between every pair of layers, as shown in (11) ($n_l$ denotes the number of neurons in the $l$-th layer). This makes CROWN difficult to scale to large networks. To this end, we propose a relaxed version of CROWN in the next paragraph.
As in the CROWN deduction above, suppose we want to compute bounds for the quantity $q = W\,\sigma(z^{(l)}) + b$. In the original CROWN process, we first compute linear functions of $x$ that bound the pre-activations of the first $l$ layers:

(14)  $\Omega^{(k),L} x + \omega^{(k),L} \ \le\ z^{(k)} \ \le\ \Omega^{(k),U} x + \omega^{(k),U}, \qquad k = 1, 2, \ldots, l,$

and use these linear functions of $x$ to compute closed-form bounds for the first $l$ layers. We argue that in the backpropagation process in (11), we do not need to backpropagate all the way to the first layer: we can stop at any intermediate layer and plug in the linear functions in (14) of that layer to get a linear function of $x$ that bounds $q$. Specifically, assume we decide to backpropagate $s$ layers:
(15)  $q \ \ge\ \Lambda^{(l)} z^{(l)} + \lambda^{(l)} \ \ge\ \cdots \ \ge\ \Lambda^{(l-s+1)} z^{(l-s+1)} + \lambda^{(l-s+1)}.$
We already know the linear bounds of $z^{(l-s+1)}$ in terms of $x$ from (14). We can directly plug them into (15) to obtain a lower bound of $q$:
(16)  $q \ \ge\ \mathrm{ReLU}\big(\Lambda^{(l-s+1)}\big)\big(\Omega^{(l-s+1),L} x + \omega^{(l-s+1),L}\big) + \mathrm{neg}\big(\Lambda^{(l-s+1)}\big)\big(\Omega^{(l-s+1),U} x + \omega^{(l-s+1),U}\big) + \lambda^{(l-s+1)}.$
The right-hand side of (16) is already a linear function of $x$, so we can compute the closed-form lower bound of $q$ in the same manner as in (13). The upper bound of $q$ can also be computed by backpropagating only $s$ layers in the same gist.
We have shown that we can backpropagate only $s$ layers, instead of all the way to the first layer, when computing bounds for the quantity of interest. In fact, we can do the same when computing bounds for any layer: if the layer index is less than or equal to $s$, we simply backpropagate to the first layer. In other words, we backpropagate at most $s$ layers when computing bounds for any layer in the process of CROWN. We call this relaxed version of CROWN Relaxed-CROWN. See a comparison of CROWN and Relaxed-CROWN in Figure 2(a).
We are particularly interested in the special case of Relaxed-CROWN in which we backpropagate only one layer in the process of CROWN. This leads to the following theorem.
Theorem 1. Assume we already know two linear functions of $x$ that bound $z^{(l-1)}$: $\Omega^{(l-1),L} x + \omega^{(l-1),L} \le z^{(l-1)} \le \Omega^{(l-1),U} x + \omega^{(l-1),U}$. We then compute the closed-form bounds of $z^{(l-1)}$ from these two linear functions, and choose two linear functions to bound $\sigma(z^{(l-1)})$ as shown in (9). Then, under the condition that $\alpha^{(l-1),L} \ge 0$ and $\alpha^{(l-1),U} \ge 0$, $z^{(l)}$ can be bounded by
(17)  $\Omega^{(l),L} x + \omega^{(l),L} \ \le\ z^{(l)} \ \le\ \Omega^{(l),U} x + \omega^{(l),U},$
where

(18)  $\Omega^{(l),L} = \big(\mathrm{ReLU}(W^{(l)}) \circ \alpha^{(l-1),L}\big)\, \Omega^{(l-1),L} + \big(\mathrm{neg}(W^{(l)}) \circ \alpha^{(l-1),U}\big)\, \Omega^{(l-1),U}, \qquad \Omega^{(l),U} = \big(\mathrm{ReLU}(W^{(l)}) \circ \alpha^{(l-1),U}\big)\, \Omega^{(l-1),U} + \big(\mathrm{neg}(W^{(l)}) \circ \alpha^{(l-1),L}\big)\, \Omega^{(l-1),L},$
where the operator "$\circ$" between a matrix $A$ and a vector $v$ is defined as $(A \circ v)_{ij} = A_{ij} v_j$.
We refer readers to Appendix A.3 for the formulas of $\omega^{(l),L}$ and $\omega^{(l),U}$ and for the proof of Theorem 1. Note that the nonnegativity condition in Theorem 1 is not necessary. We impose it because it simplifies the expressions, and it generally holds true for commonly chosen bounding lines.
The significance of Theorem 1 is that it allows us to compute bounds starting from the first layer, $z^{(1)} = W^{(1)} x + b^{(1)}$, which is exactly linear in $x$, and then compute bounds layer by layer in a forward manner until the final output, just like IBP. The computation complexity now grows linearly with the number of layers, and the memory cost is reduced to storing one matrix of size $n_l \times n_0$ from the input to every intermediate layer $z^{(l)}$. We call this method Linear Bound Propagation (LBP); it is equivalent to Relaxed-CROWN with one-layer backpropagation. See a comparison of LBP and IBP in Figure 2(b). As expected, there is no free lunch: as we show in the next section, the reduction in computation and memory cost makes LBP less tight than CROWN. Although developed from a different perspective, LBP is similar to the forward mode in the work [xu2020automatic]; see a detailed comparison between them in Appendix A.3.
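To make the forward propagation concrete, the following NumPy sketch pushes linear (in $x$) bounds through one activation-plus-affine step, assuming nonnegative bounding-line slopes as in Theorem 1. It is our illustrative version, not the paper's implementation, and the exact update formulas of Appendix A.3 may differ in form.

```python
import numpy as np

def lbp_step(W, b, OmL, omL, OmU, omU, aL, bL, aU, bU):
    """Push linear-in-x bounds through activation + next affine layer.

    Given  OmL @ x + omL <= z <= OmU @ x + omU  and elementwise bounding
    lines  aL*z + bL <= sigma(z) <= aU*z + bU  with aL, aU >= 0
    (Theorem 1's condition), return linear bounds of W @ sigma(z) + b."""
    # linear bounds of sigma(z); valid because the slopes are nonnegative
    sL_Om, sL_om = aL[:, None] * OmL, aL * omL + bL
    sU_Om, sU_om = aU[:, None] * OmU, aU * omU + bU
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)  # split W by sign
    OmL2 = Wp @ sL_Om + Wn @ sU_Om
    omL2 = Wp @ sL_om + Wn @ sU_om + b
    OmU2 = Wp @ sU_Om + Wn @ sL_Om
    omU2 = Wp @ sU_om + Wn @ sL_om + b
    return OmL2, omL2, OmU2, omU2
```

Each step keeps only one pair of matrices of size (layer width × input dimension), which is what makes the memory cost linear in depth.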
Zhang et al. [zhang2020towards] propose to compute bounds for the first $m-1$ layers using IBP and then use CROWN to compute bounds for the last layer, obtaining tighter bounds of the last layer. The resulting method is named CROWN-IBP. In the same gist, we can use LBP to compute bounds for the first $m-1$ layers and then use CROWN to compute bounds for the last layer. We call this method CROWN-LBP.
In Section 3, we develop a relaxed version of CROWN, LBP. In this section, we study the relationship between IBP, LBP and CROWN, and investigate why CROWN gives looser bounds than IBP on IBP-trained networks [zhang2020towards].
First, we prove that IBP is a special case of CROWN and LBP in which the bounding lines are chosen as constants, as shown in Figure 1(a):

(19)  $h^{(l),L}\big(z^{(l)}\big) = \sigma\big(\underline{z}^{(l)}\big), \qquad h^{(l),U}\big(z^{(l)}\big) = \sigma\big(\overline{z}^{(l)}\big).$
In other words, CROWN and LBP degenerate to IBP when they choose constant bounding lines for every neuron in every layer. See the proof of this conclusion in Appendix A.5. On the other hand, Lyu et al. [lyu2020fastened] prove that tighter bounding lines lead to tighter bounds in the process of CROWN, where the bounding lines $h_1$ are defined to be tighter than $h_2$ in the interval $[\underline{z}, \overline{z}]$ if

(20)  $h_1^{L}(z) \ \ge\ h_2^{L}(z) \quad \text{and} \quad h_1^{U}(z) \ \le\ h_2^{U}(z), \qquad \forall\, \underline{z} \le z \le \overline{z}.$
We prove that this also holds for LBP in Appendix A.3. Therefore, if CROWN and LBP adopt the tight strategy in Figure 1(b) to choose bounding lines, which are guaranteed to be tighter than the constant bounding lines in a specified interval, CROWN and LBP are guaranteed to give tighter bounds than IBP. We formalize this conclusion, together with the corresponding conclusions for CROWN-IBP and CROWN-LBP, in the following theorem.
Theorem 2. Assume the closed-form bounds of the last layer computed by IBP, CROWN-IBP, LBP, CROWN-LBP and CROWN are $\underline{z}_{\mathrm{IBP}}, \overline{z}_{\mathrm{IBP}}$; $\underline{z}_{\mathrm{C\text{-}IBP}}, \overline{z}_{\mathrm{C\text{-}IBP}}$; $\underline{z}_{\mathrm{LBP}}, \overline{z}_{\mathrm{LBP}}$; $\underline{z}_{\mathrm{C\text{-}LBP}}, \overline{z}_{\mathrm{C\text{-}LBP}}$; $\underline{z}_{\mathrm{CROWN}}, \overline{z}_{\mathrm{CROWN}}$, respectively, and that CROWN-IBP, LBP, CROWN-LBP and CROWN adopt the tight strategy to choose bounding lines, as shown in Figure 1(b). Then we have
(21)  $\underline{z}_{\mathrm{IBP}} \ \le\ \{\underline{z}_{\mathrm{C\text{-}IBP}},\ \underline{z}_{\mathrm{LBP}}\} \ \le\ \underline{z}_{\mathrm{C\text{-}LBP}} \ \le\ \underline{z}_{\mathrm{CROWN}}, \qquad \overline{z}_{\mathrm{CROWN}} \ \le\ \overline{z}_{\mathrm{C\text{-}LBP}} \ \le\ \{\overline{z}_{\mathrm{C\text{-}IBP}},\ \overline{z}_{\mathrm{LBP}}\} \ \le\ \overline{z}_{\mathrm{IBP}},$
where the sets in the inequalities mean that the inequalities hold true for any element in the sets.
See the proof of Theorem 2 in Appendix A.6. Now we can answer the question raised at the beginning of this section. The reason that CROWN gives looser bounds than IBP [zhang2020towards] is that CROWN uses the adaptive strategy, shown in Figures 1(c) and 1(d), to choose bounding lines by default. The lower bounding line chosen by the adaptive strategy for an unstable neuron is not always tighter than the one chosen by the constant strategy adopted by IBP. Zhang et al. [zhang2018crown] empirically show that the adaptive strategy gives tighter bounds for normally trained networks. An intuitive explanation is that this strategy minimizes the area between the lower and upper bounding lines in the interval, but there is no guarantee behind this intuition. On the other hand, for IBP-trained networks, the loss is optimized at the point where bounding lines are chosen as constants. Therefore, we should choose the same constant bounding lines, or tighter ones, for LBP or CROWN when verifying IBP-trained networks, which is exactly what the tight strategy does.
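For a ReLU neuron with pre-activation interval $[lo, hi]$, the strategies of Figure 1 can be written down explicitly. The sketch below reflects our reading of them: for unstable neurons, the tight strategy keeps the constant lower line (the only choice that dominates it pointwise) and tightens only the upper line to the chord, which is what makes it pointwise tighter than the constant (IBP) choice in the sense of (20); the paper's exact parameterization may differ.

```python
import numpy as np

def relu_bounding_lines(lo, hi, strategy="tight"):
    """Slopes/intercepts (aL, bL, aU, bU) such that
    aL*z + bL <= relu(z) <= aU*z + bU for all z in [lo, hi]."""
    dead, alive = hi <= 0, lo >= 0
    unstable = ~dead & ~alive
    width = np.maximum(hi - lo, 1e-12)
    chord = np.where(unstable, hi / width, 0.0)   # slope of the upper chord
    if strategy == "constant":   # IBP: horizontal lines at relu(lo), relu(hi)
        aL = np.zeros_like(lo); bL = np.maximum(lo, 0.0)
        aU = np.zeros_like(lo); bU = np.maximum(hi, 0.0)
    elif strategy == "tight":    # identity on stable neurons, chord above
        aL = alive.astype(float)
        bL = np.zeros_like(lo)
        aU = np.where(unstable, chord, aL)
        bU = np.where(unstable, -chord * lo, 0.0)
    else:                        # adaptive: lower slope 0 or 1, minimizing area
        aL = np.where(unstable, (hi > -lo).astype(float), alive.astype(float))
        bL = np.zeros_like(lo)
        aU = np.where(unstable, chord, alive.astype(float))
        bU = np.where(unstable, -chord * lo, 0.0)
    return aL, bL, aU, bU
```

Note that the adaptive lower line with slope 1 dips below 0 on part of the interval, so it does not dominate the constant line pointwise; this is exactly the failure mode on IBP-trained networks.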


Table 1. Verified errors and lower bounds on the IBP-trained DM-large network under the adaptive and tight strategies.

Adaptive         | IBP    | C.IBP   | LBP       | C.LBP
Verified Err (%) | 70.10  | 85.66   | 100       | 99.99
Lower Bound      | 2.1252 | -12.016 | -2.4586E5 | -1.5163E5

Tight            | IBP    | C.IBP   | LBP       | C.LBP
Verified Err (%) | 70.10  | 70.01   | 70.05     | 69.98
Lower Bound      | 2.1252 | 2.1520  | 2.1278    | 2.1521

We conduct experiments to verify our theory. We first compare IBP, LBP and CROWN on a normally trained MNIST classifier (see its detailed structure in Appendix B.1). Results are shown in Figure 3. The average verification times for a single image with IBP, CROWN-IBP, LBP, CROWN-LBP and CROWN are 0.006s, 0.011s, 0.027s, 0.032s and 0.25s, respectively, tested on one NVIDIA GeForce GTX TITAN X GPU. We can see that LBP is tighter than IBP while being faster than CROWN, and that the adaptive strategy usually obtains tighter bounds than the tight strategy on this normally trained network. See more comparisons of these methods in Appendix B.2.
Next, we compare them on an IBP-trained network. The network we use is called DM-large (see its detailed structure in Appendix B.1), which is the same model used in the works [zhang2020towards, gowal2019scalable]. Results are shown in Table 1. We do not test CROWN on this network because it exceeds GPU memory (12 GB) and takes about half an hour to verify a single image on one Intel Xeon E5-2650 v4 CPU. We can see that CROWN-IBP, LBP and CROWN-LBP give worse verified errors than IBP when adopting the adaptive strategy to choose bounding lines, but give better results when adopting the tight strategy, as guaranteed by Theorem 2. However, the improvement of LBP and CROWN-LBP over IBP and CROWN-IBP is small compared with the normally trained network. We investigate this phenomenon in the next section.

This section starts by investigating the phenomenon discovered in Section 4: why the improvement of LBP and CROWN-LBP over IBP and CROWN-IBP is so small on the IBP-trained network compared with the normally trained network. Studying this phenomenon inspires us to design a new activation function that achieves lower verified errors.
We argue that the limited improvement of LBP and CROWN-LBP is because most neurons are dead in IBP-trained networks. Recall that we define three statuses of a ReLU neuron according to the range of its input in Figure 1: dead, alive, and unstable. We show the neuron status in each layer of an IBP-trained network in Figure 4: most neurons are dead. In contrast, we find that most neurons (more than 95%) are unstable in a normally trained network. For unstable neurons, the bounding lines in the tight strategy adopted by LBP and CROWN are tighter than the constant bounding lines chosen by IBP. This explains why LBP and CROWN are several orders of magnitude tighter than IBP for a normally trained network. However, for dead neurons, the bounding lines chosen by LBP and CROWN are the same as those chosen by IBP, which explains the limited improvement of LBP and CROWN-LBP on IBP-trained networks. We conduct experiments in Appendix B.3 to further verify this explanation.
It seems reasonable that most neurons are dead in IBP-trained networks, since dead neurons block perturbations from the input, which makes the network more robust. However, we argue that this phenomenon has two major drawbacks. First, gradients from both the normal cross-entropy loss and the IBP bound loss in (5) cannot backpropagate through dead neurons; this may prevent the network from learning at some point in the training process. Second, it restricts the representation capability of the network, since most activations in intermediate layers are 0.
To mitigate these two problems, one simple idea is to use LeakyReLU instead of ReLU during training. We consider this approach as a baseline and compare with it. We propose to use a Parameterized Ramp (ParamRamp) function to achieve better results. The Parameterized Ramp function can be seen as a LeakyReLU function whose right part is bent flat at some point $r$, as shown in Figure 5. The parameter $r$ is tunable for every neuron: we include it in the parameters of the network and optimize over it during training. The intuition behind this activation function is that it provides another robust region (where the function value changes very slowly with respect to the input) on its right part. This right part has function values greater than $0$ and tunable, in contrast to the left robust region with function values close to $0$. Therefore, during the IBP training process, a neuron has two options to become robust: it can become either left dead or right dead, as shown in Figure 5. This could increase the representation capability of the network while allowing it to become robust. We compare the effects of ReLU, LeakyReLU and ParamRamp in terms of training verifiably robust networks in the next section.
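A sketch of the ParamRamp function itself. The exact parameterization and the slope value of the flat regions are assumptions on our part; in the paper, the bend point is a per-neuron parameter optimized during training.

```python
import numpy as np

def param_ramp(z, r, leaky=0.01):
    """Parameterized Ramp: a LeakyReLU bent flat at a tunable point r > 0.

    Left of 0 and right of r the function is nearly flat (slope `leaky`,
    an assumed value here), giving a neuron two ways to become robust:
    'left dead' (output near 0) or 'right dead' (output near r, tunable)."""
    r = np.asarray(r, dtype=float)
    return np.where(z < 0, leaky * z,
           np.where(z < r, z, r + leaky * (z - r)))
```

Because the function stays monotonically increasing, the elementwise bound propagation of (3) still applies unchanged during IBP training.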



Table 2. Errors (%) on CIFAR-10 for the DM-large network with different activations, under two perturbation sizes ε₁ < ε₂. ReLU(·) denotes LeakyReLU with the given slope and Ramp(·) denotes ParamRamp with the given slope. The right part of the original table additionally reports Clean/IBP/PGD errors of larger ReLU models from [xu2020automatic].

                          |        ε₁                  |        ε₂
Training    Activation    | Clean  IBP    C.LBP  PGD   | Clean  IBP    C.LBP  PGD
IBP         ReLU(0)       | 39.22  55.19  54.38  50.40 | 58.43  70.81  69.98  68.73
IBP         ReLU(0.01)    | 32.3   52.02  47.26  44.22 | 55.16  69.05  68.45  66.05
IBP         ReLU(0.010)   | 34.6   53.77  51.62  46.71 | 55.62  68.32  68.22  65.29
IBP         Ramp(0)       | 36.47  53.09  52.28  46.52 | 56.32  68.89  68.82  63.89
IBP         Ramp(0.01)    | 33.45  48.39  47.19  43.87 | 54.16  68.26  67.78  65.06
IBP         Ramp(0.010)   | 34.17  47.84  47.46  42.74 | 55.28  67.26  67.09  60.39
CROWN-IBP   ReLU(0)       | 28.48  46.03  45.04  40.28 | 54.02  66.94  66.69  65.42
CROWN-IBP   ReLU(0.01)    | 28.49  46.68  44.09  39.29 | 55.18  68.54  68.13  66.41
CROWN-IBP   ReLU(0.010)   | 28.07  46.82  44.40  39.29 | 63.88  72.28  72.13  70.34
CROWN-IBP   Ramp(0)       | 28.48  45.67  44.03  39.43 | 52.52  65.24  65.12  62.51
CROWN-IBP   Ramp(0.01)    | 28.63  46.17  44.28  39.61 | 52.15  66.04  65.75  63.85
CROWN-IBP   Ramp(0.010)   | 28.18  45.74  43.37  39.17 | 51.94  65.19  65.08  62.05

Table 3. Errors (%) on MNIST for the DM-large network.

Training    Activation    | Clean  IBP    C.LBP  PGD
IBP         ReLU(0)       | 2.74   14.80  16.13  11.14
IBP         Ramp(0.010)   | 2.16   10.90  10.88  6.59
CROWN-IBP   ReLU(0)       | 2.17   12.06  11.90  9.47
CROWN-IBP   Ramp(0.010)   | 2.36   10.68  10.61  6.61

In this section, we conduct experiments to train verifiably robust networks using our proposed activation function, ParamRamp, and compare it with ReLU and LeakyReLU. We use the loss defined in (5) and consider $\ell_\infty$ robustness in all experiments. The experiments are conducted on three datasets: MNIST, CIFAR-10 and TinyImageNet. For MNIST and CIFAR-10, we use the same DM-large network and follow the same IBP and CROWN-IBP training procedures as the works [gowal2019scalable, zhang2020towards]. For TinyImageNet, we follow the training procedure of the work [xu2020automatic]. The networks we train on TinyImageNet are a 7-layer CNN with Batch Normalization layers (CNN7+BN) and a WideResNet. We refer readers to the original works or Appendix B.4 for detailed experimental setups and network structures. During the training of ParamRamp networks, it is important to initialize the tunable parameters $r$ appropriately. We also find that ParamRamp networks overfit in some cases. See how we initialize $r$ and address the overfitting in Appendix B.4. After training, we use IBP and CROWN-LBP with the tight strategy to compute verified errors. IBP verified errors allow us to compare results with previous works, and CROWN-LBP gives the best verified errors, as guaranteed by Theorem 2. CROWN is not considered because it exceeds GPU memory (12 GB) when verifying a single image on the networks we use and is extremely slow running on a CPU. We also use 200-step PGD attacks [madry2018towards] with 10 random starts to empirically evaluate the robustness of the networks.

Results on CIFAR-10 and MNIST are presented in Table 2 and Table 3, respectively. Networks with ParamRamp activation achieve better verified errors, clean errors and PGD-attack errors than ReLU networks in almost all settings, and our proposed bound computation method, CROWN-LBP, always provides lower verified errors than IBP. See more experiments on networks of different structures in Appendix B.5. On TinyImageNet, the CNN7+BN and WideResNet networks with ParamRamp activation achieve, to the best of our knowledge, the best IBP verified errors ever reported on this dataset. See a comparison with ReLU networks from the work [xu2020automatic] in Appendix B.5.
ParamRamp brings additional parameters to the network, so we are concerned about its computational overhead compared with ReLU networks. On MNIST, we find that the average training time per epoch of a ParamRamp network is moderately longer than that of a ReLU network in both IBP and CROWN-IBP training, and we observe an overhead of a similar level on the CIFAR-10 and TinyImageNet datasets. See a full comparison in Appendix B.5. Comparing ParamRamp with ReLU on the same network may not be convincing enough to demonstrate the superiority of ParamRamp, as it has additional parameters. We therefore also compare it with larger ReLU networks trained in the work [xu2020automatic]: we report their results on CNN7+BN, Densenet [huang2017densely], WideResNet [zagoruyko2016wide] and ResNeXt [xie2017aggregated] in the right part of Table 2. Despite being larger than the DM-large network with ParamRamp activation, these ReLU networks still cannot obtain lower IBP verified errors than our model. We attribute this to the fact that ParamRamp brings more diversity of neuron status, which increases the representation capability of the network. Recall that most neurons are dead in IBP-trained ReLU networks, as shown in Figure 4. We present the neuron status of an IBP-trained ParamRamp network in Figure 6: although many neurons are still left dead, a considerable number of neurons are right dead. Note that the activation values of right-dead neurons are not 0 and are tunable. This allows the network to become robust while preserving representation capability. See more neuron-status comparisons of ReLU and ParamRamp networks in Appendix B.5.

We propose a new verification method, LBP, which has better scalability than CROWN while being tighter than IBP. We further prove that CROWN and LBP are always tighter than IBP when appropriate bounding lines are chosen, and can be used to verify IBP-trained networks to obtain lower verified errors. We also propose a new activation function, ParamRamp, to mitigate the problem that most neurons become dead in ReLU networks during IBP training.
Extensive experiments demonstrate that networks with ParamRamp activation outperform ReLU networks and achieve state-of-the-art verified robustness on the MNIST, CIFAR-10 and TinyImageNet datasets.
This work is partially supported by General Research Fund (GRF) of Hong Kong (No. 14203518), Collaborative Research Grant from SenseTime Group (CUHK Agreement No. TS1712093, and No. TS1711490), and the Shanghai Committee of Science and Technology, China (Grant No. 20DZ1100800).