I. Introduction
Neural networks are extensively used in machine learning systems for their effectiveness in controlling complex systems, as demonstrated by a variety of research activities: the promise of neural networks in controlling nonlinear systems [5], a neural network control method for a general class of nonlinear systems [2], adaptive neural network control of industrial processes [11], sampled-data stabilization of neural network control systems with a guaranteed cost [13], control of autonomous vehicles using deep neural networks [9], etc. However, small perturbations in the input can significantly distort the output of a neural network. The robustness and reliability of neural networks have thus received particular attention in the machine learning community: for instance, adding Lyapunov constraints when training neural networks to enhance the stabilization of learned nonlinear systems [8], introducing performance-driven BP learning for edge decision-making [3], studying the adversarial robustness of neural networks using robust optimization [7], proposing a Bayesian framework for adversarial neural pruning and the suppression of latent vulnerability [6]
, studying methods to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations [12], identifying a trade-off between robustness and accuracy in the design of defenses against adversarial examples [20], and proposing an intelligent bearing fault diagnosis method based on stacked inverted residual convolution neural networks [19], etc. Yet adding adversarial examples to the training data may not directly improve the worst-case performance of neural networks, and existing robust optimization frameworks for training neural networks focus on training classifiers. In safety-critical scenarios, there is an increasing demand for neural networks with a certain degree of immunity to perturbations, and for methods that provide guaranteed approximation error estimates between a system and its neural network approximation. The key to training robust neural networks is to mitigate the effect of perturbations. Based on the methods proposed in
[15, 16, 17, 10, 14], the output reachability analysis of neural networks can be performed in the presence of perturbations in the input data. According to the Universal Approximation Theorem, a shallow neural network is able to solve any nonlinear approximation problem [4]. In this work, a shallow neural network is considered: its simple structure saves computing resources, and a less conservative output set can be obtained for it using reachability methods. This paper aims to train a robust neural network by mitigating the effects caused by perturbations using reachable set methods. Specifically, we form the output reachable set from the disturbed input data described by the interval-based reachability method. After forming the reachable set of the hidden layer, the shallow neural network is trained by solving robust least-squares problems, as an extension of the Extreme Learning Machine (ELM) method proposed in [4]. The neural network trained by the proposed robust optimization framework is then compared with ELM in terms of reachable set estimation and mean square error in the presence of perturbations in the input data, as shown in a robot modeling example. The results indicate that robustness is improved by the proposed method and that the trade-off between accuracy and robustness is tolerable for robust data-driven modeling problems.
The rest of the paper is organized as follows: preliminaries and the problem formulation are given in Section II. The main result, robust optimization training for shallow neural networks using the reachability method, is presented in Section III. In Section IV, a robot arm modeling example is provided to evaluate our method. Conclusions are given in Section V.
Notations: $\mathbb{N}$ denotes the set of natural numbers, $\mathbb{R}$ represents the field of real numbers, $\mathbb{R}^n$ is the vector space of all $n$-tuples of real numbers, and $\mathbb{R}^{n \times m}$ is the space of $n \times m$ matrices with real entries. Given a matrix $A$, the notation $A \succ 0$ means $A$ is real symmetric and positive definite. Given a matrix $A$, $\mathrm{vec}(A)$ is the vectorization of $A$. $\|\cdot\|$ and $\|\cdot\|_{\infty}$ stand for the Euclidean norm and the infinity norm, respectively. In addition, in symmetric block matrices, we use $*$ as an ellipsis for the terms that are induced by symmetry.

II. Preliminaries and Problem Formulation
II-A Shallow Feedforward Neural Networks
A neural network consists of hidden layers with information flow from the input layer to the output layer. Each layer consists of processing neurons that respond to the weighted inputs received from other neurons in the form of

(1) $y_i = f\left(\sum_{j=1}^{n} \omega_{ij} u_j + b_i\right)$

where $u_j$ is the $j$th input of the $i$th neuron, $n$ is the number of inputs for the $i$th neuron, $y_i$ is the output of the $i$th neuron, $\omega_{ij}$ is the weight from the $j$th neuron to the $i$th neuron, $b_i$ is the bias of the $i$th neuron, and $f(\cdot)$
is the activation function.
The feedforward neural network considered in this paper is a class of neural networks with no cycles or loops. Each layer $\ell$ is connected with the adjacent layer by the weight matrix

(2) $W^{(\ell)} = \left[\omega^{(\ell)}_{ij}\right] \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$

and the bias vector

(3) $b^{(\ell)} = \left[b^{(\ell)}_{1}, \ldots, b^{(\ell)}_{n_\ell}\right]^{\top}$

in which $n_\ell$ denotes the number of neurons in layer $\ell$; thus the output of the $\ell$th layer can be described by

(4) $y^{(\ell)} = f_\ell\left(W^{(\ell)} y^{(\ell-1)} + b^{(\ell)}\right)$

where $y^{(\ell)}$ is the output vector of layer $\ell$.
Specifically, we consider a shallow feedforward neural network with one hidden layer $\ell = 1$ and the output layer $\ell = 2$; the mapping from the input vector $u$ of the input layer to the output vector $y^{(2)}$ can be expressed in the form of

(5) $y^{(2)} = f_2\left(W^{(2)} f_1\left(W^{(1)} u + b^{(1)}\right) + b^{(2)}\right)$

where $y^{(0)} = u$.
Remark 1
Compared with deep neural networks, which have multiple hidden layers, shallow neural networks normally consist of a single hidden layer. Shallow neural networks represent a significant class of neural networks in the machine learning field, e.g., the Extreme Learning Machine (ELM) [4] and Broad Learning Systems (BLS) [1]. Notably, these shallow neural networks are learners with universal approximation and classification capabilities provided sufficient neurons and data, as shown in [4], [1].
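As a concrete illustration of the mapping (5), the forward pass of such a shallow network takes only a few lines. The sigmoid hidden activation, the layer sizes, and the random weights below are illustrative assumptions, not values prescribed in this paper.

```python
import numpy as np

def shallow_forward(u, W1, b1, W2, b2):
    """One hidden layer with a monotone activation followed by a linear
    output layer -- the structure of (5). Sigmoid is an illustrative choice."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ u + b1)))  # hidden-layer output y^(1)
    return W2 @ h + b2                         # linear output layer y^(2)

# Hypothetical sizes: 2 inputs, 10 hidden neurons, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((10, 2)), rng.standard_normal(10)
W2, b2 = rng.standard_normal((1, 10)), np.zeros(1)
y = shallow_forward(np.array([0.3, -0.5]), W1, b1, W2, b2)
```

The two-step structure (nonlinear hidden layer, then linear output layer) is exactly what the ELM-style training considered later exploits.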
II-B Reachability of Neural Networks
In this paper, we introduce the concept of the reachable set of a neural network for robust training. Some critical notions and results related to the reachability of neural networks are presented as follows.
Given the interval set $\mathcal{Y}^{(\ell)} = [\underline{y}^{(\ell)}, \overline{y}^{(\ell)}]$ of the $\ell$th layer, in which $\underline{y}^{(\ell)}, \overline{y}^{(\ell)}$ are the lower and upper bounds of the $\ell$th layer's output, the interval set for layer $\ell+1$ is defined by

(6) $\mathcal{Y}^{(\ell+1)} = \left\{ y^{(\ell+1)} \,\middle|\, y^{(\ell+1)} = f_{\ell+1}\left(W^{(\ell+1)} y^{(\ell)} + b^{(\ell+1)}\right),\ y^{(\ell)} \in \mathcal{Y}^{(\ell)} \right\}$
Specifically, the following trivial assumption, which is satisfied by most activation functions, is made for the computation of the interval set.
Assumption 1
Assume that the activation function satisfies

(7) $f(x_1) \le f(x_2)$

for any two scalars $x_1 \le x_2$.
Lemma 1
Given a shallow feedforward neural network (5) and an input set $\mathcal{U} = [\underline{u}, \overline{u}]$, the output reachable sets $\mathcal{Y}^{(1)}$ of the hidden layer and $\mathcal{Y}^{(2)}$ of the output layer can be computed by

(8) $\mathcal{Y}^{(1)} = [\underline{y}^{(1)}, \overline{y}^{(1)}]$

(9) $\mathcal{Y}^{(2)} = [\underline{y}^{(2)}, \overline{y}^{(2)}]$

where $\underline{y}^{(\ell)}$ and $\overline{y}^{(\ell)}$, $\ell = 1, 2$, are

(10) $\underline{y}^{(\ell)} = f_\ell\left(W^{(\ell)+} \underline{y}^{(\ell-1)} + W^{(\ell)-} \overline{y}^{(\ell-1)} + b^{(\ell)}\right)$

(11) $\overline{y}^{(\ell)} = f_\ell\left(W^{(\ell)+} \overline{y}^{(\ell-1)} + W^{(\ell)-} \underline{y}^{(\ell-1)} + b^{(\ell)}\right)$

where $W^{(\ell)+} = \max(W^{(\ell)}, 0)$, $W^{(\ell)-} = \min(W^{(\ell)}, 0)$, and $[\underline{y}^{(0)}, \overline{y}^{(0)}] = [\underline{u}, \overline{u}]$.
The proof is given in Appendix A.
Remark 2
Lemma 1 provides a formula to compute the reachable set of both the hidden layer and the output layer. As indicated in [18], the interval arithmetic computation framework might lead to an overly conservative over-approximation as the number of hidden layers grows large. However, for the shallow neural networks considered in this paper there is only one hidden layer, so the interval arithmetic computation framework in Lemma 1 performs well in practice.
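As a sketch of the interval bound behind Lemma 1 under Assumption 1, one layer can be propagated by splitting the weight matrix into its positive and negative parts and pairing each part with the appropriate endpoint of the input interval. The function name and the tanh activation are illustrative assumptions.

```python
import numpy as np

def interval_layer(W, b, u_lo, u_hi, act=np.tanh):
    """Propagate the interval [u_lo, u_hi] through one layer with a
    monotone activation (Assumption 1). The positive part of W pairs with
    the endpoint that maximizes the pre-activation, the negative part with
    the endpoint that minimizes it, and monotonicity preserves the bound."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = act(Wp @ u_lo + Wn @ u_hi + b)
    y_hi = act(Wp @ u_hi + Wn @ u_lo + b)
    return y_lo, y_hi
```

Applying the function twice, once with the hidden activation and once with the identity for a linear output layer, yields the two sets of the hidden and output layers.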
II-C Problem Formulation
This paper aims to provide a method for training neural networks that enhances their robustness and reliability using reachable sets.
Given $N$ arbitrary distinct input-output samples $(u_i, t_i)$ with $u_i \in \mathbb{R}^{n_0}$ and $t_i \in \mathbb{R}^{n_2}$, shallow neural network training aims to find the weights and biases $W^{(\ell)}$, $b^{(\ell)}$, $\ell = 1, 2$, for the following optimization problem

(12) $\min_{W^{(\ell)},\, b^{(\ell)}} \left\| \Phi(U) - T \right\|$

where $\Phi(U)$ and $T$ are

(13) $\Phi(U) = \left[y^{(2)}(u_1), \ldots, y^{(2)}(u_N)\right]^{\top}$

(14) $T = \left[t_1, \ldots, t_N\right]^{\top}$
In this paper, we utilize the ELM proposed in [4] to train shallow neural networks. Unlike the common understanding that all the parameters of a neural network, i.e., $W^{(1)}$, $b^{(1)}$, $W^{(2)}$, $b^{(2)}$, need to be adjusted, the hidden layer weights $W^{(1)}$ and biases $b^{(1)}$ are in fact not necessarily tuned and can remain unchanged once random values have been assigned to them at the beginning of training. By further letting $b^{(2)} = 0$ and using linear output activation functions, as in ELM training, one can rewrite $\Phi(U) = H\beta$, where

(15) $H = \left[f_1\left(W^{(1)} u_1 + b^{(1)}\right), \ldots, f_1\left(W^{(1)} u_N + b^{(1)}\right)\right]^{\top}$

in which $\beta = \left(W^{(2)}\right)^{\top}$. The training process can then be formulated in the form of

(16) $\min_{\beta} \left\| H\beta - T \right\|$
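Since the hidden parameters are frozen, the whole training step reduces to a single linear least-squares solve. The sketch below is a minimal ELM-style fit with an assumed sigmoid hidden activation and illustrative sizes; the toy target function is ours, not from the paper.

```python
import numpy as np

def elm_train(U, T, n_hidden=40, seed=0):
    """ELM training: W1, b1 are drawn once and never updated; only the
    output weights beta are fit by least squares, min ||H beta - T||."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-1.0, 1.0, (n_hidden, U.shape[1]))
    b1 = rng.uniform(-1.0, 1.0, n_hidden)
    H = 1.0 / (1.0 + np.exp(-(U @ W1.T + b1)))  # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W1, b1, beta

# Toy check: fit a scalar nonlinear map with random hidden features.
U = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * U)
W1, b1, beta = elm_train(U, T)
H = 1.0 / (1.0 + np.exp(-(U @ W1.T + b1)))
mse = np.mean((H @ beta - T) ** 2)
```

Because the hidden layer is random and fixed, there is no backpropagation at all; the entire cost is one matrix factorization.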
To incorporate robustness into the training process (16), the neural network is expected to be robust to disturbances injected into the input. Therefore, the input data are generalized from points to intervals containing perturbations, i.e., the input data are purposefully crafted to $\mathcal{U}_i = [u_i - \delta_i,\, u_i + \delta_i]$, with $\delta_i$ representing the perturbation. Moreover, the trained neural network is expected to keep the changes caused by perturbations as small as possible, so the target data set is expected to stay the same, i.e., the targets $t_i$ are unchanged.
With the interval data set, the robust training problem can be stated as follows; this is the main problem to be addressed in this paper.
Problem 1
Given $N$ arbitrary distinct input-output samples $(u_i, t_i)$ with $u_i \in \mathbb{R}^{n_0}$ and $t_i \in \mathbb{R}^{n_2}$, and also considering perturbations $\delta_i$, how does one compute the weights and biases $W^{(\ell)}$, $b^{(\ell)}$, $\ell = 1, 2$, for the following robust optimization problem

(17) $\min_{W^{(\ell)},\, b^{(\ell)}}\ \max_{u_i \in \mathcal{U}_i} \left\| \Phi(U) - T \right\|$

where $T$ is defined by (14) and $\Phi(U)$ is

(18) $\Phi(U) = \left[y^{(2)}(u_1), \ldots, y^{(2)}(u_N)\right]^{\top}$

in which $u_i \in \mathcal{U}_i$, with $\mathcal{U}_i = [u_i - \delta_i,\, u_i + \delta_i]$.
Remark 3
From (17), the weights and biases of the neural network are optimized to mitigate the negative effects brought in by perturbations during the training process. Moreover, in this paper we utilize ELM for shallow neural network training, so the robust optimization problem (17) can be converted to the following optimization problem

(19) $\min_{\beta} \max_{H \in [\underline{H}, \overline{H}]} \left\| H\beta - T \right\|$

where

(20) $\underline{H} = \left[\underline{h}_1, \ldots, \underline{h}_N\right]^{\top}, \quad \overline{H} = \left[\overline{h}_1, \ldots, \overline{h}_N\right]^{\top}$

in which $[\underline{h}_i, \overline{h}_i]$ is the hidden-layer reachable set for the interval input $\mathcal{U}_i$ computed by Lemma 1.
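For intuition about the robust problem (19), a crude stand-in for the exact reformulation can be obtained by sampling matrices from the interval hidden-layer output matrix and fitting a single set of output weights against all samples at once. This scenario-style sketch is our own simplification for illustration; it minimizes an average rather than the worst case, unlike the SDP derived in Proposition 2.

```python
import numpy as np

def robust_beta_scenarios(H_lo, H_hi, T, n_scen=20, seed=0):
    """Scenario approximation of min_beta max_{H in [H_lo, H_hi]} ||H beta - T||:
    draw matrices from the interval (including both endpoints) and solve one
    stacked least-squares problem over all of them jointly."""
    rng = np.random.default_rng(seed)
    Hs = [H_lo, H_hi] + [
        H_lo + rng.random(H_lo.shape) * (H_hi - H_lo) for _ in range(n_scen)
    ]
    A = np.vstack(Hs)            # stack all sampled hidden-layer matrices
    B = np.vstack([T] * len(Hs)) # repeat the (unchanged) targets for each
    beta, *_ = np.linalg.lstsq(A, B, rcond=None)
    return beta
```

Stacking the scenarios penalizes the residual of every sampled matrix simultaneously, which pushes the fit toward interval-insensitive weights, but only the SDP reformulation certifies the worst case exactly.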
III. Robust Optimization Training Framework
In this section, a robust optimization-based training method for shallow neural networks is presented. As we utilize the ELM training framework proposed in [4], the hidden layer weights $W^{(1)}$ and biases $b^{(1)}$ can be randomly assigned. With the random assignment of these parameters and the reachability results proposed in Lemma 1, the interval matrix $[\underline{H}, \overline{H}]$ can be obtained.
Proposition 1
Using the results in Lemma 1, (21)–(24) can be obtained straightforwardly by letting $\ell = 1$ and $[\underline{y}^{(0)}, \overline{y}^{(0)}] = [\underline{u}_i, \overline{u}_i]$. The proof is complete.
With the interval matrix $[\underline{H}, \overline{H}]$, the next critical step in robust training of shallow neural networks is solving the robust optimization problem (19) to compute the weights of the output layer.
Proposition 2
In order to develop a tractable algorithm to solve (19), we propose the following equivalent representation for the interval matrix $[\underline{H}, \overline{H}]$, i.e.,

(26) $H = H_0 + \Delta, \quad |\Delta| \le R \ \text{(elementwise)}$

where $H_0$ and $R$ are defined by

(27) $H_0 = \tfrac{1}{2}\left(\underline{H} + \overline{H}\right)$

(28) $R = \tfrac{1}{2}\left(\overline{H} - \underline{H}\right)$
Therefore, the robust optimization problem (19) can be rewritten as

(31) $\min_{\beta} \max_{|\Delta| \le R} \left\| (H_0 + \Delta)\beta - T \right\|$
Moreover, letting $\gamma > 0$ be an upper bound of the worst-case residual, we can see that minimizing $\gamma$ subject to $\left\|(H_0 + \Delta)\beta - T\right\| \le \gamma$ for all admissible $\Delta$ will deduce (31). Furthermore, $\left\|(H_0 + \Delta)\beta - T\right\| \le \gamma$ equals

(32) $\left((H_0 + \Delta)\beta - T\right)^{\top}\left((H_0 + \Delta)\beta - T\right) \le \gamma^{2}$

Thus, we can formulate the following optimization problem

(33) $\min_{\beta,\, \gamma}\ \gamma \quad \text{s.t. (32) holds}$

for all $|\Delta| \le R$. It is noted that the solution of (33) also satisfies optimization problem (31).
Using the S-procedure and introducing a scalar multiplier $\tau > 0$, we can formulate an optimization problem with matrix inequality constraints as follows

(34)

which ensures that (32) holds for all admissible $\Delta$.
Based on the Schur complement formula, it is equivalent to

(35)

in which the blocks are constructed from $H_0$, $R$, $\beta$, $\gamma$, and $\tau$. Therefore, the optimized $\gamma$ is an upper bound of the worst-case approximation error, which implies that the obtained $\beta$ solves the robust optimization problem (19). The proof is complete.
Remark 4
Proposition 2 suggests that the robust optimization problem (19) can be formulated as a Semi-Definite Programming (SDP) problem, so that the weights of the output layer can be efficiently computed with the help of existing SDP tools. By solving the optimization problem (35), the obtained weights are designed to make the approximation error between the disturbed input data set and the target set as small as possible, which implies a robust training performance for the shallow neural network (5).
Remark 5
Since the robust training process considers perturbations imposed on the input data set and optimizes the weights to minimize the approximation error, the resulting neural network tends to tolerate noise better than networks trained by the traditional training process. On the other hand, because the perturbations purposefully crafted into the input data set act as a player that always plays against the weights during robust optimization training, the approximation error increases compared with traditional training. This is the trade-off between robustness and accuracy in neural network training, and it will be illustrated in a training example later.
In summary, the robust optimization training algorithm for shallow neural networks is presented in Algorithm 1, which consists of three major components.
Initialization: Since we employ ELM to train shallow neural networks, the weights and biases of the hidden layer are randomly assigned. In addition, the biases of the output layer are set to zero. According to [4], $W^{(1)}$, $b^{(1)}$, and $b^{(2)}$ will remain unchanged for the rest of the training process.
Reachable Set Computation: Reachability analysis comes into play for the computation of $[\underline{H}, \overline{H}]$, i.e., the reachable set of the hidden layer. The computation is carried out based on (21)–(24) in Proposition 1.
Robust Optimization: This is the key step to achieve robustness in training shallow neural networks. Based on Proposition 2, the robust optimization training process is converted to an SDP problem in the form of (35), which can be solved by various SDP tools.
Remark 6
As shown in SDP problem (35), the computational cost of Algorithm 1 heavily depends on the number of decision variables, which is dominated by the number of input data samples; this is usually a large number, since sufficient input data is normally required for a desired training performance. To reduce the computational cost in practice, the formulation can be modified, for instance by forcing the scalar multipliers to share a common value, to relax the computational burden caused by a large number of input data. However, the price to pay is that the result is a suboptimal solution instead of the optimal solution to (35).
IV. Evaluation
In this section, learning the forward kinematics of a robotic arm model with two joints, proposed in [16], is used to evaluate our robust optimization training method. The robotic arm model is shown in Fig. 1.

The learning task is to use a feedforward neural network to predict the position of the end effector from the known joint angles. The input space of joint angles is classified into three zones for operation: the normal working zone, the buffering zone, and the forbidden zone. The detailed formulation of this robotic arm model and the neural network training can be found in [16].
To show the advantage of robust learning, we first train a shallow neural network using the traditional ELM method, assuming the injected disturbances are bounded. By using Lemma 1 and choosing the maximal deviation of the outputs as the radius for all testing output data, the output reachable sets for all perturbed inputs are shown in Fig. 2. Moreover, we train a shallow neural network using Algorithm 1, i.e., the robust optimization training method. The output sets for the perturbed inputs are shown in Fig. 3. It can be explicitly observed that the neural network trained by the robust optimization method has tighter reachable sets, which means it is less sensitive to disturbances. Therefore, we can conclude that the neural network trained by the robust optimization method is more robust to noise injected into the input data. On the other hand, comparing Figs. 2 and 3, the deviation of the neural network output from the output data increases, i.e., training accuracy is sacrificed for improved robustness.
Furthermore, the trade-off between robustness and accuracy mentioned in Remark 5 is elucidated in Table I. It can be seen that robust learning provides better tolerance to input data noise but yields less accuracy than the traditional learning process, i.e., a larger Mean Square Error (MSE).
Method  Radius  MSE 

Traditional ELM  
Algorithm 1 
V. Conclusions
A robust optimization learning framework is proposed in this paper for shallow neural networks. First, the input data sets are generalized to interval sets to characterize injected noise. Then, based on the layer-by-layer reachability analysis of neural networks, the output sets of the hidden layer are computed, which play a critical role in the robust optimization training process. The robust training problem is formulated in terms of robust least-squares problems, which can then be converted to an SDP problem. The trade-off between robustness and training accuracy is observed in the proposed framework. A robot arm modeling example is provided to evaluate our method.
References
 [1] (2017) Broad learning system: an effective and efficient incremental learning system without the need for deep architecture. IEEE Transactions on Neural Networks and Learning Systems 29 (1), pp. 10–24. Cited by: Remark 1.
 [2] (1999) Adaptive neural network control of nonlinear systems by state and output feedback. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 29 (6), pp. 818–828. Cited by: §I.
 [3] (2020) Learnability and robustness of shallow neural networks learned with a performance-driven BP and a variant PSO for edge decision-making. arXiv preprint arXiv:2008.06135. Cited by: §I.
 [4] (2006) Extreme learning machine: theory and applications. Neurocomputing 70 (1–3), pp. 489–501. Cited by: §I, §II-C, §III, §III, Remark 1.
 [5] (1992) Neural networks for control systems—a survey. Automatica 28 (6), pp. 1083–1112. Cited by: §I.
 [6] (2020) Adversarial neural pruning with latent vulnerability suppression. In International Conference on Machine Learning, pp. 6575–6585. Cited by: §I.

 [7] (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §I.
 [8] (2013) Neural learning of stable dynamical systems based on data-driven Lyapunov candidates. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1216–1222. Cited by: §I.
 [9] (2018) DeepTest: automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering, pp. 303–314. Cited by: §I.
 [10] (2019) Star-based reachability analysis of deep neural networks. In International Symposium on Formal Methods, pp. 670–686. Cited by: §I.
 [11] (2015) A combined adaptive neural network and nonlinear model predictive control for multirate networked industrial process control. IEEE Transactions on Neural Networks and Learning Systems 27 (2), pp. 416–425. Cited by: §I.
 [12] (2018) Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, pp. 5286–5295. Cited by: §I.
 [13] (2014) Exponential stabilization for sampled-data neural-network-based control systems. IEEE Transactions on Neural Networks and Learning Systems 25 (12), pp. 2180–2190. Cited by: §I.
 [14] (2018) Reachability analysis and safety verification for neural network control systems. arXiv preprint arXiv:1805.09944. Cited by: §I.
 [15] (2017) Reachable set computation and safety verification for neural networks with ReLU activations. arXiv preprint arXiv:1712.08163. Cited by: §I.
 [16] (2018) Output reachable set estimation and verification for multilayer neural networks. IEEE Transactions on Neural Networks and Learning Systems 29 (11), pp. 5777–5783. Cited by: §I, §II-B, §IV, §IV.
 [17] (2018) Reachable set estimation and safety verification for piecewise linear systems with neural network controllers. In 2018 Annual American Control Conference (ACC), pp. 1574–1579. Cited by: §I.
 [18] (2020, DOI: 10.1109/TNNLS.2020.2991090) Reachable set estimation for neural network control systems: a simulation-guided approach. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §II-B, Remark 2.
 [19] (2020) A lightweight neural network with strong robustness for bearing fault diagnosis. Measurement 159, pp. 107756. Cited by: §I.
 [20] (2019) Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472–7482. Cited by: §I.