Local Repair of Neural Networks Using Optimization

09/28/2021 ∙ by Keyvan Majd, et al. ∙ Arizona State University, University of Colorado Boulder

In this paper, we propose a framework to repair a pre-trained feed-forward neural network (NN) to satisfy a set of properties. We formulate the properties as a set of predicates that impose constraints on the output of the NN over the target input domain. We define the NN repair problem as a Mixed Integer Quadratic Program (MIQP) to adjust the weights of a single layer subject to the given predicates while minimizing the original loss function over the original training domain. We demonstrate the application of our framework in bounding an affine transformation, correcting an erroneous NN in classification, and bounding the inputs of a NN controller.


I Introduction

Artificial Neural Networks (NN) have shown potential in enabling autonomy for tasks that are hard to formally define or specify. For instance, NN can be used to learn and replicate multi-robot behaviors [1, 2, 3], or to model complex aircraft collision avoidance protocols [4]. Driving is another example of such a task where not all driving situations can be predicted in advance and, therefore, it may be beneficial to extrapolate behaviors from previously learned scenarios. Thus, learning for driving [5, 6, 7] is gaining ground as a possible solution to the automated driving or driver assistance challenge problems.

Nevertheless, NN-based autonomous systems may produce unsafe behaviors when they encounter inputs in regions where an insufficient number of training samples was available. Multiple methods exist that attempt to discover such unsafe behaviors (also known as adversarial samples or counter-examples) for autonomous systems [8, 9, 10]. Other methods investigate classes of systems and NN for which provable properties can be established [11, 12, 13, 14].

In either case, an open challenge is how to repair or modify NN-based systems when they are found to be unsafe. In the case of testing-based methods (and some verification methods), adversarial samples are returned that can be added to the training set to improve the accuracy of the NN [15, 16, 17]. However, adding multiple adversarial samples to the training set might produce over-fitting, or even “oscillations”, i.e., adding training samples from one region may reduce accuracy in another region. On the other hand, many verification methods may not return any information when an NN-based system fails to satisfy its safety requirements. Hence, repairs may not be as straightforward as when counterexamples are available.

In this paper, we make progress toward the repair problem for NN-based systems. We pose the following question: given a set of safety constraints on the output of the NN component, is it possible to modify the NN such that the safety constraints are satisfied while the NN accuracy over the original training set remains high? We propose an optimization-based solution by posing the repair problem as a Mixed-Integer Quadratic Program (MIQP). We demonstrate our framework on a number of challenging problems relevant to robotics: forward kinematics of arm manipulators, aircraft collision avoidance, and mobile robot motion control.

The repair problem [18, 19, 20] has objectives that typically extend beyond identifying adversarial samples to be added to the training set [15, 16, 17]. In [20], the authors address the repair problem for NN-based controllers with a Two-Level Lattice (TLL) architecture. The main step in the framework is a local repair to the controller which is only activated in the states in the vicinity of any adversarial samples. A condition for applying the repair is that any discovered unsafe regions do not overlap with previously trained regions. A limitation of the method in [20] is that it is only applicable to the TLL architecture. In contrast, our approach is applicable to general feed-forward NNs and there are no conditions on the adversarial samples.

The repair process in [19] is executed in the loop with a verification engine. Given an adversarial sample, the neighborhood around it is partitioned until a region is identified where the safety constraints are violated for every point in that region. Then, the neurons that cause the violation are identified and repaired. Even though the method applies to general feed-forward NNs, the repair formulation does not capture the effect of the neuron modifications on the overall performance of the NN.

Finally, the authors in [18] propose a verification-based method for minimally modifying the network weights so as to satisfy a given property. Due to the use of verification, they guarantee the correctness of the repaired model. However, this work restricts the repair to the last layer and only guarantees the minimality of the weight changes. Their method also guarantees the modification of only a single adversarial input.

The following are the main contributions of this paper. First, we propose a layer-wise neural network repair framework using Mixed-integer Quadratic Programming (MIQP) to satisfy constraints on the output of the network for a given input domain (Sec. III). Second, while our method repairs the network for the specified local input domain, it maintains the prediction/classification accuracy of the network over its original trained space. Finally, we demonstrate the application of our method in addressing safety, model improvement, and verification problems in a variety of robot learning domains (Sec. IV).

II Problem Statement

II-A Notation

We consider a feed-forward neural network with multiple hidden layers, as shown in Fig. 1. The node values at each layer are collected in a vector whose dimension equals the number of nodes in that layer, and the vector of the last layer is the network output. Each pair of consecutive layers is fully connected through a weight matrix and a bias vector.

The training data set consists of input samples and their corresponding target outputs. For each sample, the value of a hidden node is computed as the weighted sum of the node values in the previous layer passed through a nonlinear activation function.

In this work, we solve the repair problem for networks that use the Rectified Linear Unit (ReLU) activation function, ReLU(x) = max(0, x). Thus, for a given sample, each hidden-layer vector is obtained by applying ReLU elementwise to an affine transformation of the previous layer's vector. An activation function is not applied to the last layer, so the network output is an affine function of the last hidden-layer vector.
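As a minimal sketch of this setup (the layer sizes, parameter values, and function names below are illustrative and not taken from the paper), the forward pass of a fully connected ReLU network with an affine output layer can be written as:

    import numpy as np

    def forward(x, weights, biases):
        # weights/biases: lists of per-layer weight matrices and bias vectors.
        # ReLU is applied at every hidden layer; the last layer stays affine,
        # matching the setup described above.
        a = x
        for W, b in zip(weights[:-1], biases[:-1]):
            a = np.maximum(0.0, W @ a + b)      # hidden layer: ReLU(W a + b)
        return weights[-1] @ a + biases[-1]     # output layer: no activation

    # Illustrative network with one hidden layer (sizes 2-2-2)
    rng = np.random.default_rng(0)
    Ws = [rng.standard_normal((2, 2)) for _ in range(2)]
    bs = [rng.standard_normal(2) for _ in range(2)]
    print(forward(np.array([1.0, -0.5]), Ws, bs))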

II-B Problem Formulation

Definition 1 (Repair Problem).

Let a neural network be trained over a given input-output training space, and let a constraint (predicate) be given on the output of the network for a set of inputs of interest. The Repair Problem is to modify the weight and bias terms of the network such that the predicate holds for the new, repaired network.

By repairing the network, we also aim to maintain the performance of the original network. Thus, the repaired network should both satisfy the predicate for the inputs of interest and maintain the prediction/classification performance of the original network over the original training space. To achieve the latter, the minimal network modification method proposed in [18] minimizes the norm of the difference between the weight and bias vectors of the original and repaired networks while satisfying the predicate. Although the authors in [18] showed that their method can successfully modify the network to correct observed misclassified points in a classification problem, a minimal deviation from the original weights is not necessarily sufficient to maintain the prediction/classification performance of the original network over the whole input space. As an example, assume a network is trained as a binary classifier (the highest output determines the label for a given input) with one hidden layer, each layer of size two, and given original weights and biases. Assume a misclassified point is found in the set of inputs of interest. A minimal change of a single weight can repair this network to correctly classify this point. However, this repair can change the label assigned to another input and create a new misclassified point that was correctly classified by the original network. Hence, a subtle change in the weights may cause the network to significantly deviate from its original performance. Therefore, it is important for the repaired network to also minimize the loss function defined for the original network.
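To make this concern concrete, a small diagnostic (hypothetical helper names, reusing the forward function from the sketch above) can count how many inputs a candidate repair misclassifies that the original network classified correctly:

    import numpy as np

    def labels(X, weights, biases):
        # Class label = index of the highest output (the convention used above).
        return np.array([np.argmax(forward(x, weights, biases)) for x in X])

    def count_new_errors(X, y_true, orig_params, repaired_params):
        # Samples classified correctly by the original network but
        # misclassified by the repaired one.
        y_orig = labels(X, *orig_params)
        y_rep = labels(X, *repaired_params)
        return int(np.sum((y_orig == y_true) & (y_rep != y_true)))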

Fig. 1: Multi-layer Perceptron.

Definition 2 (Minimal Repair Problem).

Let a neural network be trained over a given input-output training space, and let a predicate represent constraints on the output of the network for a set of inputs of interest. The Minimal Repair Problem is to modify the weights of the network such that the new network satisfies the predicate while preserving the topology of the original network and minimizing its original loss function over the original training domain.

III Layer-wise Network Repair as a Mixed-Integer Quadratic Program (MIQP)

To solve the Minimal Repair Problem, we can formulate it as an optimization that minimizes the original loss function over the training domain subject to the predicate holding over the inputs of interest. However, the resulting optimization problem is non-convex and difficult to solve due to the nonlinear ReLU activation functions and the high-order nonlinear constraints that result from the multiplication of terms involving the weight and bias variables across layers. An alternative approach is to obtain a sub-optimal solution by focusing on a single-layer repair. Layer-wise repair modifies only the weights and biases of a single layer to adjust the predictions so as to minimize the loss and satisfy the predicate. The layer-wise Minimal Repair Problem is defined as follows.

Definition 3 (Layer-wise Minimal Repair Problem).

Let a trained neural network with a given number of hidden layers be defined over its input-output training space, and let a predicate represent constraints on the output of the network for a set of inputs of interest. The Layer-wise Minimal Repair Problem is to modify the weights of a single layer such that the new network satisfies the predicate while preserving the topology of the original network and minimizing its original loss function over the original training domain.

In our optimization problem, we model the ReLU activation function with a disjunctive inequality constraint [21]. Let δ be a Boolean variable and z = ReLU(ẑ), where ẑ is the pre-activation value of a node. In the disjunctive inequality (1), we have z = ẑ when ẑ ≥ 0 (δ = 1) and z = 0 when ẑ ≤ 0 (δ = 0); that is, δ determines the output z of the ReLU activation function,

[ δ = 1, ẑ ≥ 0, z = ẑ ] ∨ [ δ = 0, ẑ ≤ 0, z = 0 ]. (1)

Disjunctive Ineq. (1) can be relaxed into mixed-integer algebraic constraints using the Big-M or Convex-Hull reformulation [22, 23]. We also assume that the predicate is represented as a linear constraint.
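As a concrete sketch of how such a disjunction can be written and relaxed in practice (a minimal, self-contained example in the Pyomo.GDP style used later in the experiments, with illustrative variable names and bounds; it is not code from the paper):

    import pyomo.environ as pyo
    from pyomo.gdp import Disjunct, Disjunction

    m = pyo.ConcreteModel()
    m.zhat = pyo.Var(bounds=(-10, 10))   # pre-activation value (bounds are needed for Big-M)
    m.z = pyo.Var(bounds=(0, 10))        # ReLU output

    # Active branch: zhat >= 0 and z == zhat
    m.on = Disjunct()
    m.on.nonneg = pyo.Constraint(expr=m.zhat >= 0)
    m.on.output = pyo.Constraint(expr=m.z == m.zhat)

    # Inactive branch: zhat <= 0 and z == 0
    m.off = Disjunct()
    m.off.nonpos = pyo.Constraint(expr=m.zhat <= 0)
    m.off.output = pyo.Constraint(expr=m.z == 0)

    m.relu = Disjunction(expr=[m.on, m.off])

    # Relax into mixed-integer constraints; 'gdp.hull' gives the Convex-Hull reformulation instead.
    pyo.TransformationFactory('gdp.bigm').apply_to(m)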

Assumption 1.

The predicates are formulated as linear equality/inequality constraints.

Moreover, since the original training input space and the input set of interest are not necessarily convex, we formulate the layer-wise minimal repair optimization problem over a finite data set sampled from both sets, together with the original target values of the sampled inputs.

Remark 1.

The predicate defined over the input set of interest is not necessarily compatible with the target values of the training data. That is, the predicate may bound the NN output over this input set in a way that prevents some inputs from reaching their original target values. This is a natural constraint in many applications. For instance, due to safety constraints, we may not allow a NN controller to follow its original control reference for a given unsafe set of input states.

For a given target layer, we define the loss (2) in the form of a sum-of-squares error: the sum, over the sampled data set, of the squared Euclidean distance between the network output and the corresponding target output, viewed as a function of the weight matrix and bias vector of the target layer. Since we only repair the weights and biases of the target layer, the loss is a function of these terms alone; the weight and bias terms of all other layers are fixed, as shown in Fig. 2.
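In generic notation (the symbols below are illustrative rather than the paper's), this loss has the following shape, where only the target layer's weight matrix W and bias vector b are decision variables:

    E(W, b) = \sum_{s} \| N(x_s; W, b) - y_s \|_2^2

Here x_s and y_s are the sampled inputs and their target outputs, and N(x_s; W, b) is the network output when the target layer's parameters are set to (W, b) while all other layers keep their original parameters.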

Fig. 2: Multi-layer Perceptron repair given a sample

Considering the ReLU activation functions formulated by the disjunctive Ineq. (1), the loss function (2), and Asmp. 1, we formulate the Layer-wise Minimal Repair Problem in Def. 3 as a Mixed Integer Quadratic Program (MIQP).

Repair Optimization Problem (R-OPT).

Let a neural network with hidden layers, a predicate on its output, and an input-output data set sampled as described in Def. 3 be given, and let the loss function be defined as in (2). Repair the weight and bias vectors of the target layer (as shown in Fig. 2) so as to minimize the cost function (3) subject to the constraints (4)-(7g).

(3) objective; s.t. (4) last-layer forward pass; (5a)-(5d) forward passes of the ReLU layers encoded via disjunctions (1); (6) output predicate; (7e)-(7g) weight/bias deviation bounds.

The layer-wise repair scheme is shown in Fig. 2. For every sample, the input values of the repaired layer can be precomputed by propagating the sample through the preceding layers, whose weights are fixed, starting from the network input. In R-OPT, constraint (4) represents the forward pass of the last layer. Constraint (5) represents the forward passes of the ReLU layers from the repaired layer onward, with the activation functions formulated as disjunctions (1). Constraint (6) is the given predicate on the network output, imposed over the sampled inputs of interest. Finally, constraint (7) bounds the deviation of the repaired weight and bias terms from their corresponding original values; by adding this bound to (3), we aim to minimize the weight deviation as well as the loss in the repair process.
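Read together with this description, the program has the following shape in generic notation (all symbols are illustrative placeholders, not the paper's: (w, b) are the repaired layer's parameters, (w̄, b̄) their original values, x_s^(j) the node vector of layer j for sample s, Ψ the predicate, and d the deviation bound):

    minimize over (w, b, d):   E(w, b) + d
    subject to, for every sample s:
        x_s^(L)  =  W^(L) x_s^(L-1) + b^(L)                      (4)  last-layer forward pass
        x_s^(j)  =  ReLU( W^(j) x_s^(j-1) + b^(j) )  via (1)     (5)  ReLU layers from the repaired layer onward
        Ψ( x_s^(L) )  holds for the samples of interest          (6)  output predicate
        | w - w̄ | <= d,   | b - b̄ | <= d   (elementwise)         (7)  weight/bias deviation bound

Here W^(j) and b^(j) equal the original parameters for every layer except the repaired one, whose parameters are the decision variables (w, b).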

Remark 2.

Since the output layer is not passed through a ReLU activation function, applying the repair to the last layer does not create the mixed-integer constraints (5). Therefore, the Minimal Repair Problem for the last layer reduces to a Quadratic Program (QP).

IV Empirical Experiments

We show the application of the Layer-wise Minimal Repair framework in bounding an affine transformation, applying constraints on a learned forward kinematics model, correcting an erroneous NN in a classification problem, and applying safety constraints on a NN controller. In all experiments except the classification problem, we first train a neural network with training data uniformly sampled from the input-output training domain. We then use our proposed repair formulation R-OPT to repair the network given the target layer and the predicate defined over the input space of interest. In the network repair, we use an equal number of samples from the original training domain and from the input space of interest. We formulate the optimization using the Pyomo open-source optimization language in Python [24, 25] and Pyomo.GDP for modeling the disjunctive constraints [26], and we use Gurobi 9.1 as the solver [27].
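As a minimal sketch of this tool chain (the model below is a trivial stand-in, not the repair program itself; a local Gurobi installation and license are assumed):

    import pyomo.environ as pyo

    # Tiny stand-in problem, only to illustrate the Pyomo + Gurobi solve pipeline.
    m = pyo.ConcreteModel()
    m.w = pyo.Var(bounds=(-5, 5))
    m.obj = pyo.Objective(expr=(m.w - 1.0) ** 2, sense=pyo.minimize)

    pyo.SolverFactory('gurobi').solve(m, tee=False)
    print(pyo.value(m.w))   # approximately 1.0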

IV-A In-place Rotation

Our method is first tested on a network that learned an in-place rotation. We first train a two-hidden-layer network to learn an in-place 45-degree counterclockwise rotation function in 2D space. The output space is the set of points of the original input space rotated counterclockwise by 45 degrees. We repair the network to push all the outputs inside a quadrilateral defined as a ball in the Taxicab (ℓ1) norm around a given center. We repair the network using samples drawn from this domain.
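The following sketch illustrates the data and the predicate in this experiment (the input box, the center, and the radius of the ℓ1 ball are assumed values for illustration; they are not the paper's):

    import numpy as np

    def rotate_45_ccw(points):
        # The 45-degree counterclockwise rotation the network is trained to learn.
        theta = np.pi / 4
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return points @ R.T

    def inside_l1_ball(outputs, center, radius):
        # Quadrilateral (Taxicab-norm) predicate on the network outputs.
        return np.linalg.norm(outputs - center, ord=1, axis=1) <= radius

    rng = np.random.default_rng(0)
    x_repair = rng.uniform(-1.0, 1.0, size=(200, 2))     # assumed 2D input box
    y_repair = rotate_45_ccw(x_repair)
    print(inside_l1_ball(y_repair, center=np.zeros(2), radius=0.5).mean())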

Figure 3 (a) demonstrates the network predictions after applying the repair to the first layer. As shown, the predicate pushes the predicted outputs inside the green set. The bar plot in Fig. 3 (b) illustrates the Mean Square Error (MSE) of the network predictions after applying the repair to different layers (shown with colored bars) compared to the MSE of the original network predictions (shown with a dashed line). This plot separates the prediction errors of the samples based on whether their targets are located inside or outside the green set. Clearly, the prediction error for the points with targets inside the green set is lower than for the points with targets outside the green set. The prediction performance improves as the repaired layer moves from the last layer to the first layer, for both the inside and outside cases. The large error reduction achieved by the hidden layers is due to the nonlinearities captured in their forward pass, whereas the last layer only applies an affine scaling and therefore yields a smaller error reduction. Moreover, to achieve a significant error reduction, we can select any hidden layer, since moving among the hidden layers does not result in a large error change. In practice, we observed that repairing the later hidden layers helps the Gurobi solver find a solution faster, since the number of integer variables created by modeling the ReLU activation functions increases as the repaired layer moves backward through the hidden layers.

In this experiment, comparing the optimal weight deviations of the first, second, and third layers shows that repairing a layer farther from the output (i.e., moving backward from the last layer) requires a smaller weight deviation to adjust the network's behavior.

Fig. 3: In-place rotation: (a) predictions after applying repair to the first layer, (b) Mean Square Error (MSE) between testing and original targets.

IV-B Forward Kinematics

In our second test, we bound the output of a three-hidden-layer network that learned the forward kinematics of a 6-DOF KUKA robot arm [28]. The forward kinematics function for this robot is a mapping from the joint angles to the location of the end-effector in 3D space. To train the original network, we generate a forward kinematics data set using the RoboAnalyzer simulator [29]. The inputs to the network are the sine and cosine of each of the six joint angles (12 inputs in total), which removes the discontinuity caused by the wrap-around of the raw angles. Given the trigonometrically filtered input angles, the outputs of the network are the coordinates of the end-effector in 3D space. We repair the network to satisfy a predicate that bounds the position of the end-effector; the excluded region can be, for example, the space of a human worker who is jointly interacting with the robot, so the predicate keeps the end-effector from operating in this space. We repair the network using samples from the input region of interest. Figure 4 illustrates the results of applying the network repair to the fourth and third layers. As shown in Fig. 4 (a), the repaired network successfully pushes the outputs inside the robot's allowed operation region, and it has accurate prediction performance for the points inside this bound (see Fig. 4 (b)). The prediction errors also show that repairing the third layer yields a lower error than repairing the fourth layer. In this experiment, we also record the optimal weight deviations obtained when repairing the third and fourth layers.
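A small sketch of the input featurization described above (the ordering of the sine and cosine features is an assumption; the paper only states that the sine and cosine of each joint angle are used):

    import numpy as np

    def featurize_joint_angles(q):
        # Map 6 joint angles to the 12 trigonometric inputs of the network;
        # using sin/cos removes the wrap-around discontinuity of raw angles.
        q = np.asarray(q)                                       # shape (..., 6)
        return np.concatenate([np.sin(q), np.cos(q)], axis=-1)  # shape (..., 12)

    print(featurize_joint_angles(np.zeros(6)).shape)   # (12,)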

Fig. 4: Forward kinematics: (a) output predictions, (b) Mean Square Error (MSE) between testing and original targets.

IV-C Unmanned Aircraft Collision Avoidance System X (ACAS Xu)

ACAS Xu is a variant of the Aircraft Collision Avoidance System X (ACAS X) that provides horizontal manoeuvre advisories for unmanned aircraft [30]. This system uses a lookup table that produces 5 different horizontal advisory outputs: clear-of-conflict, weak right/left heading rates, and strong right/left heading rates. The inputs of the system are 7 sensor measurements: distance from ownship to intruder, angle to intruder relative to ownship heading direction, heading angle of intruder relative to ownship heading direction, speed of ownship, speed of intruder, time until loss of vertical separation, and previous advisory. The advisory with the minimum value is the selected advisory for a given input. To improve storage efficiency, an array of 45 Deep Neural Networks (DNN) is trained in [31] over the discretized combinations of the time until loss of vertical separation and the previous advisory to learn the lookup table. The inputs of each DNN are the remaining five sensor measurements. Each DNN has 6 hidden layers with 50 ReLU activation nodes per layer and returns a horizontal advisory given the input sensor measurements. Given the application of this system, it is very important that it return correct advisories. In this experiment, we focus on the DNN trained for one particular combination of the previous advisory and the time until loss of vertical separation. Following the lookup table, the desired output property of this network is to return one of two specified horizontal advisories over a given input region, and the predicate is defined accordingly. To find input samples that violate this property, we use the Marabou verification tool proposed in [32]. Figure 5 illustrates the projections of the two local sets of misclassified inputs, shown with green and blue dots respectively, onto two of the input dimensions. Our method successfully corrects both local sets of misclassified points by applying our NN repair technique to the last layer. In this experiment, the optimal weight deviations for repairing the two sets demonstrate that the property is satisfied with only a small deviation from the last layer's original weights.
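A minimal sketch of the advisory-selection convention and the type of predicate checked here (the advisory indices and the allowed set are illustrative assumptions):

    import numpy as np

    def selected_advisory(network_outputs):
        # Convention described above: the advisory with the minimum score is chosen.
        return int(np.argmin(network_outputs))

    def satisfies_predicate(network_outputs, allowed=(1, 2)):
        # Property type used here: the selected advisory must lie in an allowed set.
        return selected_advisory(network_outputs) in allowed

    print(satisfies_predicate(np.array([0.2, -1.3, 0.5, 0.9, 0.1])))   # True: index 1 is the minimum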

Fig. 5: Collision Avoidance System: projections of the two repaired regions onto two of the input dimensions, shown in green and blue. Misclassified points are shown with dots. The average Euclidean distance between the two local repaired regions is also reported.

IV-D Safe Controller

In our last experiment, we repair a mobile robot point-to-goal controller to avoid a static obstacle. Consider a robot that follows the unicycle model

dx/dt = v cos(θ), dy/dt = v sin(θ), dθ/dt = ω, (8)

where the state is (x, y, θ) and the control input is (v, ω). The variables x, y, and θ denote the longitudinal position, the lateral position, and the heading angle of the robot, respectively. The controls v and ω represent the linear and angular velocities of the robot, respectively. We first use imitation learning to train a neural network controller that imitates a QP-based controller as the expert. The controller receives the state of the robot as its input and outputs the control action, i.e., the linear and angular velocities. The training samples are collected from the original training domain. This controller steers the robot from an initial set of states to a goal set characterized by a continuously differentiable function

(9)

We model the controller to have 2 hidden layers with 10 ReLU nodes per layer. Figure 6 (a) shows the trajectories of the system under the policy learned from the QP-based expert. Readers are referred to [33] for more details on using QP-based controllers in imitation learning.

Fig. 6: Safe controller: (a) simulated trajectories of system under learned policy by the original network, (b) simulated trajectories of system under repaired policy.

Now assume that, for the same control task, the robot needs to avoid an unsafe region (the ellipsoidal violet region in Fig. 6) defined as

(10)

where the unsafe region is characterized by a continuously differentiable safety measure that represents a static obstacle, defined as

(11)

We treat this safety measure as a Control Barrier Function (CBF) [34] to derive a safety predicate on the output of the controller under which the states of system (8) that start in the safe set never enter the unsafe set.

Definition 4 (Control Barrier Function (CBF)).

Given the nonlinear control-affine system (8) and an unsafe set, a continuously differentiable function is a CBF for this system if there exists an extended class-K function (strictly increasing and zero at zero) such that

(12)
(13)
(14)

where the last condition is expressed in terms of the first-order Lie derivatives of the function along the system dynamics.
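For reference, in the standard formulation of [34] these conditions typically take the following shape (generic symbols: h is the barrier, X the state space, X_u the unsafe set, f and g the control-affine dynamics, and α the extended class-K function); this is a sketch of the usual form rather than the paper's exact equations:

    h(x) >= 0   for all x in X \ X_u                               (safe region)
    h(x) <  0   for all x in X_u                                   (unsafe region)
    sup over u of [ L_f h(x) + L_g h(x) u + α(h(x)) ]  >=  0       (control condition)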

If the control satisfies (14), then the state of system (8) never leaves the safe set. Our candidate function (11) satisfies conditions (12) and (13) for the unsafe set defined in (10). Now, to satisfy (14), we impose the corresponding predicate on the output of the controller for all states of interest. To ensure that the repaired controller also steers the robot toward the goal set, we additionally treat the function defined in (9) as a Lyapunov-like function.
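A minimal sketch of how condition (14) becomes a linear predicate on the controller output for the drift-free unicycle model (8) (the gradient and state values, and the linear class-K choice α(h) = γh, are illustrative assumptions):

    import numpy as np

    def unicycle_g(theta):
        # Control matrix g(x) of the drift-free unicycle (8): d/dt [x, y, theta] = g(x) [v, w].
        return np.array([[np.cos(theta), 0.0],
                         [np.sin(theta), 0.0],
                         [0.0,           1.0]])

    def cbf_linear_predicate(grad_h, h_val, theta, gamma=1.0):
        # Coefficients (c, d) of the predicate c @ [v, w] + d >= 0 obtained from
        # L_g h(x) u + gamma * h(x) >= 0 (L_f h = 0 since the unicycle has no drift).
        c = grad_h @ unicycle_g(theta)
        d = gamma * h_val
        return c, d

    # Hypothetical values, only to show the constraint is linear in (v, w)
    c, d = cbf_linear_predicate(grad_h=np.array([0.4, -0.2, 0.0]), h_val=0.3, theta=np.pi / 6)
    print(c, d)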

Definition 5 (Lyapunov-like Function (CLF)).

Given the nonlinear control-affine system (8) and a goal set, a continuously differentiable function is a CLF for this system if

(15)
(16)
(17)

If there exists a control that satisfies (17), then the states of system (8) reach the goal set. Similar to the safety condition, the function defined in (9) satisfies conditions (15) and (16). So, for all states of interest, we impose the corresponding predicate on the output of the controller to satisfy condition (17). Here, a slack variable minimally relaxes this predicate whenever enforcing the goal-reaching constraint would contradict the safety condition. For further details on Lyapunov-like functions, CBFs, and their application in safe mobile robot navigation, readers are referred to [34, 35, 36].

Given the linear inequality constraints derived from the CBF and CLF conditions, we repair the last layer of the controller with 100 reference trajectories. As illustrated in Fig. 6 (b), the trajectories of system (8) generated by the repaired policy successfully avoid the unsafe region shown in violet. The repaired controller decreases the linear and angular velocities for states close to the unsafe region, while the repaired policy converges to the original policy as the states get farther from the unsafe region. In this experiment, we also record the optimal weight deviation of the repaired last layer.

V Conclusion and Future Work

In this paper, we provided a framework for minimal layer-wise neural network repair. For a repair, we adjust a trained neural network to satisfy a set of predicates that impose constraints on the output of the NN for a given set of inputs of interest. The experimental results demonstrated the success of our framework in repairing trained networks to satisfy a set of predicates while maintaining the performance of the original network.

Building on our theoretical and experimental results, one can improve, change, and provide guarantees on different aspects of the performance of a trained neural network. While we showed the capability of our framework in addressing safety, model improvement, and verification problems in different robot learning applications, the MIQP formulation of network repair is not easy to solve for high-dimensional robotic systems with large input spaces. Our future work aims to address this problem by relaxing the formulation or decreasing its number of integer variables. Additionally, we aim to investigate an efficient sampling mechanism for network repair, which can also help to decrease the number of integer variables. In our experiments, we also observed that repairing the last layer did not necessarily satisfy a given property and, in some cases, caused infeasibility. While we did not observe this issue when repairing the hidden layers, we aim to explore the cases in which infeasibility occurs. Finally, we plan to create a publicly available neural network repair Python package integrated with TensorFlow model architectures [37] in our future work.

References

  • [1] B. Riviere, W. Honig, Y. Yue, and S.-J. Chung, “GLAS: Global-to-local safe autonomy synthesis for multi-robot motion planning with end-to-end learning,” vol. 5, no. 3, pp. 4249–4256.
  • [2] S. Zhou, M. J. Phielipp, J. A. Sefair, S. I. Walker, and H. B. Amor, “Clone swarms: Learning to predict and control multi-robot systems by imitation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • [3] T. Fan, P. Long, W. Liu, and J. Pan, “Distributed multi-robot collision avoidance via deep reinforcement learning for navigation in complex scenarios,” vol. 39, no. 7, pp. 856–892.

  • [4] K. D. Julian, J. Lopez, J. S. Brush, M. P. Owen, and M. J. Kochenderfer, “Policy compression for aircraft collision avoidance systems,” in IEEE/AIAA 35th Digital Avionics Systems Conference (DASC).
  • [5] Y. Pan, C.-A. Cheng, K. Saigol, K. Lee, X. Yan, E. Theodorou, and B. Boots, “Agile autonomous driving using end-to-end deep imitation learning,” in Robotics: Science and Systems.
  • [6] S. Kuutti, R. Bowden, Y. Jin, P. Barber, and S. Fallah, “A survey of deep learning applications to autonomous vehicle control,” vol. 22, no. 2, pp. 712–733.

  • [7] M. Strickland, G. Fainekos, and H. B. Amor, “Deep predictive models for collision risk assessment in autonomous driving,” in IEEE International Conference on Robotics and Automation (ICRA), 2018.
  • [8] Y. Tian, K. Pei, S. Jana, and B. Ray, “Deeptest: Automated testing of deep-neural-network-driven autonomous cars,” in 40th International Conference on Software Engineering (ICSE), 2018.
  • [9] T. Dreossi, A. Donze, and S. A. Seshia, “Compositional falsification of cyber-physical systems with machine learning components,” vol. 63, pp. 1031–1053, 2019.

  • [10] C. E. Tuncali, G. Fainekos, D. Prokhorov, H. Ito, and J. Kapinski, “Requirements-driven test generation for autonomous vehicles with machine learning components,” IEEE Transactions on Intelligent Vehicles, vol. 5, pp. 265–280, 2020.
  • [11] X. Sun, H. Khedr, and Y. Shoukry, “Formal verification of neural network controlled autonomous systems,” in 22nd ACM International Conference on Hybrid Systems: Computation and Control, 2019.
  • [12] R. Ivanov, T. J. Carpenter, J. Weimer, R. Alur, G. J. Pappas, and I. Lee, “Verifying the safety of autonomous systems with neural network controllers,” vol. 20, no. 1.
  • [13] S. Dutta, S. Jha, S. Sankaranarayanan, and A. Tiwari, “Learning and verification of feedback control systems using feedforward neural networks,” in Analysis and Design of Hybrid Systems, 2018.
  • [14] V. Tjeng, K. Y. Xiao, and R. Tedrake, “Evaluating robustness of neural networks with mixed integer programming,” in 7th International Conference on Learning Representations (ICLR).
  • [15] S. Yaghoubi and G. Fainekos, “Worst-case satisfaction of stl specifications using feedforward neural network controllers: A lagrange multipliers approach,” ACM Transactions on Embedded Computing Systems, vol. 18, no. 5S, 2019.
  • [16] T. Dreossi, S. Jha, and S. A. Seshia, “Semantic adversarial deep learning,” arXiv:1804.07045v2, 2018.
  • [17] T. Dreossi, S. Ghosh, X. Yue, K. Keutzer, A. L. Sangiovanni-Vincentelli, and S. A. Seshia, “Counterexample-guided data augmentation,” in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI, 2018, pp. 2071–2078.
  • [18] B. Goldberger, G. Katz, Y. Adi, and J. Keshet, “Minimal modifications of deep neural networks using verification,” in LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, 2020.
  • [19] G. Dong, J. Sun, J. Wang, X. Wang, T. Dai, and X. Wang, “Towards repairing neural networks correctly.”
  • [20] U. S. Cruz, J. Ferlez, and Y. Shoukry, “Safe-by-repair: A convex optimization approach for repairing unsafe two-level lattice neural network controllers.”
  • [21] E. Balas, “Disjunctive programming,” Annals of discrete mathematics, vol. 5, pp. 3–51, 1979.
  • [22] C. Tsay, J. Kronqvist, A. Thebelt, and R. Misener, “Partition-based formulations for mixed-integer optimization of trained relu neural networks,” arXiv preprint arXiv:2102.04373, 2021.
  • [23] P. Belotti, L. Liberti, A. Lodi, G. Nannicini, A. Tramontani et al., “Disjunctive inequalities: applications and extensions,” Wiley Encyclopedia of Operations Research and Management Science, vol. 2, pp. 1441–1450, 2011.
  • [24] W. E. Hart, J.-P. Watson, and D. L. Woodruff, “Pyomo: modeling and solving mathematical programs in python,” Mathematical Programming Computation, vol. 3, no. 3, pp. 219–260, 2011.
  • [25] M. L. Bynum, G. A. Hackebeil, W. E. Hart, C. D. Laird, B. L. Nicholson, J. D. Siirola, J.-P. Watson, and D. L. Woodruff, Pyomo–optimization modeling in python, 3rd ed.   Springer Science & Business Media, 2021, vol. 67.
  • [26] Q. Chen, E. S. Johnson, J. D. Siirola, and I. E. Grossmann, “Pyomo.GDP: Disjunctive models in python,” in Computer Aided Chemical Engineering.   Elsevier, 2018, vol. 44, pp. 889–894.
  • [27] Gurobi Optimization, LLC, “Gurobi Optimizer Reference Manual,” 2021. [Online]. Available: https://www.gurobi.com
  • [28] A. Kazi and R. Bischoff, “From research to products: the kuka perspective on european research projects,” IEEE robotics & automation magazine, vol. 12, no. 3, pp. 78–84, 2005.
  • [29] V. Gupta, R. G. Chittawadigi, and S. K. Saha, “Roboanalyzer: robot visualization software for robot technicians,” in Proceedings of the Advances in Robotics, 2017, pp. 1–5.
  • [30] M. J. Kochenderfer and J. Chryssanthacopoulos, “Robust airborne collision avoidance through dynamic programming,” Massachusetts Institute of Technology, Lincoln Laboratory, Project Report ATC-371, vol. 130, 2011.
  • [31] K. D. Julian, J. Lopez, J. S. Brush, M. P. Owen, and M. J. Kochenderfer, “Policy compression for aircraft collision avoidance systems,” in 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC).   IEEE, 2016, pp. 1–10.
  • [32] G. Katz, D. A. Huang, D. Ibeling, K. Julian, C. Lazarus, R. Lim, P. Shah, S. Thakoor, H. Wu, A. Zeljić et al., “The marabou framework for verification and analysis of deep neural networks,” in International Conference on Computer Aided Verification.   Springer, 2019, pp. 443–452.
  • [33] S. Yaghoubi, G. Fainekos, and S. Sankaranarayanan, “Training neural network controllers using control barrier functions in the presence of disturbances,” in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC).   IEEE, 2020, pp. 1–6.
  • [34] A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada, “Control barrier functions: Theory and applications,” in 2019 18th European Control Conference (ECC).   IEEE, 2019, pp. 3420–3431.
  • [35] S. Yaghoubi, K. Majd, G. Fainekos, T. Yamaguchi, D. Prokhorov, and B. Hoxha, “Risk-bounded control using stochastic barrier functions,” IEEE Control Systems Letters, vol. 5, no. 5, pp. 1831–1836, 2020.
  • [36] K. Majd, S. Yaghoubi, T. Yamaguchi, B. Hoxha, D. Prokhorov, and G. Fainekos, “Safe navigation in human occupied environments using sampling and control barrier functions,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
  • [37] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/