Neural Lyapunov Control

05/01/2020 ∙ by Ya-Chien Chang, et al. ∙ University of California, San Diego

We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantees of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when no counterexample is found by the falsifier, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides end-to-end correctness guarantees, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We show experiments on how the new methods obtain high-quality solutions for challenging control problems.




1 Introduction

Learning-based methods hold the promise of solving hard nonlinear control problems in robotics. Most existing work focuses on learning control functions represented as neural networks through repeated interactions with an unknown environment in the framework of deep reinforcement learning, with notable success. However, there are still well-known issues that impede the immediate use of these methods in practical control applications, including sample complexity, interpretability, and safety Amodei et al. (2016). Our work investigates a different direction: can learning methods be valuable even in the most classical setting of nonlinear control design? We focus on the challenging problem of designing feedback controllers for stabilizing nonlinear dynamical systems with provable guarantees. This problem captures the core difficulty of underactuated robotics Tedrake (2019). We demonstrate that neural networks and deep learning can find provably stable controllers in a direct way, tackle the full nonlinearity of the systems, and significantly outperform existing methods based on linear or polynomial approximations, such as linear-quadratic regulators (LQR) Kwakernaak (1972) and sum-of-squares (SOS) with semidefinite programming (SDP) Parrilo (2000). The results show the promise of neural networks and deep learning in improving the solutions of many challenging problems in nonlinear control.

The prevalent way of stabilizing nonlinear dynamical systems is to linearize the system dynamics around an equilibrium and formulate LQR problems to minimize deviation from the equilibrium. LQR methods compute a linear feedback control policy, with a stability guarantee within a small neighborhood where the linear approximation is accurate. However, the dependence on linearization produces extremely conservative systems, which explains in part why agile robot locomotion is hard Tedrake (2019). To control nonlinear systems outside their linearizable regions, we need to rely on Lyapunov methods Haddad and Chellaboina (2008). Following the intuition that a dynamical system stabilizes when its energy decreases over time, Lyapunov methods construct a scalar field that forces stabilization. These fields are highly nonlinear, and the need for function approximations has long been recognized Haddad and Chellaboina (2008). Many existing approaches rely on polynomial approximations of the dynamics and the search for sum-of-squares polynomials as Lyapunov functions through semidefinite programming (SDP) Parrilo (2000). A rich theory has been developed around this approach, but in practice the polynomial approximations impose strong restrictions on the systems and on the Lyapunov landscape. Moreover, well-known numerical sensitivity issues in SDP Löfberg (2009) make it very hard to find solutions that fully satisfy the Lyapunov conditions. In contrast, we exploit the expressive power of neural networks, the convenience of gradient descent for learning, and the completeness of nonlinear constraint solving to provide a full guarantee of the Lyapunov conditions. We show that the combination of these techniques produces control designs that stabilize various nonlinear systems, with verified regions of attraction much larger than what can be obtained by existing control methods.

We propose an algorithmic framework for learning control functions and neural network Lyapunov functions for nonlinear systems without any local approximation of their dynamics. The framework consists of a learner and a falsifier. The learner uses stochastic gradient descent to find parameters of both a control function and a neural Lyapunov function by iteratively minimizing the Lyapunov risk, which measures the violation of the Lyapunov conditions. The falsifier takes the control and Lyapunov functions from the learner and searches for counterexample state vectors that violate the Lyapunov conditions. The counterexamples are added to the training set for the next iteration of learning, generating an effective curriculum. The falsifier uses delta-complete constraint solving Gao et al. (2012), which ensures that when no violation is found, the Lyapunov conditions provably hold for all states in the verified domain. In this framework, the learner and falsifier are given tasks that are difficult in different ways and cannot be achieved by the other side. Moreover, we show that the framework provides the flexibility to fine-tune the control performance by directly enlarging the region of attraction on demand, through additional regulator terms in the learning cost.

We experimented with several challenging nonlinear control problems in robotics, such as drone landing, wheeled vehicle path following, and n-link planar robot balancing Liu and Xin (2015). We are able to find new control policies that produce certified regions of attraction significantly larger than what could be established previously. We provide a detailed analysis of the performance comparison between the proposed methods and the LQR/SOS/SDP methods.

Related Work. The recent work of Richards et al. (2018) has also proposed and shown the effectiveness of using neural networks to learn safety certificates in a Lyapunov framework, but our goals and approaches are different. Richards et al. focus on discrete-time polynomial systems and use neural networks to learn the region of attraction of a given controller. The Lyapunov conditions are validated in relaxed forms through sampling, and a special design of the neural architecture is required to compensate for the lack of complete checking over all states. In comparison, we focus on learning the control and the Lyapunov function together, with provable guarantees of stability in larger regions of attraction. Our approach directly handles non-polynomial continuous dynamical systems, does not assume the control function is given other than as an initialization, and uses generic feedforward network representations without manual design. Our approach successfully works on many more nonlinear systems, and finds new control functions that enlarge the regions of attraction obtainable from standard control methods. Related learning-based approaches for finding Lyapunov functions include Berkenkamp et al. (2016, 2017); Chow et al. (2018); Rasmussen and Williams (2006). There is also strong evidence that linear control functions suffice for solving highly nonlinear control problems through reinforcement learning Mania et al. (2018), suggesting a convergence of the different learning approaches. In the control and robotics community, similar learner-falsifier frameworks have been proposed by Ravanbakhsh and Sankaranarayanan (2019); Kapinski et al. (2014) without using neural network representations; the common assumption there is that the Lyapunov functions are high-degree polynomials. In those methods, an explicit control function and Lyapunov function cannot be learned together because of the bilinear optimization problems they generate. Our approach significantly simplifies the algorithms in this direction and has worked reliably on much harder control problems than existing methods. Several theoretical results on asymptotic Lyapunov stability Ahmadi et al. (2011); Ahmadi (2012); Ahmadi and Parrilo (2013); Ahmadi and Jungers (2015) show that some very simple dynamical systems do not admit a polynomial Lyapunov function of any degree, despite being globally asymptotically stable. Such results further motivate the use of neural networks as a more suitable function approximator. A large body of work in control uses SOS representations and SDP optimization in the search for Lyapunov functions Henrion and Garulli (2005); Parrilo (2000); Chesi and Henrion (2009); Jarvis-Wloszek et al. (2003); Majumdar and Tedrake (2017). However, scalability and numerical sensitivity have been the main challenges in practice: as is well known, the number of semidefinite programs arising from SOS decomposition grows quickly even for low-degree polynomials Parrilo (2000).

2 Preliminaries

We consider the problem of designing control functions to stabilize a dynamical system at an equilibrium point. We make extensive use of the following results from Lyapunov stability theory.

Definition 1 (Controlled Dynamical Systems).

An $n$-dimensional controlled dynamical system is

$$\dot{x}(t) = f(x(t), u(x(t))), \qquad x(0) = x_0,$$

where $f$ is a Lipschitz-continuous vector field, and $\mathcal{D} \subseteq \mathbb{R}^n$ is an open set with $0 \in \mathcal{D}$ that defines the state space of the system. Each $x(t) \in \mathcal{D}$ is a state vector. The feedback control is defined by a continuous function $u : \mathcal{D} \to \mathbb{R}^m$, used as a component in the full dynamics $f$.

Definition 2 (Asymptotic Stability).

We say that the system of Definition 1 is stable at the origin if, for any $\varepsilon > 0$, there exists $\delta(\varepsilon) > 0$ such that if $\|x(0)\| < \delta$ then $\|x(t)\| < \varepsilon$ for all $t \geq 0$. The system is asymptotically stable at the origin if it is stable and also $\lim_{t \to \infty} \|x(t)\| = 0$ for all $\|x(0)\| < \delta$.

Definition 3 (Lie Derivatives).

The Lie derivative of a continuously differentiable scalar function $V : \mathcal{D} \to \mathbb{R}$ over a vector field $f$ is defined as

$$L_f V(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i} f_i(x).$$

It measures the rate of change of $V$ along the direction of the system dynamics.
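As a quick illustration (our own toy example, not one from the paper), take $V(x) = x_1^2 + x_2^2$ and the damped oscillator field $f(x) = (x_2, -x_1 - x_2)$; then $L_f V(x) = 2x_1 x_2 + 2x_2(-x_1 - x_2) = -2x_2^2 \le 0$, which a few lines of numpy confirm:

```python
import numpy as np

def f(x):
    # Damped oscillator vector field (illustrative, not from the paper).
    return np.array([x[1], -x[0] - x[1]])

def V(x):
    return x[0]**2 + x[1]**2

def grad_V(x):
    return np.array([2 * x[0], 2 * x[1]])

def lie_derivative(x):
    # L_f V(x) = sum_i (dV/dx_i) * f_i(x)
    return grad_V(x).dot(f(x))

x = np.array([0.7, -1.3])
# Closed form for this pair: L_f V(x) = -2 * x2^2
assert np.isclose(lie_derivative(x), -2 * x[1]**2)
```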

Proposition 1 (Lyapunov Functions for Asymptotic Stability).

Consider a controlled system of Definition 1 with equilibrium at the origin, i.e., $f(0, u(0)) = 0$. Suppose there exists a continuously differentiable function $V : \mathcal{D} \to \mathbb{R}$ that satisfies the following conditions:

$$V(0) = 0, \qquad V(x) > 0 \;\; \forall x \in \mathcal{D} \setminus \{0\}, \qquad L_f V(x) < 0 \;\; \forall x \in \mathcal{D} \setminus \{0\}.$$

Then the system is asymptotically stable at the origin, and $V$ is called a Lyapunov function.

The linear-quadratic regulator (LQR) is a widely adopted optimal control strategy. LQR controllers are guaranteed to work within a small neighborhood around the stationary point where the dynamics can be approximated as a linear system. A detailed description can be found in Kwakernaak (1972).
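As a sketch of this kind of LQR computation (the double-integrator system and the Hamiltonian-subspace method below are illustrative choices on our part, not taken from the paper; `scipy.linalg.solve_continuous_are` does the same job):

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain via the stable invariant subspace of the
    Hamiltonian matrix, a standard Riccati-solving method."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    eigvals, eigvecs = np.linalg.eig(H)
    stable = eigvecs[:, eigvals.real < 0]    # basis of the stable subspace
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))      # solves the Riccati equation
    return Rinv @ B.T @ P                    # optimal gain K, with u = -K x

# Double integrator: x1' = x2, x2' = u (illustrative system).
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))
# Known closed form for this system: K = [1, sqrt(3)].
assert np.allclose(K, [[1.0, np.sqrt(3)]])
# Closed loop A - B K is Hurwitz (all eigenvalues in the left half-plane).
assert all(np.linalg.eigvals(A - B @ K).real < 0)
```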

3 Learning to Stabilize with Neural Lyapunov Functions

We now describe how to learn both a control function and a neural Lyapunov function together, so that the Lyapunov conditions can be rigorously verified to ensure stability of the system. We provide pseudocode of the algorithm in Algorithm 1.

3.1 Control and Lyapunov Function Learning

We design the hypothesis class of candidate Lyapunov functions to be multilayered feedforward networks with $\tanh$ activation functions. It is important to note that, unlike in most other deep learning applications, we cannot use non-smooth networks, such as those with ReLU activations. This is because we need to analytically determine whether the Lyapunov conditions hold for these neural networks, which requires the existence of their Lie derivatives.
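The point about smoothness can be made concrete: for a $\tanh$ network the gradient, and hence the Lie derivative, exists in closed form everywhere. A minimal sketch with an assumed two-layer shape (the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny tanh feedforward Lyapunov candidate V(x) = w2 . tanh(W1 x + b1).
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=6)
w2 = rng.normal(size=6)

def V(x):
    return w2 @ np.tanh(W1 @ x + b1)

def grad_V(x):
    # Smoothness of tanh gives an exact, everywhere-defined gradient:
    # dV/dx = W1^T diag(1 - tanh^2(W1 x + b1)) w2
    h = np.tanh(W1 @ x + b1)
    return W1.T @ ((1 - h**2) * w2)

# Sanity check against central finite differences.
x = np.array([0.3, -0.8])
eps = 1e-6
num = np.array([(V(x + eps * e) - V(x - eps * e)) / (2 * eps)
                for e in np.eye(2)])
assert np.allclose(grad_V(x), num, atol=1e-6)
```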

For a neural network Lyapunov function, the input is any state vector of the system in Definition 1 and the output is a scalar value. We write $\theta$ to denote the parameter vector for a Lyapunov function candidate $V_\theta$. For notational convenience, we write $u$ to denote both the control function and the parameters that define it. The learning process updates both the $\theta$ and $u$ parameters to improve the likelihood of satisfying the Lyapunov conditions, which we formulate as a cost function named the Lyapunov risk. The Lyapunov risk measures the degree of violation of the Lyapunov conditions of Proposition 1: first, the value of $V_\theta(x)$ should be positive; second, the value of the Lie derivative $L_f V_\theta(x)$ should be negative; third, the value of $V_\theta(0)$ should be zero. Conceptually, the overall Lyapunov control design problem is about minimizing a minimax cost of the form

$$\min_{\theta, u} \; \max_{x \in \mathcal{D}} \; \ell(\theta, u; x),$$

where $\ell$ penalizes the violation of the Lyapunov conditions at $x$.

The difficulty in control design problems is that the violation of the Lyapunov conditions cannot just be estimated, but needs to be fully guaranteed over all states in $\mathcal{D}$. Thus, we need to rely on global search with a complete guarantee for the inner maximization part, which we delegate to the falsifier explained in Section 3.2. For the learning step, we define the following Lyapunov risk function.

Definition 4 (Lyapunov Risk).

Consider a candidate Lyapunov function $V_\theta$ for a controlled dynamical system from Definition 1. The Lyapunov risk is defined by the following function:

$$L_\rho(\theta, u) = \mathbb{E}_{x \sim \rho} \Big[ \max\big(0, -V_\theta(x)\big) + \max\big(0, L_f V_\theta(x)\big) + V_\theta^2(0) \Big],$$

where $x$ is a random variable over the state space of the system with distribution $\rho$. In practice, we work with the Monte Carlo estimate, named the empirical Lyapunov risk, obtained by drawing $N$ samples:

$$L_N(\theta, u) = \frac{1}{N} \sum_{i=1}^{N} \Big( \max\big(0, -V_\theta(x_i)\big) + \max\big(0, L_f V_\theta(x_i)\big) \Big) + V_\theta^2(0),$$

where $x_1, \dots, x_N$ are state vectors sampled according to $\rho$.

It is clear that the empirical Lyapunov risk $L_N(\theta, u)$ is an unbiased estimator of the Lyapunov risk $L_\rho(\theta, u)$.

Note that $L_\rho(\theta, u)$ is positive semidefinite, and any $(\theta, u)$ that corresponds to a true Lyapunov function satisfies $L_\rho(\theta, u) = 0$. Thus, Lyapunov functions define global minimizers of the Lyapunov risk function.
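A minimal numpy sketch of the empirical Lyapunov risk of Definition 4, using an assumed small $\tanh$ candidate and toy closed-loop dynamics (in practice one would minimize this quantity with stochastic gradient descent in an autodiff framework):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=6)
w2 = rng.normal(size=6)

def V(x):                      # tanh Lyapunov candidate
    return w2 @ np.tanh(W1 @ x + b1)

def grad_V(x):
    h = np.tanh(W1 @ x + b1)
    return W1.T @ ((1 - h**2) * w2)

def f(x):                      # toy closed-loop dynamics (illustrative)
    return np.array([x[1], -x[0] - x[1]])

def empirical_lyapunov_risk(samples):
    # (1/N) sum_i [ max(0, -V(x_i)) + max(0, L_f V(x_i)) ] + V(0)^2
    terms = [max(0.0, -V(x)) + max(0.0, grad_V(x) @ f(x)) for x in samples]
    return float(np.mean(terms) + V(np.zeros(2))**2)

samples = rng.uniform(-1, 1, size=(500, 2))
risk = empirical_lyapunov_risk(samples)
assert risk >= 0.0             # the risk is positive semidefinite
```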

Proposition 2.

Let $V_\theta$ be a Lyapunov function for the dynamical system controlled by $u$, where $u$ is the control parameters. Then $(\theta, u)$ is a global minimizer for both $L_\rho(\theta, u)$ and $L_N(\theta, u)$.

Note that both $f$ and $V_\theta$ are highly nonlinear (even though $u$ is almost always linear in practice), and thus the Lyapunov risk generates a highly complex landscape. Surprisingly, multilayer feedforward networks and stochastic gradient descent quickly produce generalizable Lyapunov functions with nice geometric properties, as we report in detail in the experiments. In Figure 1 (b), we show an example of how the Lyapunov risk is minimized over iterations on the inverted pendulum example.

Initialization and improvement of control performance over LQR.

Because of the local nature of stochastic gradient descent, it is hard to learn good control functions through random initialization of the control parameters. Instead, the parameters of the control function are initialized to the LQR solution, obtained for the linearized dynamics in a small neighborhood around the stationary point. On the other hand, the initialization of the neural network Lyapunov function can be completely random. We observe that the final learned controller often delivers significantly better control solutions than the initialization from LQR. Figure 1(a) shows how the learned control reduces oscillation of the system behavior in the 2-link planar robot balancing example and achieves more stable control.

Figure 1:

(a) Comparison between LQR and deep-learned controllers for 2-link planar robot balancing. (b) The Lyapunov risk decreases quickly over iterations. (c) Counterexamples returned by the falsifier over several epochs, which quickly guide the learner to focus on special regions of the space.

3.2 Falsification and Counterexample Generation

For each control and Lyapunov function pair that the learner obtains, the falsifier’s task is to find states that violate the Lyapunov conditions in Proposition 1. We formulate the negations of the Lyapunov conditions as a nonlinear constraint solving problem over real numbers. These falsification constraints are defined as follows.

Definition 5 (Lyapunov Falsification Constraints).

Let $V_\theta$ be a candidate Lyapunov function for a dynamical system defined by $f$ with state space $\mathcal{D}$. Let $\varepsilon > 0$ be a small constant parameter that bounds the tolerable numerical error. The Lyapunov falsification constraint is the following first-order logic formula over the real numbers:

$$\Phi_\varepsilon(x) \;:=\; \big( \|x\| \geq \varepsilon \big) \,\wedge\, \big( V_\theta(x) \leq 0 \,\vee\, L_f V_\theta(x) \geq 0 \big),$$

where $x$ is bounded in the state space of the system. The numerical error parameter $\varepsilon$ is explicitly introduced to control numerical sensitivity near the origin; it is orders of magnitude smaller than the range of the state variables.

Remark 1.

The numerical error parameter $\varepsilon$ allows us to avoid pathological problems in numerical algorithms, such as arithmetic underflow. States inside this tiny ball correspond to disturbances that are physically insignificant. This parameter is important for eliminating from our framework the numerical sensitivity issues commonly observed in SOS/SDP methods. Also note that the $\varepsilon$-ball does not affect the properties of the Lyapunov level sets and regions of attraction outside it (i.e., for $\|x\| \geq \varepsilon$).

The falsifier computes solutions of the falsification constraint $\Phi_\varepsilon(x)$. Solving the constraints requires global minimization of highly nonconvex functions (involving Lie derivatives of the neural network Lyapunov function), which is a computationally expensive (NP-hard) task. We rely on recent progress in nonlinear constraint solving in SMT solvers such as dReal Gao et al. (2012), which has been used for similar control design problems Kapinski et al. (2014) that do not involve neural networks.

Example 1.

Consider a candidate Lyapunov function $V_\theta$ given by a $\tanh$ network and two-dimensional nonlinear dynamics for $\dot{x}_1$ and $\dot{x}_2$. The falsification constraint of Definition 5 is then a nonlinear, non-polynomial disjunctive constraint system. The actual examples used in our experiments all use larger two-layer networks and much more complex dynamics.
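For intuition only, a naive sampling procedure can probe the same constraint; unlike the delta-complete solver dReal used in our framework, it offers no guarantee when it finds nothing. The candidate below is our own deliberately invalid example (an indefinite quadratic), so a violation is easy to find:

```python
import numpy as np

def falsify_by_sampling(V, lie_V, radius, eps, n_samples=100_000, seed=0):
    """Search for a state with ||x|| >= eps and (V(x) <= 0 or L_f V(x) >= 0).
    Random search only: finding nothing here proves nothing, unlike dReal."""
    rng = np.random.default_rng(seed)
    for x in rng.uniform(-radius, radius, size=(n_samples, 2)):
        if np.linalg.norm(x) >= eps and (V(x) <= 0 or lie_V(x) >= 0):
            return x                      # counterexample found
    return None

# Indefinite candidate V(x) = x1^2 - x2^2 with f(x) = (x2, -x1 - x2):
V = lambda x: x[0]**2 - x[1]**2
lie_V = lambda x: 4 * x[0] * x[1] + 2 * x[1]**2   # grad V . f
cex = falsify_by_sampling(V, lie_V, radius=1.0, eps=0.01)
assert cex is not None and (V(cex) <= 0 or lie_V(cex) >= 0)
```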

To completely certify the Lyapunov conditions, the constraint solving step for $\Phi_\varepsilon(x)$ can never fail to report solutions if any exist. This requirement is rigorously established for the algorithms in SMT solvers such as dReal Gao et al. (2013), as a property called delta-completeness Gao et al. (2012).

Definition 6 (Delta-Complete Algorithms).

Let $S$ be a class of quantifier-free first-order constraints, and let $\delta > 0$ be a fixed constant. We say an algorithm $A$ is $\delta$-complete for $S$ if, for any $\varphi \in S$, $A$ always returns one of the following answers correctly: $\varphi$ does not have a solution (unsatisfiable), or there is a solution that satisfies $\varphi^\delta$. Here, $\varphi^\delta$ is defined as a small syntactic variation of the original constraint (precise definitions are in Gao et al. (2012)).

In other words, if a delta-complete algorithm concludes that a formula is unsatisfiable, then it is guaranteed not to have any solution. In our context, this is exactly what we need to ensure that the Lyapunov conditions hold over all state vectors. If $\Phi_\varepsilon(x)$ is determined to be $\delta$-satisfiable, we obtain counterexamples that are added to the training set for the learner. Note that the counterexamples are simply state vectors without labels, and their Lyapunov risk is determined by the learner, not the falsifier. Thus, although spurious counterexamples are possible due to the $\delta$ error, they are used as extra samples and do not harm the correctness of the end result. In all, when the delta-complete algorithms in dReal return that the falsification constraints are unsatisfiable, we conclude that the Lyapunov conditions are satisfied by the candidate Lyapunov and control functions. Figure 1(c) shows a sequence of counterexamples found by the falsifier to improve the learned results.
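The overall learner-falsifier interaction can be caricatured in a few lines. The toy below is our own illustration, not the paper's algorithm: a 1-D system $\dot{x} = -x$ with candidate $V(x) = p\,x^2$, a random-search stand-in for the falsifier, and a subgradient learner. The loop terminates exactly when the falsifier finds no counterexample:

```python
import numpy as np

# Toy CEGIS loop: learn V(x) = p * x^2 for xdot = -x. On |x| >= eps, the
# conditions V(x) > 0 and L_f V(x) = -2 p x^2 < 0 hold iff p > 0.

def violated(p, x, eps=0.01):
    return abs(x) >= eps and (p * x**2 <= 0 or -2 * p * x**2 >= 0)

def falsifier(p, rng, n=10_000):
    # Random-search stand-in; the paper uses the delta-complete solver dReal.
    for x in rng.uniform(-1, 1, size=n):
        if violated(p, x):
            return x
    return None

rng = np.random.default_rng(0)
p = -1.0                                # deliberately bad initialization
S = list(rng.uniform(-1, 1, size=20))   # initial training samples
for _ in range(1000):
    cex = falsifier(p, rng)
    if cex is None:
        break                           # no violation found: conditions hold
    S.append(cex)                       # counterexample joins the training set
    # Learner: subgradient step on the empirical Lyapunov risk over S.
    p -= 0.1 * sum(-3 * x**2 for x in S if violated(p, x)) / len(S)

assert p > 0 and falsifier(p, rng) is None
```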

Remark 2.

When solving $\Phi_\varepsilon(x)$ with $\delta$-complete constraint solving algorithms, we use a small $\delta$ to reduce the number of spurious counterexamples. By delta-completeness, the choice of $\delta$ does not affect the guarantee for the validation of the Lyapunov conditions.

3.3 Tuning Region of Attraction

An important feature of the proposed learning framework is that we can adjust the cost functions to learn control and Lyapunov functions favoring various additional properties. In fact, the most practically important performance metric for a stabilizing controller is its region of attraction (ROA). An ROA defines a forward invariant set that is guaranteed to contain all trajectories starting inside it, and can thus conclusively prove safety properties. Note that the Lyapunov conditions themselves do not directly ensure safety, because the system can deviate arbitrarily far before coming back to the stable equilibrium. Formally, the ROA of an asymptotically stable system is defined as follows.

Definition 7 (Region of Attraction).

Let $f$ define a system asymptotically stable at the origin, with Lyapunov function $V$ on domain $\mathcal{D}$. A region of attraction is a subset $R \subseteq \mathcal{D}$ that contains the origin and guarantees that the system never leaves $R$. Any level set of $V$ completely contained in $\mathcal{D}$ defines an ROA: that is, for $c > 0$, if $R_c = \{x : V(x) \leq c\} \subseteq \mathcal{D}$, then $R_c$ is an ROA for the system.
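For a quadratic Lyapunov function this level-set construction has a closed form that makes a compact sanity check (the matrix and radius below are arbitrary illustrations): the largest $c$ with $\{x : x^\top P x \le c\}$ inside the ball $\|x\| \le r$ is $c = r^2 \lambda_{\min}(P)$.

```python
import numpy as np

# Largest level set of V(x) = x^T P x (P positive definite) contained in
# the ball ||x|| <= r. For neural Lyapunov functions the level value is
# instead found numerically / by constraint solving.
P = np.array([[3.0, 1.0], [1.0, 2.0]])
r = 1.0
c = r**2 * np.linalg.eigvalsh(P).min()

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100_000, 2))
inside_level = np.einsum('ij,jk,ik->i', X, P, X) <= c
# Every sampled point of the level set lies inside the ball: a valid ROA shape.
assert np.all(np.linalg.norm(X[inside_level], axis=1) <= r + 1e-9)
```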

To maximize the ROA produced by a pair of Lyapunov and control functions, we add a cost term to the Lyapunov risk of Definition 4 that regulates how quickly the Lyapunov function value increases with respect to the radius of the level sets, weighted by a tunable parameter. We observe that this regulator can have a major effect on the performance of the learned control functions. Figure 2 illustrates such an example, showing how different control functions are obtained by regulating the Lyapunov risk to achieve a larger ROA.

Figure 2: (a) Lyapunov function found by the initial LQR controller. (b) Lyapunov function found by learning without tuning the ROA. (c) Lyapunov function found by learning after adding the ROA tuning term. (d) Comparison of ROA for the different Lyapunov functions.
1: function Learning(S, f_u)
2:     Set learning rate α, input dimension n, output dimension 1
3:     Initialize the feedback controller u to the LQR solution
4:     repeat
5:         V_θ(x), u(x) ← forward pass of the neural network on the samples in S
6:         Compute the Lyapunov risk L_N(θ, u)
7:         θ ← θ − α∇_θ L_N(θ, u); u ← u − α∇_u L_N(θ, u)    ▷ update weights using SGD
8:     until convergence
9:     return V_θ, u
10: end function
11: function Falsification(f, u, V_θ, ε, δ)
12:     Encode the conditions of Definition 5
13:     Use an SMT solver with precision δ to verify the conditions
14:     return satisfiability (with counterexamples CE if δ-satisfiable)
15: end function
16: function Main()
17:     Input: dynamical system f, LQR parameters, radius r, precision δ, and an initial set S of randomly sampled states in D
18:     while Satisfiable do
19:         Add the counterexamples CE to S
20:         V_θ, u ← Learning(S, f_u)
21:         CE ← Falsification(f, u, V_θ, ε, δ)
22:     end while
23: end function
Algorithm 1 Neural Lyapunov Control

4 Experiments

We demonstrate that the proposed methods find provably stable control and Lyapunov functions on various nonlinear robot control problems. In all the examples, we use a fixed learning rate for the learner and fixed ε and δ values for the falsifier, and re-verify the results with a smaller δ, reported in Table 1. We emphasize that the choices of these parameters do not affect the stability guarantees on the final design of the control and Lyapunov functions. We show that the region of attraction is enlarged by 300% to 600% compared to the LQR results in these examples. Full details of the results and system dynamics are provided in the Appendix. Note that for the Caltech ducted fan and n-link balancing examples, we numerically relaxed the conditions slightly once the learning had converged, so that the SMT solver dReal does not run into numerical issues. More details on the effect of this relaxation can be found on the paper website Chang et al. (2019).

| Benchmarks | Learning time (s) | Falsification time (s) | # samples | # iterations | δ |
| --- | --- | --- | --- | --- | --- |
| Inverted Pendulum | 25.5 | 0.6 | 500 | 430 | 0.04 |
| Path Following | 36.3 | 0.2 | 500 | 610 | 0.01 |
| Caltech Ducted Fan | 1455.16 | 50.84 | 1000 | 3000 | 0.01 |
| 2-Link Balancing | 6000 | 458.27 | 1000 | 4000 | 0.01 |

Table 1: Runtime statistics of the full procedures on four nonlinear control examples.

Inverted pendulum.

The inverted pendulum is a standard nonlinear control problem for testing different control methods. The system has two state variables, the angular position $\theta$ and angular velocity $\dot\theta$, and one control input $u$. Our learning procedure finds a neural Lyapunov function that is proved to be valid within its verified domain. In contrast, the ROA found by SOS/SDP techniques is a much smaller ellipse, as is the ellipse obtained using LQR control on the linearized dynamics. We observe that among all the examples in our experiments, this is the only one where the SOS Lyapunov function passed the complete check by the constraint solver, so that we can compare to it. The Lyapunov function obtained by LQR gives a larger ROA than the SOS one if we ignore the linearization error. The different regions of attraction are shown in Figure 3. These values are consistent with the approximate maximum region of attraction reported in Richards et al. (2018). In particular, Figure 3 (c) shows that the SOS function does not define a large enough ROA, as many trajectories escape its region.
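A closed-loop simulation sketch for this system (the pendulum parameters and the linear gains below are hypothetical stand-ins, not the controller learned in the paper):

```python
import numpy as np

# Inverted pendulum with illustrative parameters and feedback gains.
m, l, g = 1.0, 1.0, 9.81

def step(x, dt=0.001):
    theta, omega = x
    u = -20.0 * theta - 6.0 * omega           # hypothetical linear feedback
    # Angular acceleration: gravity term plus control torque.
    alpha = (g / l) * np.sin(theta) + u / (m * l**2)
    return np.array([theta + dt * omega, omega + dt * alpha])

x = np.array([1.0, 0.0])                      # start 1 rad from upright
for _ in range(10_000):                       # 10 seconds of Euler steps
    x = step(x)
assert np.linalg.norm(x) < 1e-2               # converged near the equilibrium
```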

Figure 3: Results of Lyapunov functions for the inverted pendulum. (a) Lie derivative of the learned Lyapunov function over the valid region. Its value is negative over the valid region, satisfying the Lyapunov conditions. (b) ROA estimated by different Lyapunov functions. Our method enlarges the ROA from LQR threefold. (c) Validation of ROAs. Stars represent initial states. Trajectories starting near the border of the ROA defined by the learned neural Lyapunov function are safely bounded within the green region. In contrast, many trajectories (red) starting inside the SOS region can escape, and thus the region fails to satisfy the ROA properties.

Caltech ducted fan in hover mode. The system describes the motion of a landing aircraft in hover mode under two forces. The state variables $x$, $y$, $\theta$ denote the position and orientation of the centre of the fan; together with their time derivatives there are six state variables. The dynamics, the neural Lyapunov function with two layers of $\tanh$ activation functions, and the control policy are given in the Appendix. In Figure 4, we show that the ROA is significantly larger than what can be obtained from LQR.

Figure 4: (a) Comparison of ROAs for Caltech ducted fan. (b) Comparison of ROAs for path following. (c) Schematic diagram of wheeled vehicle to show the nonlinear dynamics.

Wheeled vehicle path following.

We consider path tracking control using a kinematic bicycle model (see Figure 4). We take the angle error and the distance error to the target path as state variables. Assuming the target path is a unit circle, we obtain a Lyapunov function valid within its verified domain.

N-Link Planar Robot Balancing.

The $n$-link pendulum system has $n$ control inputs and $2n$ state variables, representing the link angles and angular velocities. Each link has mass $m$ and length $l$, and the moments of inertia are computed from the link pivots. We find a neural Lyapunov function for the 2-link pendulum system within its verified domain. In Figure 5, we show the shape of the neural Lyapunov function on two of the dimensions, and the ROA that the control design achieves. We also provide a video of the control on the 3-link model.

Figure 5: Results of n-link planar robot balancing. (a) Schematic diagram. (b) Learned Lyapunov function. (c) Lie derivative of Lyapunov function. (d) Comparison of the region of attraction.

5 Conclusion

We proposed new methods to learn control policies and neural network Lyapunov functions for highly nonlinear systems with provable guarantees of stability. The approach significantly simplifies the process of nonlinear control design, provides end-to-end provable correctness guarantees, and can obtain much larger regions of attraction than existing control methods. We showed experiments on problems central to nonlinear control, on which the proposed methods demonstrate a clear advantage over existing methods. We envision that neural networks and deep learning will lead to better solutions to core problems in robot control design.


  • [1] A. A. Ahmadi and R. M. Jungers (2015) Lower bounds on complexity of lyapunov functions for switched linear systems. CoRR abs/1504.03761. Cited by: §1.
  • [2] A. A. Ahmadi, M. Krstic, and P. A. Parrilo (2011) A globally asymptotically stable polynomial vector field with no polynomial lyapunov function. In 2011 50th IEEE Conference on Decision and Control and European Control Conference, Cited by: §1.
  • [3] A. A. Ahmadi and P. A. Parrilo (2013) Stability of polynomial differential equations: complexity and converse lyapunov questions. CoRR abs/1308.6833. External Links: 1308.6833 Cited by: §1.
  • [4] A. A. Ahmadi (2012) On the difficulty of deciding asymptotic stability of cubic homogeneous vector fields. In American Control Conference, ACC 2012, Montreal, QC, Canada, June 27-29, 2012, pp. 3334–3339. Cited by: §1.
  • [5] D. Amodei, C. Olah, J. Steinhardt, P. F. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in AI safety. CoRR abs/1606.06565. Cited by: §1.
  • [6] F. Berkenkamp, R. Moriconi, A. P. Schoellig, and A. Krause (2016-12) Safe learning of regions of attraction for uncertain, nonlinear systems with gaussian processes. In 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 4661–4666. Cited by: §1.
  • [7] F. Berkenkamp, M. Turchetta, A. Schoellig, and A. Krause (2017) Safe model-based reinforcement learning with stability guarantees. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 908–918. Cited by: §1.
  • [8] Y. Chang, N. Roohi, and S. Gao (2019) Neural Lyapunov control (project website). Cited by: §4.
  • [9] G. Chesi and D. Henrion (2009-05) Guest editorial: special issue on positive polynomials in control. IEEE Transactions on Automatic Control 54 (5), pp. 935–936. Cited by: §1.
  • [10] Y. Chow, O. Nachum, E. Duenez-Guzman, and M. Ghavamzadeh (2018) A lyapunov-based approach to safe reinforcement learning. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 8092–8101. Cited by: §1.
  • [11] S. Gao, J. Avigad, and E. M. Clarke (2012) Delta-Complete decision procedures for satisfiability over the reals. In Automated Reasoning - 6th International Joint Conference, IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings, pp. 286–300. External Links: Link, Document Cited by: §1, §3.2, §3.2, Definition 6.
  • [12] S. Gao, S. Kong, and E. M. Clarke (2013) DReal: an SMT solver for nonlinear theories over the reals. In Automated Deduction - CADE-24 - 24th International Conference on Automated Deduction, Lake Placid, NY, USA, June 9-14, 2013. Proceedings, pp. 208–214. Cited by: §3.2.
  • [13] W. Haddad and V. Chellaboina (2008-01) Nonlinear dynamical systems and control: a lyapunov-based approach. Cited by: §1.
  • [14] D. Henrion and A. Garulli (2005) Positive polynomials in control. Lecture Notes in Control and Information Sciences, Vol. 312, Springer Berlin Heidelberg. Cited by: §1.
  • [15] Z. Jarvis-Wloszek, R. Feeley, Weehong Tan, Kunpeng Sun, and A. Packard (2003-12) Some controls applications of sum of squares programming. In 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475), Vol. 5, pp. 4676–4681 Vol.5. Cited by: §1.
  • [16] J. Kapinski, J. V. Deshmukh, S. Sankaranarayanan, and N. Arechiga (2014) Simulation-guided lyapunov analysis for hybrid dynamical systems. In Proceedings of the 17th International Conference on Hybrid Systems: Computation and Control, HSCC ’14, pp. 133–142. External Links: ISBN 978-1-4503-2732-9 Cited by: §1, §3.2.
  • [17] H. Kwakernaak (1972) Linear optimal control systems. John Wiley & Sons, Inc., New York, NY, USA. External Links: ISBN 0471511102 Cited by: §1, §2.
  • [18] Y. Liu and X. Xin (2015-12) Controllability and observability of n-link planar robot with a single actuator having different actuator-sensor configurations. IEEE Transactions on Automatic Control 61, pp. 1–1. External Links: Document Cited by: §1.
  • [19] J. Löfberg (2009) Pre- and post-processing sum-of-squares programs in practice. IEEE Transactions on Automatic Control 54 (5), pp. 1007–1011. Cited by: §1.
  • [20] A. Majumdar and R. Tedrake (2017) Funnel libraries for real-time robust feedback motion planning. The International Journal of Robotics Research 36 (8), pp. 947–982. Cited by: §1.
  • [21] H. Mania, A. Guy, and B. Recht (2018) Simple random search of static linear policies is competitive for reinforcement learning. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 1805–1814. Cited by: §1.
  • [22] P. A. Parrilo (2000) Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. Ph.D. Thesis, California Institute of Technology. Cited by: §1, §1, §1.
  • [23] C.E. Rasmussen and C.K.I. Williams (2006) Gaussian processes for machine learning. Adaptive computation and machine learning series, University Press Group Limited. Cited by: §1.
  • [24] H. Ravanbakhsh and S. Sankaranarayanan (2019) Learning control lyapunov functions from counterexamples and demonstrations. Autonomous Robots 43 (2), pp. 275–307. Cited by: §1.
  • [25] S. M. Richards, F. Berkenkamp, and A. Krause (2018-29–31 Oct) The lyapunov neural network: adaptive stability certification for safe learning of dynamical systems. In Proceedings of The 2nd Conference on Robot Learning, Proceedings of Machine Learning Research, Vol. 87, , pp. 466–476. Cited by: §1, §4.
  • [26] R. Tedrake (2019) Underactuated robotics: algorithms for walking, running, swimming, flying, and manipulation (course notes for mit 6.832).. Cited by: §1, §1.