0.1 Model Predictive Control and Verification Challenges
When one wants to control the behavior of a physical device, one can rely on a feedback controller, executed on a computer, to perform the necessary adjustments to the device to maintain its state or reach a given target. Classical control theory expresses the device behavior as a linear ordinary differential equation (ODE) and defines the feedback controller as a linear system, e.g. a PID controller. The design phase searches for proper gains, i.e. a parametrization, of the controller to achieve the desired behavior.
While this approach has been used for years with great success, e.g. in aircraft control, some more challenging behaviors or complex devices need more sophisticated controllers. Assuming that the device behavior is known, one can predict its future states. A first approach, Optimal Control with the indirect method, searches for optimal solutions by solving a complex mathematical problem, the Pontryagin Maximum Principle. This is typically used to compute rocket or satellite trajectories. However this approach, while theoretically optimal, requires complex computations and cannot yet be performed online, in real time. A second approach, Linear Model Predictive Control or the direct method, amounts to solving online a convex optimization problem describing the current state, the target and the problem constraints, e.g. limits on the thrust. A striking example of such trajectory computation is the landing of SpaceX orbital rockets [1].
The use of a convex encoding of the problem guarantees the absence of non-global local minima and allows the problem to be solved efficiently with polynomial-time algorithms. Convex optimization covers a large set of convex conic problems, from linear programming (LP: linear constraints, linear cost) to quadratic programming (QP: linear constraints, quadratic cost) or semidefinite programming (SDP: linear constraints over matrices, linear cost). While the famous Simplex algorithm can efficiently address LP instances, the Interior Point Method (IPM) is the state-of-the-art approach when it comes to more advanced cones (e.g. QP, SDP).
Linear Model Predictive Control can then be expressed as a bounded model-checking problem with an additional cost to find the best solution, using convex optimization:
Definition 1 (Example: LP encoding of MPC).
Let $\mathcal{X}$ and $\mathcal{U}$ be constrained convex sets for the states and inputs of the controller. Let $(u_k)_{0 \le k < N}$ be an $N$-bounded sequence of control inputs for a linear system with

$$x_{k+1} = A x_k + B u_k$$

$x_k$ a vector of state variables, $x_k \in \mathcal{X}$ and $u_k \in \mathcal{U}$. Let $x_0$ be the initial state and $x_N$ the target one. The objective is to compute an optimal trajectory, for example minimizing the required inputs to reach the target point $x_N$. Let us define the following LP problem:

$$\min_{u_0, \ldots, u_{N-1}} \sum_{k=0}^{N-1} \|u_k\|_1 \quad \text{s.t.} \quad x_{k+1} = A x_k + B u_k,\; x_k \in \mathcal{X},\; u_k \in \mathcal{U} \tag{1}$$
Let us remark that Eq. (1) relies on a linear and discrete description (through the matrices $A$ and $B$) of the original device behavior, typically a non-linear ODE. This LP problem has a fixed number of variables since $x_0$ and $x_N$ are known. Without loss of generality, one can express Eq. (1) over fresh definitions of $c$, $A$ and $b$ as the following LP:
$$\min_{x} c^T x \quad \text{s.t.} \quad A x \le b \tag{2}$$
with $m$ constraints, and where $x$ denotes here a larger state, stacking the original states and inputs. Equality constraints $Ex = e$ are encoded as pairs of inequalities $Ex \le e$ and $-Ex \le -e$.
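The stacking behind Eq. (1) can be sketched on a hypothetical 1-D instance (sizes, names and the horizon are ours, for illustration only): the dynamics equalities over the stacked unknown become one block of rows of the constraint matrix.

```c
#include <assert.h>

/* Sketch (hypothetical 1-D instance; names and sizes are ours) of
 * stacking the dynamics x[k+1] = a*x[k] + b*u[k] of Eq. (1) into one
 * equality block E*X = e over the unknown X = (x1..xN, u0..u(N-1)).
 * Row k encodes x[k+1] - a*x[k] - b*u[k] = 0; for k = 0, the known
 * term a*x0 moves to the right-hand side e. */
#define HORIZON 3

void stack_dynamics(double a, double b, double E[HORIZON][2 * HORIZON]) {
    for (int k = 0; k < HORIZON; k++) {
        for (int j = 0; j < 2 * HORIZON; j++)
            E[k][j] = 0.0;
        E[k][k] = 1.0;               /* coefficient of x[k+1] */
        if (k > 0)
            E[k][k - 1] = -a;        /* coefficient of x[k], for k >= 1 */
        E[k][HORIZON + k] = -b;      /* coefficient of u[k] */
    }
}
```

Each such equality row is then split into two inequality rows to fit the form of Eq. (2).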
This computation is embedded on the device and, depending on the starting point $x_0$, computes the sequence of inputs to reach the final point $x_N$. Since most of the parameters are known a priori, the usual approach is to instantiate an optimization algorithm to the specific problem rather than embed a generic solver on the device. For example SpaceX rockets rely on CVX [2] to produce custom code, compatible with embedded devices (e.g. no dynamic allocation, reduced number of computations).
While classical linear control only amounts to the computation of a linear update, these MPC approaches rely on more involved online computations: the convex optimization has to return a sound solution in finite time.
The objective of this paper is to address the verification of these IPM implementations to ensure the good behavior of MPC-based controllers. A possible approach could have been to specify the algorithm in a proof assistant and extract correct-by-construction code. Unfortunately code extraction from these proof assistants generates source code that hardly resembles classical critical embedded source code. We rather chose to synthesize the final code while automatically producing code contracts and additional annotations and lemmas to support the automatic validation of the code. Our contributions are the following:

Our proof process is automatic: instead of proving a generic version of the algorithm, which would require strong assumptions on the problem parameters, we perform an automatic instance-specific proof, achieving a complete validation without any user input: all proofs are performed, at code level, using SMT solvers only.

The approach has been evaluated on a set of generated instances of various sizes, evaluating the scalability of the proof process. The generated code can then be embedded in actual devices.
The considered setting is a primal IPM solving an LP problem. In the presented work, we focus only on the algorithmic part while dealing with the actual implementation; numerical issues such as floating-point computation are therefore neglected here.
This work is the first approach addressing the formal verification of an IPM convex optimization algorithm, as used in MPC-based controllers.
This paper is structured as follows. We introduce the reader to convex optimization and IPM in Sec. 0.2 and we describe the code and specification generation in Sec. 0.3. Sec. 0.4 focuses on the proof process supporting the automatic proof of the generated code specification. Sec. 0.5 provides feedback on the evaluation of our approach. Related works are then presented in Sec. 0.6.
0.2 Convex Optimization with the Interior Point Method
Let us outline the key definitions and principles behind the primal IPM we analyze. As mentioned above, these notions easily extend to more sophisticated cones. We chose a primal algorithm for its simplicity compared to primal/dual algorithms and we followed Nesterov's proof [7].
Definition 2 (LP solution).
Let us consider the problem of Eq. (2) and assume that an optimal point exists and is unique. We then have the following definitions:

$$\mathcal{E}_f = \{x \mid Ax \le b\} \quad \text{(feasible set)} \tag{3}$$

$$f(x) = c^T x \quad \text{(cost function)} \tag{4}$$

$$x^* = \operatorname{argmin}_{x \in \mathcal{E}_f} f(x) \quad \text{(optimal point)} \tag{5}$$
In order to ensure the existence of an optimal value, we assume the feasible set $\mathcal{E}_f$ to be bounded.
Barrier function and analytic center.
One can describe the interior of the feasible set using a penalty function $F$. Figure 1 depicts such a logarithmic barrier function encoding a set of linear constraints; it diverges when $x$ approaches the border of $\mathcal{E}_f$. By construction, it admits a unique minimum, called the analytic center, in the interior of the feasible set.
Central path
A minimum of a function is obtained by analyzing the zeros of its gradient; for a strictly convex function this minimum is unique. In the case of a cost function with constraints, IPM represents these constraints within the cost function using the function $f_t$. The variable $t$ balances the impact of the barrier: when $t = 0$, $f_t$ is independent from the objective, while as $t$ grows, the cost gains more weight.
Definition 3 (Adjusted cost $f_t$).
Let

$$f_t(x) = t \cdot c^T x + F(x) \tag{6}$$

be a linear combination of the previous objective function $f$ and the barrier function $F$.

Definition 4 (Central path: from analytic center to optimal solution.).
The values of $x$ minimizing $f_t$ when $t$ varies from $0$ to $+\infty$ characterize a path, the central path (cf. Fig. 2).

$$x^*(t) = \operatorname{argmin}_{x} f_t(x) \tag{7}$$

Note that $x^*(0)$ is the analytic center while $\lim_{t \to +\infty} x^*(t) = x^*$.
IPM algorithm: a sequence of Newton steps following the central path.
The IPM algorithm performs a sequence of iterations, updating a point $x$ that follows the central path and eventually reaches the optimal point. At the beginning of each loop iteration, there exists a real $t$ such that $x \approx x^*(t)$. Then $t$ is increased by $dt$ and $x$ is moved towards the new point $x^*(t + dt)$. This translation is performed by a Newton step.
Computing $x$ using a Newton step.
We recall that Newton's method computes an approximation of a root of a function $g$. It is a first-order method, i.e. it relies on the gradient of the function and, from a point in the domain in the neighborhood of a root, performs a sequence of iterations, called Newton steps. A Newton step transforms a point $x$ into $x^+$ as follows:

$$x^+ = x - [\nabla g(x)]^{-1}\, g(x) \tag{8}$$

We apply the Newton step to the gradient of $f_t$, computing its root, which coincides with the minimum of $f_t$. We obtain:

$$x^+ = x - [\nabla^2 F(x)]^{-1}\, (t \cdot c + \nabla F(x)) \tag{9}$$
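As a minimal illustration of Eq. (9) (a toy 1-D example of ours, not the generated code): on the feasible set $(0, 1)$ with barrier $F(x) = -\ln(x) - \ln(1-x)$, the step reduces to a scalar update.

```c
#include <math.h>

/* Toy 1-D Newton step of Eq. (9) for f_t(x) = t*c*x + F(x) with
 * F(x) = -ln(x) - ln(1-x): x+ = x - f_t'(x) / f_t''(x). */
double newton_step(double x, double t, double c) {
    double grad = t * c - 1.0 / x + 1.0 / (1.0 - x);    /* f_t'(x)  */
    double hess = 1.0 / (x * x)
                + 1.0 / ((1.0 - x) * (1.0 - x));        /* f_t''(x) */
    return x - grad / hess;
}
```

With $t = 0$, iterating this step from any interior point converges to the analytic center $0.5$ of $(0, 1)$.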
Computing dt: preserving the Approximate Centering Condition (ACC).
The convergence of the Newton method is guaranteed only in the neighborhood of the function's root. This neighborhood is called the region of quadratic convergence; this region evolves at each iteration since $t$ varies. To guarantee that the iterate remains in the region after each iteration, we require the barrier function to be self-concordant. Without providing the complete definition of self-concordance, let us focus on the implied properties: assuming that $F$ is self-concordant, then $\nabla^2 F$, its Hessian, is non-degenerate ([7, Th 4.1.3]) and we can define a local norm.
Definition 5 (Local norm).

$$\|u\|_x^* = \sqrt{u^T\, [\nabla^2 F(x)]^{-1}\, u} \tag{10}$$
This local norm allows one to define the Approximate Centering Condition (ACC), the crucial property which guarantees that $x$ remains in the region of quadratic convergence:
Definition 6 (ACC).
Let $t > 0$ and $x \in \mathcal{E}_f$; $ACC(x, t)$ is a predicate defined by

$$ACC(x, t) \equiv \|\nabla f_t(x)\|_x^* \le \beta \tag{11}$$

$$\nabla f_t(x) = t \cdot c + \nabla F(x) \tag{12}$$
The only step remaining is the computation of the largest $dt$ such that $x$ remains in the region of quadratic convergence around $x^*(t + dt)$, with $\gamma$ a constant:

$$dt = \frac{\gamma}{\|c\|_x^*} \tag{13}$$
Theorem 1 (ACC preserved).
This choice maintains the ACC at each iteration ([7, Th 4.2.8]). When $ACC(x, t)$ holds and $x^+$, $t + dt$ are computed by Eqs. (9) and (13), then $ACC(x^+, t + dt)$ holds.
Summary of IPM
Thanks to an appropriate barrier function describing the feasible set, the IPM algorithm starts from the analytic center and an initial value $t_0$. Then it updates both variables using Eqs. (9) and (13) until the required precision is reached.
Remark 1 (Choice of a barrier function.).
For this work, we use the classic self-concordant barrier for linear programs:

$$F(x) = -\sum_{i=1}^{m} \ln(b_i - a_i^T x) \tag{14}$$

with $a_i$ the columns of $A^T$.
Remark 2 (Computation of the analytic center.).
The analytic center is required to initiate the algorithm. In case of offline use, its value could be precomputed and validated. However in case of online use, its computation itself has to be proved. Fortunately this can be done by a similar algorithm with comparable proofs. In addition, in MPC-based controllers, the set of constraints can be fed with previously computed values, guaranteeing the existence of a non-empty interior and providing a feasible point to compute this new analytic center.
0.3 Generating Code and Formal Specification
Typical uses of MPC do not rely on a generic solver implementing an IPM algorithm with a large set of parameters but rather instantiate the algorithm on the provided instance. As mentioned in the introduction, a large subset of the problem is known beforehand and therefore much of the computation can be hard-coded. As an example, the computation of the local norm relies mainly on the computation of the Hessian of the barrier function and can be, in some cases, precomputed.
The code generation is therefore very similar from one instance to the other. The main changes lie in the sizes of the various arrays representing the problem and the points along the central path.
Figure 3 sketches our fully automatic process which, when provided with a convex problem with some unknown values, generates the code and the associated annotations, and proves it automatically. We are not going to present the whole process in this paper but concentrate on how to write the embedded code, annotate it and automate its proof. Early stages of the process reformulate the given instance in a canonical form. Regarding the code generation, our approach is very similar to the CVXGEN or CVXOPT tools.
0.3.1 Code structure: Newton steps in a For-Loop
The generated code follows strictly the algorithm presented in the previous section for each iteration step. However, usual implementations perform the iterations in a while-loop until a condition on $t_i$ is satisfied, where $t_i$ represents the position on the central path at iteration $i$.
In order to guarantee termination, we rely on the convergence proof and complexity analysis of [7, Th 4.2.9] and compute, a priori, the required number of iterations to reach a given optimality $\epsilon$. The termination proof relies on the characterization of a geometric progression such that each increment $dt$ is sufficient to achieve some progress towards termination: at the $k$-th iteration, we have

$$t_k \ge t_0\, (1 + \delta)^k \tag{15}$$
In terms of practical impact, the code becomes a for-loop with a provided number of iterations, while the lower-bounding progression only appears in the formal specification part and is used for proof purposes.
0.3.2 Domain-specific axiomatics
We rely on the tool Frama-C [5] to perform the proofs at code level and on ACSL [8], its annotation language, to formally express the specification. While extensible, ACSL does not provide high-level constructs regarding linear algebra or optimization-related properties. We therefore first defined new ACSL axiomatics: specific sets of types, operators and axioms to express them. Similar approaches were already proposed [9] for matrices but were too specific to ellipsoid problems. We give here an overview of the definitions of both the Matrix and Optim ACSL axiomatics, both presented in Fig. 4. Note that these definitions are not instance-specific and are shared among all uses of our framework.
Axiomatics are defined as algebraic specifications. Logical operators manipulating the local types can either be defined as functions or as uninterpreted functions fitted with axioms.
Matrix axiomatic.
The new ACSL type LMat, i.e. Logic Matrix, is introduced and fitted with getters and constructors. Constructors such as MatVar allow binding a fresh LMat value. This one reads a C array while others create constant values. Operators such as matrix addition or inverse are defined axiomatically: the operator is declared and fitted with axioms.
Optimization axiomatic.
Besides generic matrix operators, we also need some operators specific to our algorithm. The Newton step defined in Eq. (9) as well as the local norm of Eq. (10) are both based on the gradient and the Hessian of the barrier function (Eq. (14)) encoding the feasible set. These functions are parametrized by the matrix $A$, the vector $b$ and the local point $x$. However, Hessians and gradients are hard to define without real analysis, which is well beyond the scope of this article. Therefore we decided to directly axiomatize some theorems relying on their definition, like [7, Th 4.1.14].
0.3.3 Functions and contracts
Multiple functions to support proof and local contracts.
Proving large functions is usually hard with SMT-based reasoning since the generated goals are too complex to be discharged automatically. A more efficient approach is to associate small pieces of code with local contracts; these intermediate annotations act as cut-rules in the proof process.
Let A = B[C] be a piece of code containing a statement C. Replacing C by a call to f() { C } requires either inlining the call or writing a contract {Pre} f() {Post}, characterizing two smaller goals instead of a larger one. Specifically, in the proof of B[f()], C has been replaced by the pair Pre and Post, which is simpler than an automatically computed weakest precondition.
Therefore, instead of having one large function, our code is structured into several functions. As an example, each basic matrix operation is performed in a dedicated function. The associated contract provides a high-level interpretation of the function behavior, expressed over the matrix abstract datatypes. This modular encoding supports the generation of simpler goals for the SMT solver, while keeping the code structure clear.
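For instance, a dedicated matrix-addition function could look as follows (a sketch: the ACSL-style ensures clause uses illustrative operator names, not the exact axiomatic of the paper):

```c
#define DIM 3

/*@ ensures mat_of(c) == mat_add_op(mat_of(a), mat_of(b));
    (sketch: abstract LMat-level contract, operator names illustrative) */
void mat_add(double a[DIM][DIM], double b[DIM][DIM], double c[DIM][DIM]) {
    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            c[i][j] = a[i][j] + b[i][j];    /* element-wise addition */
}
```

The caller then only reasons about the abstract equality stated in the contract, never about the loop itself.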
Specification of path-following.
A sound algorithm must produce a point in the feasible set such that its cost is close to the optimal cost $f(x^*)$. This represents the functional specification that we expect from the code and is asserted by two postconditions. In addition, two preconditions state that the initial point is feasible and close enough to the analytic center.
Thanks to our two new theories Matrix and Optim, writing and reading this contract is straightforward and can be checked by anyone familiar with linear programming. Our contribution focuses on the expression of intermediate annotations, spread over the code, to support the overall proof of the specification.
A loop needs to be annotated with an invariant to have its weakest precondition computed. We need three invariants for our path-following algorithm. The first one guarantees the feasibility of the iterate $x$, while the second one states the conservation of the ACC (cf. Def. 6). The third invariant asserts that $t$ increases enough at each iteration, more specifically that it is greater than the geometric progression of Eq. (15).
Proving the initialization is straightforward, thanks to the main preconditions. We wrote one ACSL lemma for each invariant. Whenever possible we try to follow [7], for example by translating [7, Th 4.1.5] and Theorem 1. The last two loop invariants are combined to prove the second postcondition of path-following thanks to Theorem 2. NBR, the number of iterations, is computed from the required precision.
Theorem 2.
[7, Th 4.2.7] Let $t > 0$ and $x$ be such that $ACC(x, t)$ holds; then

$$c^T x - c^T x^* \le \frac{C}{t} \tag{17}$$

where $C$ is a constant depending on $\beta$ and the number of constraints.
Fig. 7 presents the specification in ACSL: the main function contract and the loop invariants.
Loop body.
Each iteration of the IPM relies on three function calls: update_pre computing some common values, update_t and update_x (cf. Fig. 6). Theorem 1 is then split into several properties and the corresponding postconditions. For example, the contract of update_t is described in Fig. 8.
The first postcondition is an intermediate result stating that:

$$\|\nabla f_{t + dt}(x)\|_x^* \le \beta + \gamma \tag{18}$$
This result is used as a precondition for update_x. The second ensures that the new value of $t$ corresponds to the product of the previous one by the common ratio of the geometric progression of Eq. (15), which will be used to prove the second invariant of the loop. The first precondition is a postcondition of update_pre and the second one is the first loop invariant.
0.4 Automatic Verification: Proof Refinement through Lemma Definitions and Local Contracts.
Our framework produces both the code and its annotations, as well as a set of axioms related to our newly defined axiomatics (cf. Fig. 4). However, most of the contracts cannot be proved automatically by an SMT solver as-is. After a first phase in which we tried to perform these proofs with Coq, we searched for more generic means to automate the proof.
0.4.1 Refining the proofs.
We performed the following steps in order to identify the appropriate set of additional assertions, local contracts or lemmas to be introduced:

we took a fully defined custom code for a linear problem and studied means to prove its validity: a feasible, optimal solution, while guaranteeing the convergence of the algorithm;

these proofs were then manually generalized through the introduction of numerous intermediate lemmas as annotations in the code. These additional annotations enable the automatic proof of the algorithm using an off-the-shelf SMT solver, Alt-Ergo [10];

the custom code generation was extended to produce both the actual code and these annotations and function contracts, enabling both the generation of the embedded code and the proof of its functional soundness.
Another approach is to refine the code into smaller components, as presented in Fig. 6. The encoding hides low-level operations from the rest of the code, leading to two kinds of goals:

Low-level operations (memory and basic matrix operations).

High-level operations (mathematics on matrices).
0.4.2 Simplifying the code: addressing memory-related issues.
One of the difficulties when analyzing C code is memory-related issues. Two different pointers can alias and reference the same memory region. A fine and complex modeling of the memory in the predicate encoding, such as separation logic [11], could address these issues. Another, more pragmatic, approach amounts to substituting all local variables and function arguments with global static variables. Two static arrays cannot overlap since their memory is allocated at compile time.
Since we are targeting embedded systems, static variables also permit computing and reducing memory footprints. However, there are two major drawbacks: the code is less readable and variables are accessible from any function. These two points usually lead programmers to mistakes but can be accepted in the case of generated code. In order to support this, we also tagged all variables with the function they belong to by prefixing each variable with its function name.
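The convention can be sketched as follows (sizes and variable names are ours, for illustration):

```c
#define NV 4

/* Every local of update_x becomes a file-scope static array,
 * prefixed by the owning function's name: two distinct static
 * arrays can never alias or overlap. */
static double update_x_dx[NV];    /* Newton direction, owned by update_x */
static double update_x_next[NV];  /* iterate, owned by update_x */

void update_x(void) {
    /* x := x - dx, with no pointer argument: the whole frame is static */
    for (int i = 0; i < NV; i++)
        update_x_next[i] -= update_x_dx[i];
}
```

Since no pointers are passed, the prover never has to consider aliasing hypotheses between the arrays.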
0.4.3 Low-level goals: Proving Matrix operations.
As explained in the previous section, the matrix computations are encapsulated into smaller functions. Their contracts state the equality between the resulting matrix and the operation computed. The extensionality axiom (mat_eq_def) is required to prove this kind of contract. Extensionality means that if two objects have the same external properties then they are equal.
This axiom belongs to the Matrix axiomatic but is too general to be used directly; therefore lemmas specific to the matrix sizes are added for each matrix assignment. These lemmas can be proven from the previous axioms and therefore do not introduce additional assumptions.
The proof remains difficult or hardly automatic for SMT solvers; therefore we append additional assertions, as sketched in Figure 9, at the end of the function, stating all the hypotheses of the extensionality lemma. Proving these postconditions is straightforward, and only smaller goals now need to be proven.
Functions are further split into smaller functions when they become large. As an example, operations on matrices of size $n$ require a number of lines of C code manipulating arrays that grows with $n$. To avoid large goals involving many array accesses for SMT solvers when $n$ grows, each operation is encapsulated as a separate function annotated by the operation on a single element:
These generated goals are small enough for SMT solvers. The postcondition of the matrix operation remains large, but the array accesses are abstracted away by the function calls and it therefore becomes provable by SMT solvers.
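As a sketch (names and sizes ours), a matrix product split into per-cell functions looks like:

```c
#define SZ 3

static double mm_a[SZ][SZ], mm_b[SZ][SZ], mm_c[SZ][SZ];

/* One cell per call: mm_c[i][j] = sum_k mm_a[i][k] * mm_b[k][j].
 * Each call yields one small, SMT-friendly goal. */
static void set_mult_cell(int i, int j) {
    double acc = 0.0;
    for (int k = 0; k < SZ; k++)
        acc += mm_a[i][k] * mm_b[k][j];
    mm_c[i][j] = acc;
}

void mat_mult(void) {
    for (int i = 0; i < SZ; i++)
        for (int j = 0; j < SZ; j++)
            set_mult_cell(i, j);
}
```

The contract of `mat_mult` then only composes the per-cell contracts instead of unfolding every array access.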
0.4.4 High-level goals: IPM-specific properties.
Proving sophisticated lemmas with non-trivial proofs is hardly feasible for an SMT solver. To support them, we introduced many intermediate lemmas. At each proof step (variable expansion, commutation, factorization, lemma instantiation, …) a new lemma is automatically introduced. With this method, SMT solvers manage to handle each small step, which eventually leads to the proof of the initial lemma.
For example, the postcondition (18) of update_t is not provable by Alt-Ergo. In order to prove it, we introduce the lemma (update_t_ensures1), relating the precondition to the performed update of $t$.
(update_t_ensures1) 
Proving the postcondition with SMT solvers is straightforward given the previous lemma, but again proving the lemma itself is beyond their capabilities. However, with the additional lemma (update_t_ensures1_l0) the proof becomes obvious to the SMT solver. Indeed, this lemma is just the expansion of the ACC definition.
(update_t_ensures1_l0) 
Step by step, we introduce new lemmas to prove the previous ones: e.g. to prove the goal (update_t_ensures1_l0) we introduce the following three lemmas:
(update_t_ensures1_l3) 
(update_t_ensures1_l2) 
(update_t_ensures1_l1) 
Each lemma depends on other lemmas, and so do contracts; all these logical results shape a proof tree. The one for the first postcondition of update_t is presented in Fig. 10.
These small steps are usually bigger than the application of a tactic in a proof assistant. SMT solvers are also more resilient to changes in the source code. For these two reasons we decided not to write proofs directly within a proof assistant but to use SMT solvers. Moreover, all these intermediate lemmas are eventually automatically generated by the autocoding framework.
0.5 Experiments
We implemented the presented approach: given a provided LP problem, a source file is generated, as well as a considerable amount of function contracts and additional lemmas – all expressed in ACSL [8], the annotation language of the Frama-C verification platform. The function contracts specify the functional requirements of this custom, instance-specific solver: it computes a feasible and optimal solution in a predictable – and known – number of iterations.
Now, this specific code has to be validated before being used on the embedded device. We rely on Frama-C to perform the formal verification process. Each generated annotation becomes a proof objective and is submitted to an SMT solver, Alt-Ergo [10], for verification. Thanks to the introduction of all these intermediate lemmas, each local lemma is proved, as well as the global function contracts.
Note that the generated code is a direct implementation of the chosen primal IPM for LP. The main concern here is the proof. We present here our experiments. First, we address issues with respect to the provers (Frama-C and Alt-Ergo) and how we tuned the code and proof generation to ease the proofs. Second, we present the results in terms of goals proved or not-yet-proved and the time required to perform the proofs.
0.5.1 Limitation of the approach: Asserted lemmas.
The verification is performed thanks to two sets of axioms we developed, supporting (1) the definition of matrices in ACSL and (2) properties of operators in linear programming. While the first set, describing matrix commutation or norm positivity, could be proved, doing so is additional work outside the scope of our contribution. The second set is more challenging to prove and corresponds to the axiomatization of some theorems from [7]. Their definition as axioms was carefully addressed and the number of such axioms minimized.
We deliberately chose to ignore:

the proof of soundness of the Cholesky resolution;

the validation of the gradient or Hessian computations;

some vector- and matrix-related properties, e.g. matrix multiplication associativity or norm positivity (scalar product);

more involved optimization-related theorems used within the proof of optimality, and developed in [12].
All these axioms are potential flaws for the verification. One solution could be to translate them into a theorem prover such as Coq [13] and prove them instead of assuming them. Translating matrices to a theorem prover would also have the advantage of checking the matrix axiomatic we wrote. This could be done by providing a bijection between the translation of our matrices and matrices defined in Coq libraries like SSReflect [14].
Last, the analytic center has to be provided, at least within its neighborhood, which is performed automatically through a dedicated algorithm in our experiments. While its existence cannot be guaranteed in theory, its computation in an MPC context can be eased (cf. Remark 2).
0.5.2 Results: Scalability of proof process
While the custom code itself ought to be executed on an embedded system with limited computation power, the verification of its soundness can be performed on a regular desktop computer.
All contracts and lemmas were proved, except the set of axioms we assumed (cf. previous section), but the time, and therefore the scalability, of the proof process largely depends on the problem size.
We provide here some figures regarding the total time required to perform the proofs. The verification has been performed on a 2.6 GHz computer running Linux with 4 GB of RAM. Alt-Ergo does not parallelize computations.
Total computation time.
The total proof time directly depends on the size of the input problem. This is expected since a larger problem manipulates larger arrays and expresses properties over more variables.
We used our framework on a large set of LP problems of various sizes. The boundedness constraint of the feasible set (cf. Sec. 0.2) has been artificially enforced by adding a hypercube constraint. Therefore problems over $n$ variables have at least $2n$ constraints. In practice, we generated about 100 random instances for each problem size and performed the proofs with both 1-second and 10-second timeouts for each goal.
An interesting outcome is the relative independence of the proof time with respect to the actual numerical values of the constraints. Since we recorded the computation time for each instance of a given size, we can compare the experimental results: for problems of a given size, the total computation time remains within a narrow range. Note that sparse constraints will generate more optimized code, minimizing the number of computations, but are likely to generate harder problems for the solvers.
The number of generated goals is impressive: even a modestly sized problem automatically generates a large number of goals. The number of proofs is linear in one problem dimension but quadratic in the other. Let us now have a closer look at the proof costs depending on the category of proof.
Scalability of proofs.
In order to precisely keep track of the proof time of each function, we stored functions in separate files and recorded the associated proof times. Figure 11 presents these results. Files and their proof times have been grouped into clusters depending on their ability to scale, as identified by timeouts. For example, one can identify the following clusters: lemma, denoting the set of ACSL lemmas; atomic, denoting local atomic manipulations of matrix elements; matrix, denoting matrix-level operations; and atom to mat, denoting intermediary results between the last two. We isolated one file from the last cluster, atom to mat*, since it took longer to prove than the others. Other categories correspond to individual files implementing higher-level functions of the IPM: update_dt, update_dx, compute_dt, compute_dx and compute.
As expected, some properties are difficult to prove automatically. These scalability issues mainly occur in compute_dt, compute_dx and the set_ and set_in clusters. However, the associated proofs are not mathematically challenging and the main limitation seems to be caused by the Frama-C encoding of goals as SMT proof objectives. Future versions of Frama-C may have a better memory model, identifying unmodified variables in predicates, leading to better proof times for our generated instances.
An annotated code, automatically generated by our framework, is accessible at https://github.com/davyg/proved_primal; the annotations can be found in the header build_primalTest/code/primalTest.h, which itself includes primalTest_contracts.h, the file containing all the function contracts.
0.6 Related work
Related works first include activities related to the validation of numerically intensive control algorithms. This article is an extension of Wang et al. [15], which presented annotations for a convex optimization algorithm, namely IPM, but the process was both manual and theoretical: the code was annotated within Matlab and without machine-checked proofs. Another work from the same authors [9] presented a method similar to ours but limited to simple control algorithms, linear controllers. The required theories in ACSL were both different and less general than the ones we are proposing here.
Concerning the soundness of convex optimization, Cimini and Bemporad [16] present a termination proof for a quadratic program but without any concern for the proof of the implementation itself. A similar criticism applies to Tøndel, Johansen and Bemporad [17], where another possible approach to online linear programming is proposed; moreover, it is unclear how this could scale and how to extend it to other convex programs. Roux et al. [18, 19] also presented a way to certify convex optimization algorithms, namely SDP and its sum-of-squares (SOS) extension, but the certification is done a posteriori, which is incompatible with online optimization.
A last set of works, e.g. the work of Boldo et al. [20], concerns the formal proof of complex numerical algorithms, relying only on theorem provers. Although code can be extracted from the proof, this code is usually not directly suitable for embedded systems: it is too slow and requires different compilation steps, which should also be proven to obtain the same guarantees as our proposed method.
0.7 Conclusion and future works
This article focuses on the proof of an Interior Point Method (IPM) convex optimization algorithm as used in state-of-the-art linear Model Predictive Control (MPC). The current setting is the simplest one: a primal IPM for Linear Programming problems.
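For reference, this setting can be sketched with the textbook log-barrier formulation (cf. [21]; the notation here is generic, not that of our development). The primal LP

$$\min_{x}\; c^T x \quad \text{subject to} \quad A x \le b$$

is handled by replacing the inequality constraints with a logarithmic barrier: for a decreasing sequence of parameters $\mu > 0$, the IPM follows the minimizers of

$$f_\mu(x) \;=\; c^T x \;-\; \mu \sum_{i=1}^{m} \log\!\big(b_i - a_i^T x\big),$$

where $a_i^T$ denotes the $i$-th row of $A$. These iterates remain strictly inside the feasible set (the central path) and converge to an optimal solution as $\mu \to 0$; this interiority is exactly the property our functional specification relies on.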
In these MPC approaches, convex optimization engines are autocoded from an instance specification and embedded on the cyber-physical system. Our approach relies on the autocoding framework to generate, along with the code, the specification and the required proof artifacts. Once both the code and these elements are generated, the proof is carried out automatically by Frama-C, using deductive methods (weakest precondition) and SMT solvers (Alt-Ergo).
This is the first proof, at code level, of an IPM algorithm. Still, the proposed approach is a proof of concept and has some identified limitations. First, we worked with real variables to concentrate on runtime errors, termination and functional specification, leaving floating-point errors for future work. Second, some mathematical lemmas were asserted, since our goal was to focus on the algorithm itself rather than proving, for instance, that a norm ought to be positive; the set of asserted lemmas remains limited and reasonable. Last, the setting is simple: a primal algorithm for LP. The proposed approach is a first step and can naturally be extended to more sophisticated Interior Point Methods: primal-dual algorithms, or quadratic constraints. One limitation is the restriction to IPM algorithms that guarantee the computation of a feasible solution. For example, this excludes homogenized versions [21, §11, Bibliography] of IPM, which do not compute iterates within the feasible set.
Another outcome is the general approach to the proof of complex numerical algorithms that are autocoded from an instance description. Code generation is now widespread for embedded devices, especially in control, and could be fitted with formal specifications and proof artifacts to automate the proof at code level. This would require further developments to axiomatize the associated mathematical theories.
References
 [1] Blackmore, L.: Autonomous precision landing of space rockets. National Academy of Engineering, Winter Bridge on Frontiers of Engineering 4(46) (December 2016)
 [2] Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx (March 2014)
 [3] Hoare, C.A.R.: An axiomatic basis for computer programming. Commun. ACM 12(10) (1969) 576–580
 [4] Floyd, R.W.: Assigning meanings to programs. Proceedings of Symposium on Applied Mathematics 19 (1967) 19–32
 [5] Cuoq, P., Kirchner, F., Kosmatov, N., Prevosto, V., Signoles, J., Yakobowski, B.: Frama-C: a software analysis perspective. SEFM'12, Springer (2012) 233–247
 [6] Dijkstra, E.W.: Guarded commands, nondeterminacy and formal derivation of programs. Commun. ACM 18(8) (1975) 453–457
 [7] Nesterov, Y., Nemirovski, A.: Interior-point Polynomial Algorithms in Convex Programming. Volume 13 of Studies in Applied Mathematics. Society for Industrial and Applied Mathematics (1994)
 [8] Baudin, P., Filliâtre, J.C., Marché, C., Monate, B., Moy, Y., Prevosto, V.: ACSL: ANSI/ISO C Specification Language. version 1.11. http://framac.com/download/acsl.pdf
 [9] Herencia-Zapana, H., Jobredeaux, R., Owre, S., Garoche, P.L., Feron, E., Perez, G., Ascariz, P.: PVS linear algebra libraries for verification of control software algorithms in C/ACSL. In Goodloe, A., Person, S., eds.: NASA Formal Methods: Fourth International Symposium, NFM 2012, Norfolk, VA, USA, April 3-5, 2012. Proceedings. Volume 7226 of Lecture Notes in Computer Science., Springer (2012) 147–161
 [10] Bobot, F., Conchon, S., Contejean, E., Lescuyer, S.: Implementing polymorphism in SMT solvers. In: Proceedings of the Joint Workshops of the 6th International Workshop on Satisfiability Modulo Theories and 1st International Workshop on Bit-Precise Reasoning. SMT '08/BPR '08, New York, NY, USA, ACM (2008) 1–5
 [11] Reynolds, J.C.: Separation logic: a logic for shared mutable data structures. In: Proceedings 17th Annual IEEE Symposium on Logic in Computer Science. (2002) 55–74
 [12] Nesterov, Y.: Introductory lectures on convex optimization: a basic course. Applied optimization. Kluwer Academic Publ., Boston, Dordrecht, London (2004)
 [13] Bertot, Y., Castéran, P.: Interactive Theorem Proving and Program Development. Springer Berlin Heidelberg (2004)
 [14] Gonthier, G., Mahboubi, A., Tassi, E.: A Small Scale Reflection Extension for the Coq system. Research Report RR6455, Inria Saclay Ile de France (2016)
 [15] Wang, T., Jobredeaux, R., Pantel, M., Garoche, P.L., Feron, E., Henrion, D.: Credible autocoding of convex optimization algorithms. Optimization and Engineering 17(4) (Dec 2016) 781–812
 [16] Cimini, G., Bemporad, A.: Exact complexity certification of active-set methods for quadratic programming. IEEE Transactions on Automatic Control PP(99) (2017) 1–1
 [17] Tøndel, P., Johansen, T.A., Bemporad, A.: An algorithm for multi-parametric quadratic programming and explicit MPC solutions. Automatica 39(3) (2003) 489–497
 [18] Roux, P.: Formal proofs of rounding error bounds, with application to an automatic positive definiteness check. Journal of Automated Reasoning 57(2) (2016) 135–156
 [19] Martin-Dorel, É., Roux, P.: A reflexive tactic for polynomial positivity using numerical solvers and floating-point computations. In Bertot, Y., Vafeiadis, V., eds.: Proceedings of the 6th ACM SIGPLAN Conference on Certified Programs and Proofs, CPP 2017, Paris, France, January 16-17, 2017, ACM (2017) 90–99
 [20] Boldo, S., Faissole, F., Chapoutot, A.: Round-off error analysis of explicit one-step numerical integration methods. In: 2017 IEEE 24th Symposium on Computer Arithmetic (ARITH). (July 2017) 82–89
 [21] Boyd, S., Vandenberghe, L.: Convex optimization. Cambridge University Press, New York, NY, USA (2004)