1 Introduction
Observability and controllability are two fundamental structural properties of a control system. The former describes the possibility of inferring the state that characterizes the system from observing its inputs and outputs. The latter characterizes the possibility of steering the system throughout its state space by applying suitable inputs (controls). Both concepts were first introduced for linear systems [1, 2], and the two analytic conditions to check whether a linear system satisfies these properties have also been obtained. The nonlinear case is much more complex. First, both concepts become local. In addition, unlike the linear case, observability depends on the system inputs. In this paper we refer to weak local observability, as defined in [3, 4] (definitions 8, 9, 10, 11 in [4]). Regarding controllability, we refer to the concept of weak local controllability, as defined in [3].
The two analytic conditions to check if a continuous time-invariant nonlinear system satisfies these two properties (weak local observability and weak local controllability) have also been introduced [3, 4, 5, 6, 7, 8]. They are known as the observability rank condition and the controllability rank condition. They are summarized in section 3.2 and in section 4.2, respectively. Very recently, new analytic conditions have also been proposed. The conditions proposed in [11, 12] extend the observability rank condition to the case when the dynamics are also driven by unknown inputs. The authors of [13] proposed a new condition for the weak local controllability, which presents some advantages with respect to the controllability rank condition.
Unfortunately, none of the conditions above can be used in the case when the system is time-varying (non-autonomous).
A time-varying system is a system whose behaviour changes with time. In particular, the system will respond differently to the same input at different times. A typical example of a time-varying system is an aircraft. For this system, there are two main factors that make it time-varying: the decreasing weight due to fuel consumption and the different configuration of the control surfaces during take-off, cruise and landing. The first factor will characterize the system investigated in sections 7 and 8.
In a general mathematical characterization of a time-varying nonlinear system, all the key scalar and vector fields that define its dynamics and/or its output functions explicitly depend upon time (see equation (1)). For time-varying systems, the two analytic conditions to check observability and controllability have only been obtained in the linear case. These conditions are summarized in section 3.1 and in section 4.1, respectively.
So far, no condition exists to check the weak local observability and the weak local controllability for time-varying nonlinear systems. Providing such conditions is precisely the goal of this paper.
Specifically, the contributions of this paper are the following two:

Extend the observability rank condition to nonlinear time-varying systems.

Extend the controllability rank condition to nonlinear time-varying systems.
The paper is organized as follows. Section 2 provides the basic equations that characterize the systems here investigated. Sections 3 and 4 provide the two new analytic conditions, whose derivations are given separately in section 5. Section 6 provides two simple applications. They are deliberately trivial, to better illustrate the two new analytic conditions. Sections 7 and 8 provide a real application. We investigate the observability and controllability properties of a lunar module that operates in the presence of gravity and in the absence of an atmosphere. This system has an explicit time dependence due to the fuel consumption, which results in a variation of the weight and of the moment of inertia. Finally, our conclusion is given in section 9.
2 Considered systems
We will refer to a nonlinear control system with m inputs (u_1, …, u_m). The state is the vector x ∈ M, with M an open set of R^n. We assume that the dynamics are nonlinear with respect to the state and affine with respect to the inputs. We account for an explicit time dependence, namely, all the functions that characterize the dynamics and/or the outputs can explicitly depend on time. Finally, the system has p outputs. Our system is characterized by the following equations:
(1)  ẋ = f_0(x, t) + Σ_{i=1}^{m} f_i(x, t) u_i,    y_j = h_j(x, t),   j = 1, …, p
where f_0, f_1, …, f_m are vector fields in R^n and the functions h_1, …, h_p are scalar fields defined on the open set M. All these vector and scalar fields explicitly depend on time.
3 Analytic condition for observability
This section introduces the analytic condition to check the state observability for systems that satisfy equation (1).
Before introducing this new condition, we remind the reader of the existing results for the less general systems. Specifically, in section 3.1, we provide the analytic condition that holds in the case of time-varying linear systems and, in section 3.2, we provide the analytic condition that holds in the case of time-invariant nonlinear systems. In section 3.3, we provide the new condition that holds in general, i.e., for time-varying nonlinear systems.
3.1 Time-varying linear systems
This special case is obtained by setting, in (1):

f_0(x, t) = A(t) x, where A(t) is a matrix of dimension n × n.

f_i(x, t) = b_i(t), where the b_i(t) are column vectors of dimension n.

h_j(x, t) = c_j(t) x, where the c_j(t) are row vectors of dimension n.
We can write (1) as follows:

(2)  ẋ = A(t) x + B(t) u,    y = C(t) x

where the columns of B(t) are the vectors b_i(t) above and the rows of C(t) are the vectors c_j(t) above.
The system defined by (2) is observable in a given time interval I if there exists t ∈ I and a positive integer k such that:

(3)  rank( [N_0(t)^T, N_1(t)^T, …, N_k(t)^T]^T ) = n

where N_0(t) = C(t) and N_{k+1}(t) is defined recursively as:

(4)  N_{k+1}(t) = N_k(t) A(t) + (d/dt) N_k(t)
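As an illustration, the recursion above (in our notation: N_0 = C(t), N_{k+1} = N_k A(t) + dN_k/dt, with the rank test applied to the stacked blocks) can be sketched symbolically. The system below is a hypothetical example of ours, not one taken from the paper:

```python
import sympy as sp

t = sp.symbols('t')

def observability_matrix(A, C, t, n_steps):
    """Stack N_0 = C(t) and N_{k+1} = N_k*A(t) + dN_k/dt, as in the recursion above."""
    N = C
    rows = [N]
    for _ in range(n_steps):
        N = N * A + N.diff(t)
        rows.append(N)
    return sp.Matrix.vstack(*rows)

# Hypothetical time-varying pair: x1' = t*x2, x2' = 0, y = x1
A = sp.Matrix([[0, t], [0, 0]])
C = sp.Matrix([[1, 0]])
O = observability_matrix(A, C, t, 2)
# The stacked matrix has full rank (= 2) at generic t, so the rank test is met
print(O.rank())
```

Note that a time-invariant pair with the same sparsity (A constant) would fail this test; the derivative term in the recursion is what contributes the extra independent row.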
3.2 Time-invariant nonlinear systems
This special case is obtained when all the vector and scalar fields that appear in (1) do not explicitly depend on time. The analytic condition to check the weak local observability at a given x_0 of the state that satisfies (1) is obtained by computing the observable codistribution [7]. When all the vector and scalar fields do not explicitly depend on time, the observable codistribution is generated by the recursive algorithm 1 (see [3, 4, 7]). We use the following notation:

Given a scalar field λ, dλ is its differential.

Given a vector field f (defined on the open set M), L_f denotes the Lie derivative along f. We remind the reader that the Lie derivative along f of a given scalar field λ is [7]: L_f λ = (∂λ/∂x) f.
Additionally:

(5)  L_f dλ = d(L_f λ)
Given a codistribution Ω and a given vector field f (both defined on the open set M), L_f Ω denotes the codistribution whose covectors are the Lie derivatives along f of the covectors in Ω.

Given two vector spaces V and W, V + W is their sum, i.e., the span of all the generators of V and W.
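As a minimal sketch of the notation above, the Lie derivative of a scalar field along a vector field can be computed symbolically. The fields below are our own illustrative choices:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

def lie_derivative(h, f, x):
    """L_f h = (dh/dx) f : directional derivative of the scalar field h along f."""
    return (sp.Matrix([h]).jacobian(x) * f)[0]

# Hypothetical fields: h = x1^2 + x2, f = (x2, -x1)
h = x1**2 + x2
f = sp.Matrix([x2, -x1])
Lfh = sp.expand(lie_derivative(h, f, x))
print(Lfh)  # 2*x1*x2 - x1
```

The differential of h is the row vector of partial derivatives (the `jacobian` call), which is exactly the covector dλ used by the codistribution operations.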
The analytic condition to check the weak local observability of nonlinear time-invariant systems is given by the following fundamental result:
Theorem 1 (Observability Rank Condition)
Algorithm 1 converges in an open and dense set of M, and the convergent codistribution is obtained in at most n − 1 steps. If the convergent codistribution is nonsingular at x_0 and its dimension is equal to n at x_0, then the system is weakly locally observable at x_0 (sufficient condition). Conversely, if the system is weakly locally observable at x_0, the dimension of the above codistribution is n in a dense neighbourhood of x_0 (necessary condition).
Proof.
All the statements are very well known results. The reader is referred to [7] (lemmas 1.9.1, 1.9.2 and 1.9.6) for the convergence properties of algorithm 1. The proof of the sufficient condition is available in [3], theorem 3.1. The proof of the necessary condition is available in [3], theorem 3.11.
3.3 Time-varying nonlinear systems
We now consider the general case of timevarying nonlinear systems. In this section we only provide the analytic condition. In section 5.1 we prove its validity.
The new condition is similar to the one that holds in the time-invariant case (i.e., the observability rank condition provided in section 3.2). The only difference resides in the computation of the observable codistribution. The new codistribution is given by algorithm 2, where we introduced the following operator:

(6)  L̃_{f_0} λ ≜ L_{f_0} λ + ∂λ/∂t
Note that the codistribution returned by the algorithm above is in general time-dependent.
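A minimal symbolic sketch of the new operator, assuming it acts on a scalar field as the ordinary Lie derivative along the drift plus the explicit partial derivative with respect to time (consistent with the remarks at the end of this section). All names below are ours:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
x = sp.Matrix([x1, x2])

def lie(h, f):
    """Ordinary Lie derivative of the scalar field h along f."""
    return (sp.Matrix([h]).jacobian(x) * f)[0]

def extended_lie(h, f0):
    """Sketch of the operator in (6): ordinary Lie derivative along the drift
    plus the explicit partial derivative with respect to time."""
    return lie(h, f0) + sp.diff(h, t)

# Hypothetical time-varying drift and output
f0 = sp.Matrix([t * x2, 0])
h = sp.sin(t) * x1
res = sp.simplify(extended_lie(h, f0))
print(res)
```

For a time-invariant scalar field the extra term vanishes and the operator reduces to the ordinary Lie derivative, which is why the time-invariant rank condition is recovered as a special case.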
The analytic condition to check the weak local observability of nonlinear timevarying systems is given by the following fundamental new result:
Theorem 2 (Extended Observability Rank Condition)
Algorithm 2 converges in an open and dense set of M × R, and the convergent codistribution is obtained in at most n − 1 steps. If the convergent codistribution is nonsingular at x_0 and at a given time t_0, and its dimension is equal to n at (x_0, t_0), then the system is weakly locally observable at (x_0, t_0) (sufficient condition). Conversely, if the system is weakly locally observable at (x_0, t_0), the dimension of the above codistribution is n in a dense neighbourhood of (x_0, t_0) (necessary condition).
Proof.
The proof is given in section 5.1.
We conclude this section with the following remarks:

Algorithm 2 differs from algorithm 1 only in the recursive step. In particular, the operator given in (6) substitutes the Lie derivative along f_0. In other words, the new algorithm is obtained with the substitution:

L_{f_0} → L_{f_0} + ∂/∂t

If f_0 is null, in the recursive step we still need to add the term ∂/∂t.

If Ω is generated by the differentials of the scalar fields λ_1, …, λ_s, the codistribution obtained by applying the operator in (6) is generated by the differentials of the corresponding Lie derivatives of the generators (see appendix A). This allows us to easily implement algorithm 2, since it suffices to compute the Lie derivatives of the generators of Ω at each step.

The extended observability rank condition reduces to the condition provided in section 3.1 in the linear case.
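The remarks above suggest a straightforward implementation: at each step, take the generators of the current codistribution, apply the extended operator (drift plus ∂/∂t) and the ordinary Lie derivatives along the input fields, and stop when the rank no longer grows. The sketch below follows this recipe, with our own naming and a toy system of our choosing (null drift, so only the ∂/∂t term acts):

```python
import sympy as sp

def observable_codistribution_generators(h_list, f_list, x, t):
    """Sketch of algorithm 2: grow the set of scalar fields whose differentials
    (with respect to the state only) generate the observable codistribution.
    f_list[0] is the drift f0, handled with the extended operator L_{f0} + d/dt;
    the remaining fields use the ordinary Lie derivative. Names are ours."""
    lie = lambda lam, f: (sp.Matrix([lam]).jacobian(x) * f)[0]
    rank = lambda g: sp.Matrix.vstack(*[sp.Matrix([lam]).jacobian(x) for lam in g]).rank()
    gens = list(h_list)
    for _ in range(len(x)):  # convergence within dim(x) iterations is assumed
        new = [lie(lam, f_list[0]) + sp.diff(lam, t) for lam in gens]
        new += [lie(lam, f) for lam in gens for f in f_list[1:]]
        if rank(gens + new) == rank(gens):
            break
        gens += new
    return gens

t, x1, x2 = sp.symbols('t x1 x2')
x = sp.Matrix([x1, x2])
f0 = sp.Matrix([0, 0])          # null drift: only the d/dt term acts
h = t * x1 + x2                 # time-varying output
gens = observable_codistribution_generators([h], [f0], x, t)
dim = sp.Matrix.vstack(*[sp.Matrix([g]).jacobian(x) for g in gens]).rank()
print(dim)  # 2
```

In this toy case the single differential of the output spans one direction, and the time derivative of the output contributes the second; without the ∂/∂t term the algorithm would stop at dimension one.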
4 Analytic condition for controllability
This section introduces the analytic condition to check the state controllability for systems that satisfy equation (1). The section is structured as section 3. Specifically, in section 4.1, we provide the analytic condition that holds in the case of time-varying linear systems and, in section 4.2, we provide the analytic condition that holds in the case of time-invariant nonlinear systems. Finally, in section 4.3, we provide the new condition that holds in general, i.e., for time-varying nonlinear systems.
4.1 Time-varying linear systems
This special case is the same considered in section 3.1. In other words, we refer to the system characterized by (2). This system is controllable if there exists t ∈ I and a positive integer k such that:

(7)  rank( [M_0(t), M_1(t), …, M_k(t)] ) = n

where M_0(t) = B(t) and M_{k+1}(t) is defined recursively as:

(8)  M_{k+1}(t) = −A(t) M_k(t) + (d/dt) M_k(t)
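The recursion can be sketched symbolically in the same way as for the observability case. We use the convention M_0 = B(t), M_{k+1} = −A(t) M_k + dM_k/dt; sign conventions vary across references, but flipping the sign only negates alternate blocks and leaves the rank unchanged. The system below is a hypothetical example of ours:

```python
import sympy as sp

t = sp.symbols('t')

def controllability_matrix(A, B, t, n_steps):
    """Stack M_0 = B(t) and M_{k+1} = -A(t)*M_k + dM_k/dt side by side."""
    M = B
    blocks = [M]
    for _ in range(n_steps):
        M = -A * M + M.diff(t)
        blocks.append(M)
    return sp.Matrix.hstack(*blocks)

# Hypothetical time-varying pair: x1' = t*x2, x2' = u
A = sp.Matrix([[0, t], [0, 0]])
B = sp.Matrix([0, 1])
K = controllability_matrix(A, B, t, 2)
print(K.rank())  # 2
```

As in the observability sketch, the derivative term in the recursion is what can raise the rank beyond what the frozen-time pair (A(t), B(t)) would give.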
4.2 Time-invariant nonlinear systems
This special case is obtained when all the vector and scalar fields that appear in (1) do not explicitly depend on time. The analytic condition to check the weak local controllability from a given x_0 is obtained by computing the controllability distribution [7]. When all the vector and scalar fields do not explicitly depend on time, the controllability distribution is generated by the recursive algorithm 3 (see [7]). We use the following notation:

Given two vector fields f and g (defined on the open set M), [f, g] denotes their Lie bracket, defined as follows:

(9)  [f, g] ≜ (∂g/∂x) f − (∂f/∂x) g
Given a distribution Δ and a given vector field f (both defined on the open set M), [f, Δ] denotes the distribution whose vectors are the Lie brackets of f with the vectors in Δ.
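As a minimal sketch of the Lie bracket in coordinates, [f, g] = (∂g/∂x) f − (∂f/∂x) g, with illustrative fields of our own choosing:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

def lie_bracket(f, g, x):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(x) * f - f.jacobian(x) * g

# Hypothetical fields
f = sp.Matrix([x2, 0])
g = sp.Matrix([0, x1])
print(lie_bracket(f, g, x).T)  # bracket = (-x1, x2)
```

Note that the bracket of these two fields is not in their span, which is exactly the mechanism by which the recursive algorithm can enlarge the controllability distribution.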
The analytic condition to check the weak local controllability of nonlinear time-invariant systems is given by the following fundamental result:
Theorem 3 (Controllability Rank Condition)
Algorithm 3 converges in an open and dense set of M, and the convergent distribution is obtained in at most n − 1 steps. If the convergent distribution is nonsingular at x_0 and its dimension is equal to n at x_0, then the system is weakly locally controllable from x_0 (sufficient condition). Conversely, if the system is weakly locally controllable from x_0, the dimension of the above distribution is n in a dense neighbourhood of x_0 (necessary condition).
Proof.
4.3 Time-varying nonlinear systems
We now consider the general case of timevarying nonlinear systems. In this section we only provide the analytic condition. In section 5.2 we prove its validity.
The new condition is similar to the one that holds in the time-invariant case (i.e., the controllability rank condition provided in section 4.2). The only difference resides in the computation of the controllability distribution.
The new distribution is given by algorithm 4, where we introduced the following operator:

(10)  [f_0, g]_e ≜ [f_0, g] + ∂g/∂t
Note that the distribution returned by the algorithm above is in general time-dependent.
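A minimal symbolic sketch of the new operator, assuming it combines the ordinary bracket with the drift and the explicit time derivative of the field; this is the state part of the bracket of the extended vector fields used in section 5. The names and the example fields below are ours:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
x = sp.Matrix([x1, x2])

def bracket(f, g):
    """Ordinary Lie bracket [f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(x) * f - f.jacobian(x) * g

def extended_bracket(f0, g):
    """Sketch of the operator in (10): the bracket with the drift plus the
    explicit time derivative of g."""
    return bracket(f0, g) + g.diff(t)

# Hypothetical drift and time-varying input field
f0 = sp.Matrix([x2, 0])
g = sp.Matrix([t, 0])
print(extended_bracket(f0, g).T)
```

Here the ordinary bracket vanishes and only the ∂/∂t term survives, illustrating the remark below that the time-derivative term must be kept even when the drift is null.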
The analytic condition to check the weak local controllability of nonlinear timevarying systems is given by the following fundamental new result:
Theorem 4 (Extended Controllability Rank Condition)
Algorithm 4 converges in an open and dense set of M × R, and the convergent distribution is obtained in at most n − 1 steps. If the convergent distribution is nonsingular at x_0 and at a given time t_0, and its dimension is equal to n at (x_0, t_0), then the system is weakly locally controllable from (x_0, t_0) (sufficient condition). Conversely, if the system is weakly locally controllable from (x_0, t_0), the dimension of the above distribution is n in a dense neighbourhood of (x_0, t_0) (necessary condition).
Proof.
The proof is given in section 5.2.
We conclude this section with the following remarks:

Algorithm 4 differs from algorithm 3 only in the recursive step. In particular, the operator given in (10) substitutes the Lie bracket with f_0. In other words, the new algorithm is obtained with the substitution:

[f_0, ·] → [f_0, ·] + ∂/∂t

If f_0 is null, in the recursive step we still need to add the term ∂/∂t.

If Δ is generated by the vector fields g_1, …, g_s, the distribution obtained by applying the operator in (10) is generated by the corresponding brackets of the generators (see appendix B). This allows us to easily implement algorithm 4, since it suffices to compute the Lie brackets of the generators of Δ at each step.

The extended controllability rank condition reduces to the condition provided in section 4.1 in the linear case.
5 Proofs
Both proofs are obtained by including the time variable in the state. We denote the new extended state by x_e ≜ (t, x) and we have dim(x_e) = n + 1.
We characterize the system in (1) by using the extended state. From the first equation in (1) we obtain:

(11)  ẋ_e = f_0^e(x_e) + Σ_{i=1}^{m} f_i^e(x_e) u_i

where:

(12)  f_0^e ≜ (1, f_0),   f_i^e ≜ (0, f_i),   i = 1, …, m

Regarding the outputs, we remark that we need to include a new output, which is the time t itself. Indeed, it is a common (and implicit) assumption that all the system inputs and outputs are synchronized. For instance, in a real system, the inputs and outputs are measured by sensors. The sensors provide their measurements together with the time when each measurement has occurred. This means that our system is also equipped with an additional sensor, the clock (i.e., a sensor that measures time). Therefore, a full description of our system in the extended state is given by:

(13)  ẋ_e = f_0^e(x_e) + Σ_{i=1}^{m} f_i^e(x_e) u_i,    y_0 = t,   y_j = h_j(x_e),   j = 1, …, p
5.1 Proof of theorem 2
By introducing the extended state we transformed our original non-autonomous system in (1) into the autonomous system in (13). We are therefore allowed to use the results stated by theorem 1. The algorithm that provides the observable codistribution in the extended state is algorithm 5.
We denoted by d_e the differential with respect to the extended state. From theorem 1 we know that algorithm 5 converges in an open and dense set of M × R and the convergent codistribution is obtained in at most n steps. If the convergent codistribution is nonsingular at x_e^0 and its dimension is equal to n + 1 at x_e^0, then the system is weakly locally observable at x_e^0. Conversely, if the system is weakly locally observable at x_e^0, the dimension of the above codistribution is n + 1 in a dense neighbourhood of x_e^0.
From these results, we immediately obtain the proof of the results stated by theorem 2 by using the following fundamental separation property. At every step of algorithm 5, we can split the codistribution into two codistributions, as follows:

(14)  Ω_e = span{d_e t} + Ω

where Ω is generated by differentials of scalar fields only with respect to the state x (and not the extended state). The validity of the property above is a consequence of the fact that the extended system is characterized by the output y_0 = t and, consequently, we have d_e t ∈ Ω_e.
For any λ such that d_e λ ∈ Ω_e at a given step, the following covectors belong to Ω_e at the next step:

d_e L_{f_0^e} λ,   d_e L_{f_i^e} λ   (i = 1, …, m)

From the structure of the extended vector fields given in (12) we obtain:

(15)  L_{f_0^e} λ = ∂λ/∂t + L_{f_0} λ,    L_{f_i^e} λ = L_{f_i} λ   (i = 1, …, m)

As a result, by using (5), for any λ such that d_e λ ∈ Ω_e at a given step, the following covectors belong to Ω_e at the next step:

d_e (∂λ/∂t + L_{f_0} λ),   d_e L_{f_i} λ   (i = 1, …, m)

Finally, by using (14), we obtain that, for any λ such that dλ ∈ Ω at a given step, the following covectors belong to Ω at the next step:

d (∂λ/∂t + L_{f_0} λ),   d L_{f_i} λ   (i = 1, …, m)

This proves that Ω is generated by algorithm 2.
Finally, the convergence of algorithm 5 (and consequently of algorithm 2) occurs in at most n − 1 steps instead of n steps. This is proved as follows. The dimension of the codistribution at the initialization satisfies dim ≥ 2 in an open and dense set of M × R. Indeed, at the initialization, the codistribution contains span{d_e t} together with the span of the differentials of the outputs, and the former is independent of the latter in an open and dense set of M × R. By using this property, from lemmas 1.9.1, 1.9.2 and 1.9.6 in [7] we immediately obtain that the convergence of algorithm 5 is achieved in at most n − 1 steps.
5.2 Proof of theorem 4
We know that the first component of the extended state x_e (i.e., the time t) is not controllable, and it will not be surprising to obtain this result through our analysis.
We use algorithm 3 to compute the controllability distribution in the extended state for the system defined by (13). We obtain algorithm 6:
From theorem 3 we know that algorithm 6 converges in an open and dense set of and the convergent distribution is obtained in at most steps. If the convergent distribution is non singular at and its dimension is equal to at , then the system is weakly locally controllable from . Conversely, if the system is weakly locally controllable from , the dimension of the above distribution is in a dense neighbourhood of .
From these results it is immediate to prove the results stated by theorem 4.
We have ():
and by using (10) we have:
(16) 
Similarly, we also obtain:
(17) 
(18) 
6 Simple illustrative examples
To illustrate the two new conditions for observability and controllability, we provide two examples. Note that they are deliberately trivial, to better highlight the main features of the two algorithms.
6.1 Observability
We consider the system given in (1) with ,
where, in the function , is the component of the state and is to the power of . We use algorithm 2 to compute the observable codistribution. In the following, we denote by the codistribution returned by algorithm 2 after steps. We obtain the following result. span, with:
We compute . We need to compute and . We have:
Hence span with:
Since , we need to repeat the recursive step and compute . We need to compute the two scalar fields:
Hence, span, with:
and . By proceeding in this manner we finally obtain span and . We conclude that the state is weakly locally observable.
We remark that, in this driftless case, we have:
Therefore, the observable codistribution is obtained by only considering the output and its time derivatives up to the order. In other words, the result is independent of the system input. We would obtain the weak local observability by setting . In addition, we would also obtain the weak local observability for the system with and characterized by the output
where are scalar functions with nonzero derivative (the case considered above corresponds to the case , )
This result is not surprising. The output weights the components of the state in a different manner. For instance, for , it suffices to take the output at two distinct non-vanishing times, , to obtain two independent equations in the two components of the state. The same holds with the output . In this case we first obtain and then, since the functions have non-vanishing derivative, they can be inverted to give the components of the state.
Finally, note that the case characterized by and can be investigated by using the method in section 3.1 for linear time-varying systems. The result that we obtain is the same.
6.2 Controllability
We consider the system given in (1) with ,
We use algorithm 4 to compute the controllability distribution. As for the case of observability, we denote by the distribution returned by algorithm 4 after steps. We obtain the following result.
We compute . We need to compute . We have:
Hence:
Since , we need to repeat the recursive step and compute . We need to compute the following vector fields: , , and
Hence:
By proceeding in this manner we finally obtain:
and . We conclude that the system is weakly locally controllable.
7 Aerospace application
We consider a rocket, like a lunar module, that moves in the presence of gravity and in the absence of an atmosphere. We assume that it is equipped with a monocular camera able to detect a point feature on the ground. Without loss of generality, we introduce a global frame whose origin coincides with the point feature and whose vertical axis points vertically upwards. We will adopt lowercase letters to denote vectors in this frame. We define the rocket local frame as the camera frame. In addition, we assume that, with respect to this frame, the moment of inertia tensor is diagonal. In particular, by approximating the rocket with a cylinder, the vertical axis of the local frame is along the cylinder axis and the rocket's center of gravity belongs to this axis. We will adopt uppercase letters to denote vectors in the local frame. Fig. 1 illustrates our system. We adopt a quaternion to represent the rocket orientation. Indeed, even if this representation is redundant, it is very powerful, since the dynamics can be expressed in a very easy and compact notation (see [14]).
Our system is characterized by the state:
(19) 
where:

is the position of the rocket in the global frame.

is the speed of the rocket in the global frame.

is the unit quaternion that describes the rotation between the global and the local frames. (A quaternion is a unit quaternion if the product with its conjugate is 1; see [14].)

, is the angular speed expressed in the local frame (note that we adopted the symbol instead of because the latter has already been used to denote the observable codistribution).

is the magnitude of the gravitational acceleration, which is unknown.
Additionally, we introduce the following quantities:

is the magnitude of the force provided by the main engine of the rocket. The direction of this force is along the vertical axis of the local frame.

is the mass of the rocket.

is the moment of inertia tensor of the rocket, which is diagonal.

denotes the torque that acts on the rocket and which is powered by the secondary rocket engines.
In the following, for each vector defined in the space, the subscript will be adopted to denote the corresponding imaginary quaternion. For instance, . By using the properties of the unit quaternions, we can easily obtain vectors in the global frame starting from the local frame and vice versa. For instance, given in the local frame, we build , then we compute the quaternion product . The result will be an imaginary quaternion (the product of a unit quaternion times an imaginary quaternion times the conjugate of the unit quaternion is always an imaginary quaternion), i.e., . The vector is the rocket angular speed in the global frame. Conversely, to obtain this vector in the local frame starting from , it suffices to compute the quaternion product .
By using this notation, the rocket acceleration generated by the main engine in the global frame is and, by including the gravity, we have:
where is the fourth fundamental quaternion unit ().
Note that the mass decreases during the maneuver, due to the fuel consumption. We assume that , where is the initial mass and characterizes the consumption rate, which is constant. We denote by the time required to consume the entire amount of fuel. Finally, we assume that , meaning that the weight of the entire amount of fuel is much smaller than the weight of the rocket. Under these assumptions, we can use the following approximation:
(20) 
where , and .
To complete the derivation of the dynamics, we need to deal with the angular components. We start by reminding the reader of Euler's equation for rigid body dynamics:

(21) 

where the symbol "×" denotes the vector product. In our reference frame, where the moment of inertia tensor is diagonal, we obtain the following three equations:
(22) 
We compute the values of the moment of inertia in our reference frame. We approximate the rocket with a cylinder. Since the center of gravity belongs to the vertical axis, we have:

(23) 

where is the radius of the approximating cylinder. The computation of the other two components, and , is a bit more complex since, due to the fuel consumption, the center of gravity moves (we assume that it moves along the axis). We use the parallel axis theorem [15]. This theorem allows us to compute the moment of inertia of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes. Specifically:
where is the moment of inertia with respect to the axis , is the moment of inertia with respect to the axis parallel to but passing through the object’s center of gravity, and is the distance between the two axes.
Because of the symmetry of the cylinder, . In addition, from the Parallel axis theorem we have:
can be easily computed. For instance, for a cylinder of mass , radius and length it is:
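As a numeric sketch of the parallel-axis step, using the standard formulas for a uniform solid cylinder (the values below are illustrative, not taken from the paper):

```python
# Numeric sketch of the moment-of-inertia computations for a uniform solid
# cylinder; the mass, radius, length and shift values are hypothetical.
m, R, L = 2000.0, 2.0, 8.0       # kg, m, m

# About the symmetry (vertical) axis: I = (1/2) m R^2
I_z = 0.5 * m * R**2

# About a transverse axis through the center of gravity: I = m (3 R^2 + L^2) / 12
I_com = m * (3 * R**2 + L**2) / 12.0

# Parallel axis theorem: add m d^2 when the axis is shifted by d along the
# cylinder axis (the shift models the moving center of gravity)
d = 0.5                          # m
I_shifted = I_com + m * d**2
print(I_z, I_com, I_shifted)
```

The m d² correction is what makes the transverse moments of inertia explicitly time-dependent here, since the shift d of the center of gravity evolves with the fuel consumption.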
By using the same approximation given in (20), we introduce the following approximations: