1 Introduction
Cyber-Physical Systems (CPS) are characterized by the tight interconnection of cyber and physical components. CPS are not only prone to actuator and sensor failures but also to adversarial attacks on the control and sensing modules. Security of CPS is no longer restricted to the cyber domain, and recent incidents such as the Stuxnet malware [20] and the security flaws reported on modern cars [13, 19] motivated the recent interest in security of CPS (see, for example, [6, 41, 1, 26] and references therein). During the last decade, a number of security problems have been tackled by the control community, e.g., denial-of-service [45, 8, 34, 14], replay attacks [27], man-in-the-middle attacks [40], false data injection [25], etc.
This paper addresses the problem of state estimation when several sensors and actuators are under attack. We broadly refer to state estimation in an adversarial environment as secure state estimation. Our attack model is quite general: we impose no constraints on the magnitude, statistical properties, or temporal characteristics of the signals manipulated by the adversary.
Secure state estimation has gained the attention of the control community over the past decade [12]. In one line of work, the problem of state estimation and control under sensor attacks is investigated and the authors derived necessary and sufficient conditions under which estimation and stabilization are possible [11]. Shoukry et al. [36] further refined this condition and called it sparse observability. Chong et al. [7] found an equivalent condition for continuous-time systems and called it observability under attack. Nakahira et al. [29] investigated a similar problem while considering the asymptotic correctness of state estimation. The authors relaxed the sparse observability condition to sparse detectability and showed it is a necessary and sufficient condition for asymptotic correctness. The noisy version of this problem has been investigated in the literature [3, 2, 24, 28, 23]. Mishra et al. [23] derived the optimal solution for Gaussian noise. In this paper, we solve the more general problem of actuator and sensor attacks that includes, as a special case, sensor attacks.
Under the sparse attack model, in which an adversary can only target a bounded number of actuators and sensors, state estimation is intrinsically a combinatorial problem. Shoukry et al. [35] proposed a novel secure state estimator using the Satisfiability Modulo Theory (SMT) paradigm, called Imhotep-SMT. The authors only considered attacks on sensors. In this paper we address the more general problem of sensor and actuator attacks and build an SMT-based estimator that can correctly reconstruct the state under both types of attacks.
In another line of work, the problem of secure state estimation has been studied when the exact model of the system is not available [43, 30]. Tiwari et al. [42] proposed an online learning method that builds so-called safety envelopes from attack-free data and uses them to detect abnormalities in the data when the system is under attack. In [39, 38] the authors considered system identification under sensor attacks. In all of these works, the adversarial agent is restricted to attacking sensors only.
Pasqualetti et al. [31] investigated the problem of attack detection and identification. The authors related the undetectable and unidentifiable attacks to the zero dynamics of the underlying system. The proposed attack identification mechanism consists of a number of fault-monitor filters that provide formal guarantees for the existence of the attack. The number of filters, however, grows exponentially with the number of attacked sensors/actuators, which hinders scalability. In another work [33], the authors investigated detectability and identifiability of attacks in the presence of disturbances, and the concept of security index is generalized to dynamical systems. The proposed method is inherently combinatorial and does not scale well with the number of attacked sensors and actuators. In this paper, by leveraging the SMT paradigm, we design a state estimator that scales well with the number of sensors and actuators.
Fault isolation and fault detection filters are classical control topics closely related to secure state estimation. Traditional fault tolerant filters can detect faults on actuators and sensors; however, they are not adequate for the purpose of security. Some of these filters assume a priori knowledge (statistical or temporal) of the fault signals [5], an assumption that does not hold in the security framework. The classical fault detection filters [17] do not guarantee identification of all possible adversarial signals, and zero-dynamics attacks remain stealthy. As an alternative approach, robustification has been used in order to estimate the state despite sparse attacks, by deploying either Kalman filters or principal component analysis [22, 10]. The main drawback of these methods is the absence of formal guarantees for the correctness of the estimated state. In contrast, the method proposed in this paper is guaranteed to reconstruct the state correctly in spite of attacks on sensors and/or actuators, provided the number of attacked components is below a specified threshold that depends on the system. In a recent work [15], Harirchi et al. proposed a novel fault detection approach using techniques from model invalidation. The authors pursued a worst-case scenario approach and therefore their framework is suitable for security. However, necessary and sufficient conditions for state estimation in a general adversarial setting were not investigated in [15]. In this paper, we precisely characterize, by providing necessary and sufficient conditions, the class of systems for which state reconstruction is possible despite sensor and/or actuator attacks. The contributions of this paper can be summarized as follows:

We introduce the notion of sparse strong observability by drawing inspiration from sparse observability [11, 36] and the classical notion of strong observability [16]. We show that this is the relevant property when the adversarial agent not only compromises sensor measurements but can also attack inputs.

We develop an observer by leveraging the SMT approach to harness the exponential complexity of the problem. Our observer consists of two blocks interacting iteratively until the true state is found (see Section 4 for a detailed explanation of the observer’s architecture).

We propose two methods to further decrease the running time of the proposed algorithm by reducing the number of iterations of the observer. The first method exploits heuristics that can be efficiently computed at each iteration. The second method is inspired by the QuickXplain algorithm [18], which efficiently finds an irreducible inconsistent set (see Section 4 for a detailed discussion of both methods). We demonstrate the scalability of our proposed observer through several numerical simulations.
A preliminary version of some of the results in this paper was presented in [37], where we introduced the notion of sparse strong observability and drew the connection to secure state estimation. However, the formal proofs were not provided due to space limitations. Furthermore, we propose a new observer that outperforms the observer introduced in [37]. This paper is organized as follows. Section 2 introduces notation followed by the attack model and the precise problem formulation. In Section 3, we introduce the notion of sparse strong observability and relate this notion to the problem of state reconstruction when some of the inputs and outputs are under adversarial attack. This section concludes with the main theoretical contribution of this paper, Theorem 3. Section 4 is devoted to designing an observer by exploiting the SMT paradigm. Section 5 provides the simulation results, followed by Section 6, which concludes the paper.
2 Problem Definition
2.1 Notation
We denote the sets of real, natural and binary numbers by $\mathbb{R}$, $\mathbb{N}$ and $\mathbb{B}$, respectively. We represent vectors and real numbers by lowercase letters, such as $x$, $u$ and $y$, and matrices with capital letters, such as $A$. Given a vector $x$ and a set $\Gamma$, we use $x_{\Gamma}$ to denote the vector obtained from $x$ by removing all elements except those indexed by the set $\Gamma$. Similarly, for a matrix $M$ we use $M_{\Gamma,K}$ to denote the matrix obtained from $M$ by eliminating all rows and columns except the ones indexed by $\Gamma$ and $K$, respectively. In order to simplify the notation, we write $M_{\Gamma}$ when all columns are kept and $M_{\cdot,K}$ when all rows are kept. We denote the complement of $\Gamma$ by $\bar{\Gamma}$. We use the notation $\{z_t\}_{t_1}^{t_2}$ to denote the sequence $z_{t_1},\dots,z_{t_2}$, and we drop the sub(super)scripts whenever they are clear from the context.

A Linear Time Invariant (LTI) system is described by the following equations:
$$x_{t+1} = A x_t + B u_t, \qquad y_t = C x_t + D u_t, \quad (1)$$

where $u_t \in \mathbb{R}^m$, $x_t \in \mathbb{R}^n$ and $y_t \in \mathbb{R}^p$ are the input, state and output variables, respectively, $t \in \mathbb{N}$ denotes time, and $A$, $B$, $C$ and $D$ are system matrices with appropriate dimensions. We use $\Sigma$ to denote the system described by (1). The order of an LTI system is defined as the dimension of its state space. A trajectory of the system consists of an input sequence with its corresponding output sequence. For an LTI system,
$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, \quad (2)$$

$$\mathcal{M} = \begin{bmatrix} D & 0 & \cdots & 0 \\ CB & D & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ CA^{n-2}B & \cdots & CB & D \end{bmatrix}, \quad (3)$$

are the observability and invertibility matrices, respectively, where $n$ is the order of the underlying system. In this paper, we often work with subsets of inputs and outputs. For a subset of outputs $\Gamma$, we use the notation $\mathcal{O}_{\Gamma}$ to denote the observability matrix of the outputs in the set $\Gamma$. For a set of inputs $K$, we use the notation $\mathcal{M}_{\Gamma,K}$ to denote the invertibility matrix from the inputs in $K$ to the outputs in $\Gamma$. For $x \in \mathbb{R}^n$, we define its support set as the set of indices of its nonzero components, denoted by $\mathrm{supp}(x)$. Similarly, we define the support of the sequence $\{x_t\}$ as $\bigcup_t \mathrm{supp}(x_t)$. The observer proposed in this paper uses batches of inputs and outputs in order to reconstruct the state. We reserve capital bold letters to denote these batches,
$$\mathbf{Y}_t^N = \begin{bmatrix} y_t \\ y_{t+1} \\ \vdots \\ y_{t+N-1} \end{bmatrix}, \quad (4)$$

$$\mathbf{U}_t^N = \begin{bmatrix} u_t \\ u_{t+1} \\ \vdots \\ u_{t+N-1} \end{bmatrix}, \quad (5)$$

where $N \in \mathbb{N}$. Whenever $N = n$ is the order of the underlying system, we may drop the superscript for ease of notation. For a subset of outputs (inputs), denoted by $\Gamma$ ($K$), we use the notation $\mathbf{Y}_{t,\Gamma}^N$ ($\mathbf{U}_{t,K}^N$) for the batches of length $N$ that only consist of the outputs (inputs) in the set $\Gamma$ ($K$). For a vector $x$, we denote a generic norm, the $\ell_2$ norm and the $\ell_0$ pseudo-norm of $x$ by $\|x\|$, $\|x\|_2$ and $\|x\|_0$, respectively.
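To make the batch notation concrete, the observability matrix (2) and the block-Toeplitz invertibility matrix (3) can be assembled numerically as follows. This is an illustrative sketch; the function names are ours, not from the paper:

```python
import numpy as np

def observability_matrix(A, C, N):
    """Stack C, CA, ..., CA^{N-1}: maps the state x_t to the output batch."""
    blocks, M = [], np.eye(A.shape[0])
    for _ in range(N):
        blocks.append(C @ M)
        M = M @ A
    return np.vstack(blocks)

def invertibility_matrix(A, B, C, D, N):
    """Block lower-triangular Toeplitz matrix of the Markov parameters
    D, CB, CAB, ...: maps the input batch to the output batch."""
    p, m = C.shape[0], B.shape[1]
    markov, M = [D], np.eye(A.shape[0])
    for _ in range(N - 1):
        markov.append(C @ M @ B)
        M = M @ A
    T = np.zeros((N * p, N * m))
    for i in range(N):
        for j in range(i + 1):
            T[i * p:(i + 1) * p, j * m:(j + 1) * m] = markov[i - j]
    return T
```

With these two maps, a length-$N$ window of outputs satisfies $\mathbf{Y} = \mathcal{O}\,x_t + \mathcal{M}\,\mathbf{U}$, which is the relation the observer exploits.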
2.2 System and Attack model
This work is concerned with the problem of state reconstruction of LTI systems. We consider the scenario in which sensors and actuators are both prone to adversarial attacks. The ultimate goal is to reconstruct the state despite these attacks. In this part, we define the attack model and conclude this section with the precise problem statement. The system $\Sigma$ is described by the following equations:

$$x_{t+1} = A x_t + B u_{s,t}, \qquad y_{s,t} = C x_t + D u_{s,t}. \quad (6)$$

Without loss of generality, we assume $\begin{bmatrix} B \\ D \end{bmatrix}$ to be of full column rank.
Each actuator (sensor) corresponds to one input (output), and we use input (output) instead of actuator (sensor) in the rest of this paper. In this setup, the adversary can attack both inputs and outputs. We model these attacks by additive terms and by imposing a sparsity constraint on them,

$$u_{s,t} = u_t + a_t^u, \qquad y_t = y_{s,t} + a_t^y, \quad (7)$$

where $u_t$ and $y_t$ are the controller-designed input and the observed output, respectively, and $a_t^u$ and $a_t^y$ are signals injected by the malicious agent. In the rest of this paper, we refer to these signals as the attack of the adversarial agent. We use the subscript $s$ for signals that come directly from/to the system. The controller can only observe $y_t$ and compute the input $u_t$. This generic attack model is depicted in Figure 1.

When the adversary attacks an input (output), it can change its value to any arbitrary number without explicitly revealing its presence. The only limitation that we impose on the power of the malicious agent is the maximal number of inputs and outputs that can be attacked.
[Bound on the number of attacks]
The number of inputs and outputs under attack are bounded by $q_u$ and $q_y$, respectively.

Therefore, the malicious agent can attack a subset of inputs and a subset of outputs, denoted by $K$ and $\bar{\Gamma}$,^{1}^{1}1For ease of exposition, we use $K$ to denote the under-attack inputs while using $\Gamma$ for the set of attack-free outputs, i.e., the set of under-attack outputs is represented by $\bar{\Gamma}$ in this paper. respectively, such that $|K| \le q_u$ and $|\bar{\Gamma}| \le q_y$. Note that these sets are not known to the controller and only upper bounds on their cardinality are given. Once the adversary chooses these sets, inputs and outputs outside these sets remain attack-free. This assumption is realistic when the time it takes for the adversarial agent to attack new inputs and outputs is large compared to the time scale of the system.
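The attack model (7) can be simulated directly. The sketch below is our own illustrative code (function and variable names are hypothetical); it applies sparse additive attacks on fixed support sets, as the model prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_attacked(A, B, C, D, x0, U, attacked_inputs, attacked_outputs):
    """Simulate x_{t+1} = A x_t + B(u_t + a_t^u), y_t = C x_t + D(u_t + a_t^u) + a_t^y,
    where the adversarial terms are supported on index sets chosen once by
    the attacker and never changed (the sparse attack model)."""
    x, Y_obs = np.array(x0, dtype=float), []
    for u in U:
        a_u = np.zeros_like(u, dtype=float)
        a_u[attacked_inputs] = rng.standard_normal(len(attacked_inputs))
        u_sys = u + a_u                    # input actually applied to the plant
        y = C @ x + D @ u_sys              # plant output
        a_y = np.zeros(y.shape)
        a_y[attacked_outputs] = rng.standard_normal(len(attacked_outputs))
        Y_obs.append(y + a_y)              # output seen by the controller
        x = A @ x + B @ u_sys
    return np.array(Y_obs)
```

Note that an input attack corrupts every output through the dynamics, whereas an output attack only corrupts the targeted measurement; this asymmetry is why input attacks require the strong observability machinery of Section 3.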
We now precisely define the main problem we tackle in this paper.
[Secure state estimation]
For the linear system defined by (6) under the attack model defined by (7), what are necessary and sufficient conditions under which the state of the compromised system (6) can be reconstructed with bounded delay?
It is well-known that the secure state estimation problem, when only outputs are under adversarial attack, is combinatorial and belongs to the class of NP-hard problems [35, 31]. This motivates the design of an observer that tames the complexity of this problem.
[Secure observer design]
Assuming the conditions in Problem 1 are satisfied, how can we design an observer that reconstructs the state of the compromised system?
3 Conditions for Secure State Estimation
In this section, we solve Problem 1, i.e., we provide conditions on the system described by (6) under which state reconstruction (with bounded delay) is possible. We first develop the notion of sparse strong observability. This section concludes with Theorem 3 that relates this notion to the solution of Problem 1.
In the absence of attacks, the problem of estimating the state of a system while some of the inputs are unknown has been studied and the notion of strong observability was introduced in the literature [16]. For strongly observable systems, it is possible to estimate the state of the system without the knowledge of inputs. The following definition formalizes this concept.
[Strong observability]
An LTI system is called strongly observable if for any initial state $x_0$ and any input sequence $\{u_t\}$ there exists an integer $N$ such that $x_0$ can be uniquely recovered from $\{y_t\}_{0}^{N}$.

Note that $N$ is always upper-bounded by the order of the system. Linearity implies the following lemma.

An LTI system is strongly observable if and only if $y_t = 0$ for all $t \ge 0$ implies that $x_0 = 0$.
Please refer to Appendix.
It is straightforward to conclude the following corollary.
An LTI system is not strongly observable if and only if there exist a nonzero initial state $x_0 \neq 0$ and an input sequence $\{u_t\}$ such that $y_t = 0$ for all $t \ge 0$.
Follows directly from Lemma 3.
It is well understood that when the adversary is restricted to attacking outputs, state reconstruction is possible only if there is enough redundancy in the outputs of the system. This redundancy can be stated in terms of observability of the system after removing a number of outputs. This property has been formalized in [11] and is called sparse observability [36]. By analogy with sparse observability, we define the notion of sparse strong observability as follows:
[sparse strong observability]
An LTI system with $m$ inputs and $p$ outputs is $(q_u, q_y)$-sparse strongly observable if for any $K$ and $\Gamma$ with $|K| \le q_u$ and $|\bar{\Gamma}| \le q_y$, the system $\Sigma(A, B_{\cdot,K}, C_{\Gamma}, D_{\Gamma,K})$, in which the inputs in $K$ are unknown and only the outputs in $\Gamma$ are available, is strongly observable.

Note that in Definition 3, the values of $q_u$ and $q_y$ are upper bounded by the number of inputs and outputs, respectively. This modified notion of strong observability is the key to formalizing redundancy across inputs and outputs. We show that a necessary and sufficient condition for secure state estimation can be stated using this property. Note that $(0, q_y)$-sparse strong observability is equivalent to the notion of sparse observability that was introduced before in the literature [11, 36, 23]. The following theorem is the main theoretical result in this paper.
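Sparse strong observability can be checked numerically by enumerating subsystems and applying a standard rank test for strong observability: with batch maps $\mathcal{O}$ and $\mathcal{M}$ built over a window equal to the order, the state is uniquely recoverable despite unknown inputs iff $\mathrm{rank}\,[\mathcal{O} \;\; \mathcal{M}] = n + \mathrm{rank}\,\mathcal{M}$. The following sketch is illustrative; the function names and parameterization are our own assumptions, not the paper's:

```python
import numpy as np
from itertools import combinations

def batch_maps(A, B, C, D, N):
    """Observability matrix O_N and block-Toeplitz invertibility matrix M_N."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(N)])
    T = np.zeros((N * p, N * m))
    for i in range(N):
        for j in range(i + 1):
            blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
            T[i * p:(i + 1) * p, j * m:(j + 1) * m] = blk
    return O, T

def strongly_observable(A, B, C, D, tol=1e-9):
    """x0 is uniquely recoverable despite unknown inputs iff O has full
    column rank and col(O) meets col(M) trivially: rank([O M]) = n + rank(M)."""
    n = A.shape[0]
    O, T = batch_maps(A, B, C, D, n)
    return np.linalg.matrix_rank(np.hstack([O, T]), tol) == \
        n + np.linalg.matrix_rank(T, tol)

def sparse_strongly_observable(A, B, C, D, q_in, q_out):
    """Treat every q_in-subset of inputs as unknown and delete every
    q_out-subset of outputs; every resulting subsystem must be strongly
    observable (checking the extreme cardinalities suffices by monotonicity)."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    for K in combinations(range(m), q_in):
        for R in combinations(range(p), q_out):
            keep = [i for i in range(p) if i not in R]
            BK = B[:, list(K)].reshape(n, len(K))
            DK = D[np.ix_(keep, list(K))].reshape(len(keep), len(K))
            if not strongly_observable(A, BK, C[keep, :], DK):
                return False
    return True
```

The enumeration over subsets is exponential in $q_{in}$ and $q_{out}$, which is precisely the combinatorial cost the SMT-based observer of Section 4 is designed to tame.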
Let the number of attacked inputs and outputs be bounded by $q_u$ and $q_y$, respectively. Under the attack model (7), the state can be reconstructed (possibly with delay) if and only if the underlying system is $(2q_u, 2q_y)$-sparse strongly observable.
It is worth mentioning that the maximum number of attacked outputs, $q_y$, cannot be greater than $p/2$; this is an inherent limitation of LTI systems with $p$ outputs [11]. However, the maximum number of attacked inputs is not inherently restricted in this way and can take values up to $m$, depending on the specific system under consideration.
Pasqualetti et al. [31] addressed the problem of attack detection and identification in the presence of adversarial inputs and outputs for continuous-time LTI systems. They showed that attack identification is possible if and only if, for any $K$ and $\Gamma$ with $|K| \le 2q_u$ and $|\bar{\Gamma}| \le 2q_y$, the system $\Sigma(A, B_{\cdot,K}, C_{\Gamma}, D_{\Gamma,K})$ does not have any invariant zeros.
Given the state and the dynamics of the system, the attack can be identified; attack identification therefore comes for free with a solution to the secure state estimation problem. Conversely, strongly observable LTI systems do not have any invariant zeros (see, for example, Theorem 1.8 in [16]), so under this sparse attack model the conditions for identifying the attack also enable one to reconstruct the state. In other words, the characterizations of attack identifiability and secure state estimation are equivalent for LTI systems. However, we provide a direct proof that does not require this machinery.
First we show that sparse strong observability is a sufficient condition for correctly estimating the state. For the sake of contradiction, assume that the state cannot be reconstructed, i.e., there exist two different (initial) states, denoted by $x_0^1$ and $x_0^2$, that cannot be distinguished under this attack model. More precisely, there exist two attack strategies that lead to exactly the same observed trajectories. We reserve superscripts $1$ and $2$ for variables across these two scenarios. Let us denote the adversarial additive terms by $\{a_t^{u,i}\}$ and $\{a_t^{y,i}\}$ for $i \in \{1,2\}$. We represent the corresponding inputs and outputs of the system by $\{u_{s,t}^i\}$ and $\{y_{s,t}^i\}$, and the common (corrupted) measured output and controller input sequences are denoted by $\{y_t\}$ and $\{u_t\}$, respectively.

By the assumption of the attack model (7), there exist $K^i = \mathrm{supp}(\{a_t^{u,i}\})$ and $\bar{\Gamma}^i = \mathrm{supp}(\{a_t^{y,i}\})$ with bounded cardinality such that

$$|K^i| \le q_u, \qquad |\bar{\Gamma}^i| \le q_y, \quad (8)$$

for $i \in \{1,2\}$. Note that

$$u_{s,t}^i = u_t + a_t^{u,i}, \quad (9)$$

where $u_t$ is the controller-designed input. Therefore

$$u_{s,t}^1 - u_{s,t}^2 = a_t^{u,1} - a_t^{u,2}. \quad (10)$$

Similarly, it is straightforward to conclude that $y_{s,t}^1 - y_{s,t}^2 = a_t^{y,2} - a_t^{y,1}$. We are now ready to reach the contradiction. The underlying system is LTI, thus the input sequence $\{u_{s,t}^1 - u_{s,t}^2\}$ with the initial state $x_0^1 - x_0^2$ generates the output sequence $\{y_{s,t}^1 - y_{s,t}^2\}$. The underlying system is $(2q_u, 2q_y)$-sparse strongly observable, so the subsystem $\Sigma(A, B_{\cdot,K}, C_{\Gamma}, D_{\Gamma,K})$ is strongly observable for any $K$ and $\Gamma$ with $|K| \le 2q_u$ and $|\bar{\Gamma}| \le 2q_y$. Let us choose $K$ and $\Gamma$ as

$$K = K^1 \cup K^2, \qquad \Gamma = \Gamma^1 \cap \Gamma^2. \quad (11)$$

Note that $\{(y_{s,t}^1 - y_{s,t}^2)_{\Gamma}\}$ is a zero sequence, since the outputs in $\Gamma$ are attack-free in both scenarios; hence by Lemma 3 we conclude that the corresponding initial state $x_0^1 - x_0^2$ is zero, which contradicts the assumption $x_0^1 \neq x_0^2$.
Now we prove that sparse strong observability is a necessary condition. For the sake of contradiction, suppose that the system described by (6) is not $(2q_u, 2q_y)$-sparse strongly observable, but reconstructing the state (possibly with delay) is still possible. We construct two system trajectories with different (initial) states that have exactly the same controller input and observed output sequences under suitable attack strategies (additive terms). This implies that estimating the correct state is indeed impossible, thereby establishing the desired contradiction.

By assumption, the underlying system is not $(2q_u, 2q_y)$-sparse strongly observable, so there exist subsets of inputs and outputs denoted by $K$ with $|K| \le 2q_u$ and $\Gamma$ with $|\bar{\Gamma}| \le 2q_y$, respectively, such that $\Sigma(A, B_{\cdot,K}, C_{\Gamma}, D_{\Gamma,K})$ is not strongly observable. Corollary 3 implies that there exist an initial condition $x_0 \neq 0$ and an input sequence $\{u_t\}$ (with its support lying inside $K$) that generate an output sequence $\{y_t\}$ with $y_{\Gamma,t} = 0$. One can rewrite $\{u_t\}$ and $\{y_t\}$ as sums of two sparse signals, more precisely:

$$u_t = a_t^{u,1} - a_t^{u,2}, \quad (12)$$

$$y_t = a_t^{y,2} - a_t^{y,1}, \quad (13)$$

where the cardinalities of $\mathrm{supp}(\{a_t^{u,i}\})$ and $\mathrm{supp}(\{a_t^{y,i}\})$ are upper-bounded by $q_u$ and $q_y$ for $i \in \{1,2\}$, respectively. For example, partitioning $K$ into disjoint sets $K^1 \cup K^2$ with $|K^i| \le q_u$, we can take $a_t^{u,1}$ equal to $u_t$ on $K^1$ and zero elsewhere, and $a_t^{u,2}$ equal to $-u_t$ on $K^2$ and zero elsewhere; the output attacks are constructed analogously from a partition of $\bar{\Gamma}$. Then we define the following.

Now consider the following two different trajectories of the system

$$u_{s,t}^1 = a_t^{u,1}, \qquad u_{s,t}^2 = a_t^{u,2}, \quad (14)$$

with their initial states

$$x_0^1 = x_0, \qquad x_0^2 = 0, \quad (15)$$

and their corresponding attack strategies,

$$(a_t^{u,1},\, a_t^{y,1}) \text{ for the first trajectory and } (a_t^{u,2},\, a_t^{y,2}) \text{ for the second}. \quad (16)$$

It is straightforward to verify that $u_{s,t}^1 - a_t^{u,1} = u_{s,t}^2 - a_t^{u,2} = 0$ and $y_{s,t}^1 + a_t^{y,1} = y_{s,t}^2 + a_t^{y,2}$, i.e., under the attack model (7) the controller inputs and the observed outputs are exactly the same for both trajectories while the states differ; therefore the proof is complete.
4 Secure Observer Design
In this section, we seek solutions to Problem 2. In the first part, we explain the intuition behind the proposed algorithm that estimates the state despite attacks on inputs and outputs, and we give formal guarantees that the algorithm reconstructs the state correctly. In the second part, we introduce the observer by leveraging the SMT paradigm, followed by two methods that improve the run time of state estimation.

Based on the attack model (7), the input to the system is decomposed into two additive terms, the controller-designed input $u_t$ and the adversarial input $a_t^u$. The underlying system (6) is linear and therefore we can easily exclude the effect of the controller-designed input from the output by subtracting its contribution. Hence, without loss of generality, we assume that $u_t$ is zero.
The proposed algorithm is based on the following proposition.
Suppose the underlying system is $(2q_u, 2q_y)$-sparse strongly observable, and the number of attacked inputs and outputs are bounded by $q_u$ and $q_y$, respectively. Given any subsets of inputs and outputs denoted by $K$ and $\Gamma$ with $|K| \le q_u$ and $|\bar{\Gamma}| \le q_y$, the first statement below implies the second:

There exist $\hat{x}$ and $\hat{\mathbf{U}}_{K}$ such that

$$\mathbf{Y}_{t_0,\Gamma} = \mathcal{O}_{\Gamma}\,\hat{x} + \mathcal{M}_{\Gamma,K}\,\hat{\mathbf{U}}_{K}. \quad (17)$$

The estimated state $\hat{x}$ is equal to the actual state of the system at time $t_0$, $x_{t_0}$, where the batch length $n$ is the order of the underlying system.
The underlying system is $(2q_u, 2q_y)$-sparse strongly observable, therefore $\Sigma(A, B_{\cdot,K}, C_{\Gamma}, D_{\Gamma,K})$ is strongly observable. If (17) has a solution, then $\hat{x}$ is unique (see Section III-B of [44]). Let us denote the set of attack-free outputs and under-attack inputs by $\Gamma^*$ and $K^*$, respectively. At most $q_y$ outputs are under attack, therefore $|\overline{\Gamma \cap \Gamma^*}| \le 2q_y$. Note that $\mathbf{Y}$ can be written as follows:

$$\mathbf{Y}_{t_0,\Gamma\cap\Gamma^*} = \mathcal{O}_{\Gamma\cap\Gamma^*}\,x_{t_0} + \mathcal{M}_{\Gamma\cap\Gamma^*,K^*}\,\mathbf{A}^u_{t_0,K^*}, \quad (18)$$

where $\mathbf{A}^u$ denotes the batch of adversarial inputs. On the other hand, we can rewrite (17) by taking only the outputs in $\Gamma \cap \Gamma^*$,

$$\mathbf{Y}_{t_0,\Gamma\cap\Gamma^*} = \mathcal{O}_{\Gamma\cap\Gamma^*}\,\hat{x} + \mathcal{M}_{\Gamma\cap\Gamma^*,K\cup K^*}\begin{bmatrix}\hat{\mathbf{U}}_{K} \\ \mathbf{0}\end{bmatrix}, \quad (19)$$

where $\mathbf{0}$ is a zero vector with appropriate dimensions. The underlying system is $(2q_u, 2q_y)$-sparse strongly observable, therefore the subsystem $\Sigma(A, B_{\cdot,K\cup K^*}, C_{\Gamma\cap\Gamma^*}, D_{\Gamma\cap\Gamma^*,K\cup K^*})$ is strongly observable. One can reinterpret both equations as two (possibly different) valid trajectories of this subsystem that share the same output sequence. Strong observability of the subsystem implies that $\hat{x} = x_{t_0}$, which completes the proof.

The main algorithm in this paper builds upon this proposition. We search for a set of inputs and outputs that satisfies equality (17), i.e., we check if there exist $\hat{x}$ and $\hat{\mathbf{U}}_{K}$ that make equality (17) hold. Based on Proposition 4, we define a consistency check as follows.
Test 1 (Consistency Check).

Given subsets of inputs and outputs denoted by $K$ and $\Gamma$, TEST($K, \Gamma$) returns true if

$$\min_{\hat{x},\,\hat{\mathbf{U}}_{K}} \big\| \mathbf{Y}_{\Gamma} - \mathcal{O}_{\Gamma}\hat{x} - \mathcal{M}_{\Gamma,K}\hat{\mathbf{U}}_{K} \big\|_2 \le \epsilon, \quad (20)$$

where $\epsilon$ is the solver tolerance due to numerical errors. However, for the sake of clarity, we focus in this paper on the case when $\epsilon$ is negligible^{2}^{2}2Note that the minimum in (20) always exists as the cost function is a positive semidefinite quadratic function..
Finding the right subset of inputs and outputs that satisfies this test is a combinatorial problem in nature and requires exhaustive search. It is well-known that secure state estimation under this attack model is in general NP-hard [35, 31]. This test is depicted in Algorithm 4.1.
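Since the cost in (20) is quadratic in the stacked unknowns, the consistency check reduces to a single linear least-squares solve. A minimal sketch, with our own function naming and assuming the batch matrices have been precomputed:

```python
import numpy as np

def consistency_test(Y, O_Gamma, M_GammaK, tol=1e-6):
    """TEST(K, Gamma): true iff some state x and some input batch supported
    on K explain the trusted outputs up to the solver tolerance.  Solved as
    one least-squares problem in the stacked unknowns [x; U_K]."""
    S = np.hstack([O_Gamma, M_GammaK])
    z, *_ = np.linalg.lstsq(S, Y, rcond=None)
    residual = np.linalg.norm(Y - S @ z)
    x_hat = z[:O_Gamma.shape[1]]          # the leading entries are the state
    return residual <= tol, x_hat
```

When the residual exceeds the tolerance, the hypothesized pair $(K, \Gamma)$ is inconsistent and a certificate must be produced for the SAT solver.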
In the rest of this section, we introduce an architecture for our observer, followed by methods to improve its computational performance. For each input (output), we assign a binary variable $\mathbf{b}_i$ ($\mathbf{c}_j$) that indicates whether the corresponding input (output) is under attack, i.e., $\mathbf{b}_i = 1$ ($\mathbf{c}_j = 1$) if the input (output) is under attack. In the rest of this paper, we use bold letters ($\mathbf{b}$ and $\mathbf{c}$) to denote these Boolean variables and we reserve a non-bold typeface ($b$ and $c$) for instances of them. Finding the right assignment of these Boolean variables is combinatorial in nature, and in order to efficiently decide which set of inputs and outputs satisfies the TEST in (20), we design an observer using the lazy SMT paradigm [4].

4.1 Overall Architecture
The observer consists of two blocks that interact with each other, a propositional satisfiability (SAT) solver and a theory solver. The former reasons about the combination of Boolean and pseudoBoolean constraints and produces a feasible instance of and , based on its current state. The theory solver checks the consistency of Boolean variables using the consistency test, and when the test fails, it encodes the inconsistency as a pseudoBoolean constraint and returns it to the SAT solver. The general architecture is depicted in Figure 2.
The initial pseudo-Boolean constraint only bounds the number of attacked inputs and outputs, i.e.,

$$\phi := \Big(\sum_i \mathbf{b}_i \le q_u\Big) \wedge \Big(\sum_j \mathbf{c}_j \le q_y\Big). \quad (21)$$

Initially, the SAT solver generates instances $b$ and $c$ that satisfy $\phi$. The theory solver checks whether $b$ and $c$ satisfy the consistency check. If the test is satisfied, then the algorithm terminates and returns the (delayed) estimate of the state. Otherwise, the theory solver outputs UNSAT and generates a reason for the conflict, a certificate or counterexample, denoted by $\phi_{\mathrm{cert}}$. This counterexample encodes the inconsistency among the chosen inputs and outputs. The following always constitutes a naive certificate:

$$\phi_{\mathrm{cert}} := \bigvee_{i \in \bar{K}} \mathbf{b}_i \vee \bigvee_{j \in \Gamma} \mathbf{c}_j, \quad (22)$$

which excludes the current assignment (and the assignments dominated by it). On the next iteration, the SAT solver updates the constraint by conjoining $\phi_{\mathrm{cert}}$ to $\phi$, and generates another feasible assignment for $\mathbf{b}$ and $\mathbf{c}$. This procedure is repeated until the theory solver returns SAT, as illustrated in Algorithm 4.1.
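The interaction between the two blocks can be sketched as follows. Here a brute-force enumerator stands in for the SAT solver (the actual implementation interfaces with SAT4J), and `theory_check` stands in for Test 1; the names and the blocking encoding are our own illustrative assumptions:

```python
from itertools import combinations

def lazy_smt_observer(n_inputs, n_outputs, q_in, q_out, theory_check):
    """Lazy-SMT skeleton: propose (attacked inputs K, trusted outputs Gamma),
    ask the theory solver, and on failure record its certificate as a
    blocking constraint.  theory_check(K, Gamma) must return either
    (True, state_estimate) or (False, (K_cert, Gamma_cert))."""
    blocked = []          # certificates returned by the theory solver
    for K_tup in combinations(range(n_inputs), q_in):
        K = set(K_tup)
        for G_tup in combinations(range(n_outputs), n_outputs - q_out):
            Gamma = set(G_tup)
            # A certificate (Kc, Gc) rules out any hypothesis that attacks
            # no more inputs than Kc while trusting at least the outputs Gc.
            if any(K <= Kc and Gamma >= Gc for Kc, Gc in blocked):
                continue
            ok, payload = theory_check(K, Gamma)
            if ok:
                return payload            # the (delayed) state estimate
            blocked.append(payload)
    raise RuntimeError("no consistent assignment; attack bound violated?")
```

Shorter certificates prune exponentially more of this search space per iteration, which is what the next section pursues.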
Note that Proposition 4 implies that the SAT solver eventually produces an assignment that satisfies the consistency test, and therefore Algorithm 4.1 always terminates. The size of the certificate plays an important role in the overall execution time of the algorithm [35]. Note that the attack model considered in [35] is restricted to outputs; a major contribution of our work is to handle both input and output attacks. In the next section, we focus on constructing shorter counterexamples to improve the run time.
4.2 SAT certificate
In this part, we improve the efficiency of Algorithm 4.1 by constructing shorter certificates (counterexamples or conflicts). As discussed above, the naive certificate only excludes the current assignment of $b$ and $c$ from the search space of the SAT solver; however, by exploiting the structure of the underlying system, we show that we can further decrease the size of the certificate and therefore prune the search space more efficiently.

One of the main results of this paper is to show that we can always find a smaller conflicting subset of inputs and outputs. We propose two methods for generating shorter certificates. The first method provably reduces the size of the counterexample; we explain this method in Lemma 4.3 and give a formal proof of the existence of such a shorter certificate. In practice, however, we observe that the reduction in the length of the conflicts is much larger than this theoretical bound. The second method is inspired by the QuickXplain algorithm. It generates counterexamples that are irreducible, meaning that the size of the counterexample cannot be reduced by removing any of its entries. We also note that by generating multiple certificates at each iteration we can further improve the execution time. At the end of this section, Lemma 4.4 states that for a generic LTI system the size of the certificate cannot be smaller than the number of inputs.

Let us assume that the SAT solver hypothesized $K$ and $\Gamma$ as the sets of compromised inputs and safe outputs, respectively. The main intuition behind both methods is to look for a larger set of inputs $\hat{K} \supseteq K$ and a smaller set of outputs $\hat{\Gamma} \subseteq \Gamma$ that still do not satisfy the consistency test. Note that the resulting certificate consists of the inputs in $\bar{\hat{K}}$ and the outputs in $\hat{\Gamma}$.
4.3 Method I based on heuristics
Method I reduces the size of the certificate by increasing the set of (supposedly under-attack) inputs $K$, followed by decreasing the set of (supposedly safe) outputs $\Gamma$. The overall procedure for shortening certificates is illustrated in Algorithm 4.3. We begin by adding inputs to $K$ while making sure that TEST still returns false and that the cardinality of the input set remains bounded by $2q_u$. Let us denote this new set of inputs by $\hat{K}$.

In the second step, we shrink the set of conflicting outputs in order to further shorten the counterexample. Let us denote a subset of $\Gamma$ of size $q_y$ by $\Lambda$. The following lemma shows that we can reduce the number of conflicting outputs at least by $q_y - 1$.
Assume that the system is $(2q_u, 2q_y)$-sparse strongly observable, and the number of attacked inputs and outputs are bounded by $q_u$ and $q_y$, respectively. Pick any subsets of inputs and outputs denoted by $\hat{K}$ and $\Gamma$, with $|\hat{K}| \le 2q_u$ and $|\bar{\Gamma}| \le q_y$, that do not satisfy the consistency check (20). Given any subset of at most $q_y$ outputs denoted by $\Lambda \subseteq \Gamma$, one of the following is true:

TEST($\hat{K}, \Gamma \setminus \Lambda$) returns false,

There exists an output $j \in \Lambda$ such that TEST($\hat{K}, (\Gamma \setminus \Lambda) \cup \{j\}$) returns false.

Please refer to Appendix.

We denote this smaller set of conflicting outputs ($\Gamma \setminus \Lambda$ if TEST($\hat{K}, \Gamma \setminus \Lambda$) returns false, and $(\Gamma \setminus \Lambda) \cup \{j\}$ otherwise) by $\hat{\Gamma}$. Lemma 4.3 gives a formal guarantee of the existence of shorter certificates that holds no matter how the subsets of inputs and outputs ($\hat{K}$ and $\Lambda$) are chosen. This lemma shows that Method I reduces the size of the certificate by at least $q_y - 1$.
In practice, we choose these subsets based on heuristics whose objective is to decrease the overall running time. We assign slack variables to inputs and outputs, similarly to [35] and [37], and sort them based on the structure of the system. Recall that Algorithm 4.3 shortens the certificate by reducing the number of inputs followed by a reduction in the number of outputs, i.e., we simultaneously reduce both the inputs and the outputs in the certificate. We observe that by generating two counterexamples we can prune the search space of the SAT solver more efficiently. Similarly to Algorithm 4.4, we can find two counterexamples by reducing the number of inputs followed by a reduction in the number of outputs, and vice versa.
Sorting $\bar{K}$ and $\Gamma$:

Assuming TEST($K, \Gamma$) returns false, we assign slack variables to the inputs in $\bar{K}$ and the outputs in $\Gamma$, denoted by $s^u_i$ and $s^y_j$, respectively. Let us denote a solution of the optimization (20) inside TEST($K, \Gamma$) by $\hat{x}$ and $\hat{\mathbf{U}}_{K}$, with residual $r = \mathbf{Y}_{\Gamma} - \mathcal{O}_{\Gamma}\hat{x} - \mathcal{M}_{\Gamma,K}\hat{\mathbf{U}}_{K}$.

We define $s^u_i$ for $i \in \bar{K}$ as the norm of the projection of $r$ onto the column space of $\mathcal{M}_{\Gamma,\{i\}}$,

$$s^u_i = \big\| \Pi_{\mathcal{M}_{\Gamma,\{i\}}}\, r \big\|_2, \quad (23)$$

where $\Pi_M$ denotes the orthogonal projection onto the column space of $M$. This slack variable measures how much of the residual can be justified by considering input $i$ in addition to $K$. Note that we want to append inputs to $K$ while keeping TEST false; we therefore first normalize these slack variables by the norm of the corresponding invertibility matrix, and the candidate ordering is obtained by sorting the slack variables in ascending order.

We define $s^y_j$ as the residual of each output:

$$s^y_j = \big\| \mathbf{Y}_{\{j\}} - \mathcal{O}_{\{j\}}\hat{x} - \mathcal{M}_{\{j\},K}\hat{\mathbf{U}}_{K} \big\|_2. \quad (24)$$

Note that,

$$\sum_{j \in \Gamma} (s^y_j)^2 = \|r\|_2^2. \quad (25)$$

We first normalize each slack variable by the norm of the corresponding observability matrix. Recall that we aim to find a smaller subset of $\Gamma$ while ensuring that TEST returns false. We pick the output with the highest slack variable as the first element of $\hat{\Gamma}$. We sort the rest based on the dimension of the kernel of each observability matrix, following the intuition provided in [35].
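The input slack in (23) amounts to projecting the current residual onto the column space of each candidate input's block of the invertibility matrix. A sketch, assuming the residual and the per-input blocks have already been computed (function and variable names are ours):

```python
import numpy as np

def rank_candidate_inputs(residual, input_blocks):
    """For each candidate input i, compute the norm of the projection of the
    least-squares residual onto col(M_i), normalized by the block's norm,
    and return the candidate order (ascending slack, per Method I)."""
    slacks = []
    for M_i in input_blocks:
        Q, _ = np.linalg.qr(M_i)           # orthonormal basis of col(M_i)
        proj = Q @ (Q.T @ residual)        # projection of the residual
        scale = np.linalg.norm(M_i)
        slacks.append(np.linalg.norm(proj) / scale if scale > 0 else 0.0)
    return list(np.argsort(slacks))
```

A low slack suggests that adding the input cannot explain the residual, so TEST is likely to remain false, which is exactly what Algorithm 4.3 needs when appending inputs.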
4.4 Method II based on QuickXplain
The second method (Algorithm 4.4) is inspired by QuickXplain and generates a counterexample by pruning the naive certificate (22) to make it irreducible. We formally define this property as follows. [Irreducible certificate] A certificate consisting of inputs $K$ and outputs $\Gamma$ is irreducible if no proper subset of it can generate a conflict, i.e., for all subsets denoted by $K' \subseteq K$ and $\Gamma' \subseteq \Gamma$ the following are equivalent:

$K'$ and $\Gamma'$ generate a conflict;

$K' = K$ and $\Gamma' = \Gamma$.

One cannot prune an irreducible certificate: each element is necessary for the set to remain a counterexample. Let $\mathcal{E}$ be the elements (consisting of the inputs and outputs) of the naive certificate. For ease of exposition, we slightly abuse notation and treat $\mathcal{E}$ as a single set. We denote the output of this algorithm by $\mathcal{E}^* \subseteq \mathcal{E}$, which consists of the inputs and outputs of the irreducible certificate.
This method starts with an exploration phase in which it finds an element (input or output) that belongs to an irreducible certificate. Let us denote an enumeration of $\mathcal{E}$ by $e_1, \dots, e_L$. The method adds the elements of this enumeration step by step to a working set. The first element $e_k$ whose addition makes TEST fail is part of an irreducible certificate, and is therefore added to $\mathcal{E}^*$.

In order to find further elements of this certificate, we keep $e_k$ in the background, and the first element that makes the consistency check fail is added to $\mathcal{E}^*$. This repeated process can be implemented efficiently by using the divide-and-conquer paradigm, as depicted in Algorithm 4.4. When an element of $\mathcal{E}^*$ is detected, we divide the remaining elements into two disjoint subsets $\mathcal{E}_1$ and $\mathcal{E}_2$. We can then recursively apply the algorithm to find a conflict among $\mathcal{E}_1$ by keeping $\mathcal{E}_2$ in the background, and a conflict among $\mathcal{E}_2$ by keeping $\mathcal{E}_1$ in the background.

Note that the resulting counterexample depends on the initial enumeration of the elements in $\mathcal{E}$. If all the inputs (outputs) are ahead of the outputs (inputs), then the resulting counterexample mostly consists of inputs (outputs). In order to maximally reduce the search space of the SAT solver at each iteration, we produce three certificates using this method: putting the inputs first, putting the outputs first, and interleaving inputs and outputs.
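Method II follows Junker's QuickXplain recursion: given a monotone conflict predicate over subsets of certificate elements, it returns an irreducible conflict using far fewer predicate calls than one-by-one pruning. A self-contained sketch; in the observer the predicate would wrap TEST, while here a toy predicate stands in:

```python
def quickxplain(candidates, is_conflict):
    """Return an irreducible (minimal) subset of `candidates` that is still
    a conflict, assuming is_conflict is monotone: any superset of a conflict
    is a conflict.  Returns None if the full set is not a conflict."""
    def qx(background, delta, cands):
        # If the background alone already conflicts, nothing from cands is needed.
        if delta and is_conflict(background):
            return []
        if len(cands) == 1:
            return list(cands)
        k = len(cands) // 2
        c1, c2 = cands[:k], cands[k:]
        d2 = qx(background + c1, c1, c2)   # conflict part inside c2, given c1
        d1 = qx(background + d2, d2, c1)   # conflict part inside c1, given d2
        return d1 + d2
    if not is_conflict(candidates):
        return None
    return qx([], [], candidates)
```

Running the routine with different initial enumerations of the elements yields the differently flavored certificates discussed above.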
In the last part of this section, we look at the certificate size for a generic LTI system. We observe that the certificate size cannot be smaller than the number of inputs, which is stated formally in the following lemma.

For a generic LTI system, the size of the certificate is always lower bounded by $m$, where $m$ is the number of inputs. Please refer to Appendix.
5 Simulation Results
We implemented our SMT-based estimator in Matlab, interfacing with the SAT solver SAT4J [21], and assessed its performance in two case studies: randomly generated LTI systems and a chemical plant. We report the overall running time using the two proposed methods, Algorithm 4.3 and Algorithm 4.4.
5.1 Random Systems
We randomly generate systems with a fixed state dimension $n$ and increase the number of inputs and outputs. Each system is generated by drawing the entries of the system matrices according to a uniform distribution; when necessary, we scale $A$ to ensure that its spectral radius is close to one. In each experiment, twenty percent of the inputs and outputs are under adversarial attack, and we generate the support set for the adversarial signals uniformly at random. Attack signals and initial states are drawn according to independent and normally distributed random variables with zero mean and unit variance. All the systems under experiment satisfy a suitable sparse strong observability condition as described in Section 3.

Figures 3 and 4 report the results of the simulations; each point represents an average over repeated experiments. All the experiments were run on an Intel Core i5 2.7GHz processor with 16GB of RAM. In Figure 3, we verify the run-time improvement resulting from using the shorter certificates compared to the theoretical upper bound of the brute-force approach. For instance, consider the largest scenarios in Figures 3 and 4: in the brute-force approach, we would need to check all possible combinations of inputs and outputs, whereas by exploiting either of the proposed certificates we observe a substantial improvement. We also observe that although one of the two methods gives a worse run time for systems with a smaller number of outputs, it scales better than the other as the number of inputs and outputs grows.