ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems

Recent research has demonstrated that superficially well-trained machine learning (ML) models are highly vulnerable to adversarial examples. As ML techniques are rapidly adopted in cyber-physical systems (CPSs), the security of these applications is of growing concern. However, current studies on adversarial machine learning (AML) mainly focus on computer vision and related fields. The risks adversarial examples pose to CPS applications have not been well investigated. In particular, due to the distributed nature of data sources and the inherent physical constraints imposed by CPSs, the widely-used threat models of previous research and the state-of-the-art AML algorithms are no longer practical when applied to CPS applications. We study the vulnerabilities of ML applied in CPSs by proposing Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples, used as ML model inputs, that meet the intrinsic constraints of the physical systems. We first summarize the differences between AML in CPSs and AML in existing cyber systems and propose a general threat model for ConAML. We then design a best-effort search algorithm to iteratively generate adversarial examples under linear physical constraints. As proofs of concept, we evaluate the vulnerabilities of ML models used in the electric power grid and in water treatment systems. The results show that our ConAML algorithms can effectively generate adversarial examples which significantly decrease the performance of the ML models even under practical physical constraints.






1. Introduction

Machine learning (ML) has shown promising performance in many real-world applications, such as image classification (He et al., 2016), speech recognition (Graves et al., 2013), and malware detection (Yuan et al., 2014). In recent years, driven by advances in communication and computational technologies, there has been a trend to adopt ML in various cyber-physical system (CPS) applications, such as data center thermal management (Li et al., 2011), agriculture ecosystem management (Dabrowski et al., 2018), power grid attack detection (Ozay et al., 2015), and industrial control system anomaly detection (Kravchik and Shabtai, 2018).

However, recent research has demonstrated that superficially well-trained ML models are highly vulnerable to adversarial examples (Goodfellow et al., 2014). In particular, adversarial machine learning (AML) techniques enable attackers to deceive ML models with well-crafted adversarial examples created by adding small perturbations to legitimate inputs. As CPSs have become synonymous with security-critical infrastructures such as the power grid, nuclear systems, avionics, and transportation systems, such vulnerabilities can be exploited with devastating consequences.

Figure 1. A CPS example (power grids).

AML research has received considerable attention in the artificial intelligence (AI) community, but it mainly focuses on computational applications such as computer vision. This research does not transfer directly to CPSs because the inherent properties of CPSs render the widely-used threat models and AML algorithms of previous work infeasible. In general, existing AML research makes common assumptions about the attacker's knowledge and the adversarial examples: the attacker is assumed to have full knowledge of the ML input features, and these features are assumed to be mutually independent. For example, in computer vision AML (Goodfellow et al., 2014), the attacker is assumed to know and be able to modify all the pixel values of an image, and there is no strict dependency among the pixels. However, this is not realistic for attacks targeting CPSs. CPSs are usually large and complex systems whose data sources are heterogeneous and geographically distributed. It is not reasonable to assume that the attacker can compromise all data sources and modify their data. Furthermore, for robustness and resilience reasons, CPSs usually employ redundant data sources and incorporate faulty data detection mechanisms. Therefore, the inputs are not only mutually dependent but also subject to the physical constraints of the system. For example, in the power grid, redundant phasor measurement units (PMUs) are deployed in the field to measure frequency and phase angle, and residue-based bad data detection is employed to detect and recover from faulty data in state estimation (Wood et al., 2013). A simple example is shown in Figure 1. All three meters measure electric current (in amperes). If an attacker compromises Meter1, Meter2, and Meter3, then no matter what modifications the attacker makes, the compromised measurement of Meter1 must always equal the sum of those of Meter2 and Meter3 due to Kirchhoff's laws. Otherwise, the crafted measurements will be caught by the bad data detection mechanism and will be obviously anomalous to the power system operators.
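To make the Kirchhoff example concrete, the sketch below shows how a residue-style bad data check could flag crafted measurements that break the constraint Meter1 = Meter2 + Meter3. The helper name and NumPy formulation are ours, not the paper's:

```python
import numpy as np

def violates_equality(meas, B, d, tol=1e-6):
    """Flag measurement vectors whose linear equality constraints B @ m = d
    are violated beyond a numerical tolerance (a residue-style bad data check)."""
    residual = B @ meas - d
    return bool(np.any(np.abs(residual) > tol))

# Kirchhoff-style constraint from Figure 1: meter1 - meter2 - meter3 = 0
B = np.array([[1.0, -1.0, -1.0]])
d = np.array([0.0])

legit = np.array([10.0, 6.0, 4.0])      # consistent readings: 10 = 6 + 4
forged = np.array([10.0, 6.0, 5.0])     # naive tampering breaks the constraint

print(violates_equality(legit, B, d))   # False
print(violates_equality(forged, B, d))  # True
```

Any perturbation that changes only one of the three readings is caught by this check, which is exactly why unconstrained AML fails here.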

The intrinsic properties of CPSs pose stringent requirements for attackers. In addition to distributed data sources and physical constraints, sensors in real-world CPSs are generally configured to collect data at a specific sampling rate, so a valid adversarial attack needs to be completed within one sampling period. The attacker must therefore overcome limited knowledge of the ML inputs, limited access to the ML models, limited attack time, and the physical constraints the underlying system imposes on the inputs, in order to launch an effective attack that deceives the ML applications deployed in CPSs. Unfortunately, as we show in this paper, these obstacles do not prevent an attacker from generating valid adversarial examples that skew the ML output.

In this paper, we propose constrained adversarial machine learning (ConAML), in which the ML inputs are mutually dependent and restricted by physical constraints. We show that ML applications in CPSs are susceptible to crafted adversarial examples even though such systems naturally pose a greater barrier for the attacker. Specifically, we propose a practical best-effort search algorithm to effectively generate adversarial examples under linear physical constraints, which are among the most common constraints in real-world CPS applications. We implement our algorithms with ML models used in two CPSs and mainly focus on neural networks due to their transferability. Meanwhile, in order to demonstrate the impact of constraints, we assume a supreme attacker who has full access to the victim ML model and the CPS measurements and employs a state-of-the-art algorithm to launch unconstrained adversarial attacks. We use this supreme attacker as a baseline against which to evaluate the performance of our algorithms. Our main contributions are summarized as follows:

  • We highlight the potential vulnerability of ML applications in CPSs, analyze the different requirements for AML applied in CPSs with regard to the general computational applications, and present a practical threat model for AML in CPSs.

  • We formulate the mathematical model of ConAML by incorporating the physical constraints of the underlying system and demonstrate the potential risks ConAML can bring to CPSs. To the best of our knowledge, this is also the first work that investigates the physical mutual dependency among the ML features in AML research.

  • Based on the practical threat model, we propose a practical best-effort search algorithm to iteratively generate adversarial examples under the linear physical constraints, including linear equality constraints and linear inequality constraints used in different CPSs.

  • As proofs of concept, we assess our algorithms on ML-empowered detection of false data injection attacks in the power grid and anomaly detection in a water treatment system. The evaluation results show that the adversarial examples generated by our algorithms achieve notable performance even compared with the supreme attacker's.

Related research is discussed in Section 2. We analyze the properties of AML in CPSs and give the mathematical definition and threat model in Section 3. Section 4 presents the algorithm design. Sections 5 and 6 investigate the ML models in power grids and water treatment systems, respectively, using these two CPSs as proofs of concept for our experiments. Limitations and future work are given in Section 7. Section 8 concludes the paper.

2. Related Work

AML is a technique that enables attackers to deceive ML models with well-crafted adversarial examples. AML was discovered by Szegedy et al. (Szegedy et al., 2013) in 2013. They found that a deep neural network used for image classification can easily be fooled by adding a certain, hardly perceptible perturbation to a legitimate input. Moreover, the same perturbation can cause a different network to misclassify the same input even when that network has a different structure and is trained on a different dataset, which later research refers to as the transferability property of adversarial examples. In 2015, Goodfellow et al. (Goodfellow et al., 2014) proposed the Fast Gradient Sign Method (FGSM), an efficient algorithm for generating adversarial examples. Thereafter, several variants of FGSM were proposed. The Fast Gradient Value (FGV) method proposed by Rozsa et al. (Rozsa et al., 2016) is a simple variant of FGSM in which the authors use the raw gradient instead of its sign. In 2016, Moosavi-Dezfooli et al. presented DeepFool (Moosavi-Dezfooli et al., 2016), which searches for the closest distance from the original input to the decision boundary in high-dimensional data space and iteratively builds the adversarial example. DeepFool can be adapted to binary or multi-class classification tasks and generates smaller perturbations than FGSM. According to (Kurakin et al., 2016b), single-step attack methods have better transferability but can be easily defended against. Therefore, multi-step methods, such as iterative methods (Kurakin et al., 2016b) and momentum-based methods (Dong et al., 2018), were presented to enhance attack effectiveness. The above methods generate an individual adversarial example for each legitimate input. In 2017, Moosavi-Dezfooli et al. designed universal adversarial perturbations, which are generated independently of the individual ML model inputs (Moosavi-Dezfooli et al., 2017).

In recent years, research on AML applications has continued to grow rapidly. Sharif et al. generated adversarial examples to attack a state-of-the-art face-recognition system and achieved notable results (Sharif et al., 2016). Grosse et al. constructed an effective attack that generated adversarial examples against Android malware detection models (Grosse et al., 2017). In 2017, Jia et al. evaluated the robustness of natural language processing models on the Stanford Question Answering Dataset (SQuAD) by adding adversarially inserted sentences, and the results showed that the adversarial sentences could reduce the F1 score from an average of 75% to 36% across sixteen published models (Jia and Liang, 2017). Adversarial attacks that target real-world applications are also increasing. In 2014, Laskov et al. developed a taxonomy of practical adversarial attacks based on the attacker's capability and launched evasion attacks against PDFRATE, a real-world online machine learning system for detecting malicious PDFs (Laskov and others, 2014). Following this line, in 2016, Xu et al. used a genetic programming algorithm to generate evasive adversarial examples to evaluate the robustness of ML classifiers (Xu et al., 2016); their methods were evaluated on PDFRATE and Hidost, another PDF malware classifier. In 2018, Li et al. presented TEXTBUGGER, a framework to effectively generate adversarial text against deep learning-based text understanding (DLTU) systems. TEXTBUGGER was evaluated against multiple real-world DLTU systems, such as Google Cloud NLP, Microsoft Azure Text Analytics, and IBM Watson NLU, and achieved state-of-the-art attack performance (Li et al., 2018).

In addition to purely computational and cyberspace attacks, AML techniques that involve the physical domain are drawing increasing attention. Kurakin et al. showed that ML models are still vulnerable to adversarial examples in physical-world scenarios by feeding adversarial images captured with a phone camera to an ImageNet classifier (Kurakin et al., 2016a). In 2016, Carlini et al. presented hidden voice commands, demonstrating that well-crafted voice commands that are unintelligible to human listeners can be interpreted as commands by voice-controllable systems (Carlini et al., 2016). (Tian et al., 2018) and (Lu et al., 2017) investigated the security of machine learning models used in autonomous driving. In 2018, (Ghafouri et al., 2018) showed that an attacker can generate adversarial examples by modifying a portion of the measurements in CPSs, and presented an anomaly detection model in which each sensor's reading is predicted as a function of the other sensors' readings. However, they still allowed the attacker to know all the measurements (inputs) and did not consider the constraints among CPS measurements. More related work on adversarial examples, including generation algorithms and applications, can be found in (Yuan et al., 2019).

3. ConAML Mathematical Model

3.1. ML-Assisted CPSs

Figure 2. Machine learning-assisted CPS architecture.

Generally, a CPS can be simplified as a system that consists of four parts, namely sensors, actuators, the communication network, and the control center (Chen et al., 2018), as shown in Figure 2. The sensors measure and quantify the data from the physical environment, and send the measurement data to the control center through the communication network. In practice, the raw measurement data will be filtered and processed by the gateway according to the error checking mechanism whose rules are defined by human experts based on the properties of the physical system. Measurement data that violates the physically defined rules will be removed.

In this paper, we consider the scenario in which the control center uses ML model(s) to make (classification) decisions directly on the filtered measurement data from the gateway, and the features used to train the ML models are the corresponding sensors' measurements. The attacker's goal is to lead the ML-based CPS applications to output wrong (classification) results, without being detected by the gateway, by adding perturbations to the measurements of the compromised sensors.

3.2. ConAML Properties

Adversarial attacks can be classified according to the attacker's capability and attack goals (Laskov and others, 2014) (Yuan et al., 2019) (Chakraborty et al., 2018). Unlike general computational domain applications, several inherent properties of CPSs pose specific requirements for adversarial attackers. First, in CPSs, the ML models are generally deployed in cloud servers or control centers protected by comprehensive information security mechanisms. Therefore, the attackers usually have no access to the ML models, and a black-box attack should be considered. According to (Liu et al., 2016), non-targeted adversarial examples generally have better transferability between different ML models. In this paper, we investigate both non-targeted and targeted attacks and present the corresponding algorithm designs. Second, real-world CPSs, such as Supervisory Control and Data Acquisition (SCADA) systems, have a constant measurement sampling rate (frequency) configured for their sensors. An attacker who targets a CPS's ML applications is therefore required to generate a valid adversarial example within one measurement sampling period.

To launch adversarial attacks, the attacker is assumed to be able to compromise a certain number of measurements and to freely eavesdrop on and modify the measurement data. In real attack scenarios, this can be achieved either by directly compromising the sensors (e.g., device intrusion) or by attacking the communication network (e.g., man-in-the-middle attacks). Due to the inherently distributed nature of CPS sensors, the attacker cannot be assumed to know the uncompromised measurements.

The attacker cannot access the victim ML models or their training dataset. However, we assume the attacker can obtain an alternative dataset that follows a similar distribution, such as historian data, to train his/her own ML models. For example, in some practical situations, the measurement data can be public or accessible by multiple parties, such as temperature data for weather forecasts, earthquake sensor data, and flood water flow data.

Figure 3. A CPS example (water pipelines).

Last but not least, the attacker is required to generate adversarial examples that meet the input constraints imposed by the physical system in order to bypass the fault detection at the gateway. An example of a linear inequality constraint is shown in Figure 3. All the meters in Figure 3 measure water flow in the direction of the arrows. If an attacker wants to deceive the anomaly detection ML model of a water treatment system by modifying the meters' readings, the adversarial measurement of Meter1 should always be larger than the sum of Meter2 and Meter3 due to the physical structure of the pipelines. Otherwise, the poisoned inputs will be obviously anomalous to the victim (system operator) and detected automatically by the error checking mechanisms. We now formally define ConAML.
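The pipeline example can be checked the same way as the Kirchhoff one, with the general inequality form A x <= b. The coefficients below encode Figure 3's structure; the helper name is ours:

```python
import numpy as np

def violates_inequality(meas, A, b):
    """Check the linear inequality constraints A @ m <= b
    (e.g. Figure 3: meter2 + meter3 <= meter1)."""
    return bool(np.any(A @ meas > b))

# Pipeline structure rewritten as -m1 + m2 + m3 <= 0
A = np.array([[-1.0, 1.0, 1.0]])
b = np.array([0.0])

print(violates_inequality(np.array([10.0, 4.0, 5.0]), A, b))  # False: 4 + 5 <= 10
print(violates_inequality(np.array([10.0, 7.0, 5.0]), A, b))  # True: 7 + 5 > 10
```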

3.3. ConAML Mathematical Definition

3.3.1. Notations

In order to simplify the mathematical representation, we will use $x_{[v]}$ to denote the vector sampled from $x$ according to $v$, where $v$ is a vector of sampling indexes. For example, if $x = [x_1, x_2, x_3]$ and $v = [1, 3]$, we have $x_{[v]} = [x_1, x_3]$.

We assume there are $N$ sensors in total in a CPS, and each sensor's measurement is a feature of the ML model in the control center. We use $S$ and $M$ to denote the vectors of all the sensors and their measurements, respectively. The attacker compromises $k$ sensors in the CPS, and $c$ denotes the index vector of the compromised sensors. Obviously, we have $|c| = k$ and $k \le N$. Meanwhile, the indexes of the uncompromised sensors are denoted by $u$.

$\delta$ is the adversarial perturbation to be added to $M$. However, the attacker can only inject $\delta_{[c]}$ into $M_{[c]}$. The polluted adversarial measurement vector becomes $M^{adv}$, with $M^{adv} = M + \delta$ and $M^{adv}_{[c]} = M_{[c]} + \delta_{[c]}$. Apparently, we have $\delta_i \ne 0$ only when $i \in c$, and $\delta_i = 0$ when $i \in u$. Similarly, the crafted adversarial example $M^{adv}$ is fed into $f$: we have $M^{adv}_i = M_i + \delta_i$ when $i \in c$, and $M^{adv}_i = M_i$ when $i \in u$. All the notations are summarized in Table 1.
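In code, the sampling operator $x_{[v]}$ is plain index-based selection. A minimal NumPy illustration (NumPy uses zero-based indexes, assuming the paper's convention is one-based):

```python
import numpy as np

# x_[v] in the paper's notation is just fancy indexing: pick entries of x at indices v.
x = np.array([5.0, 7.0, 9.0])
v = np.array([0, 2])      # zero-based equivalent of the paper's v = [1, 3]
print(x[v])               # [5. 9.]  -> the sampled vector x_[v]
```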

Symbol | Description
$f(\cdot\,;\theta)$ | The trained model with hyperparameter $\theta$
$S$ | The vector of sensors
$M$ | The vector of measurements of $S$
$\delta$ | The perturbation vector added to $M$
$M^{adv}$ | The sum of $M$ and $\delta$: the vector of compromised input, with poisoned measurements
$c$ | The vector of the indexes of compromised sensors or measurements
$u$ | The vector of the indexes of uncompromised sensors or measurements
$Y$ | The original class of the measurement
$Y_t$ | The target class of the measurement
$B$ | The linear constraint matrix
Table 1. List of Notations

3.3.2. Mathematical Presentation

Linear constraints are very common in real-world CPSs. In this paper, we mainly focus on the linear constraints among the compromised model inputs, including both linear equality constraints and linear inequality constraints. We briefly discuss nonlinear equality constraints at the end of Section 4.

For linear equality constraints, suppose there are $q$ constraints over the compromised measurements that the attacker needs to meet; they can be represented as follows:

$b_{i1} m_1 + b_{i2} m_2 + \cdots + b_{ik} m_k = d_i, \quad i = 1, 2, \ldots, q$   (1)

where $m_j$ denotes the $j$-th entry of $M_{[c]}$. The above constraints can be represented in matrix form as (2):

$B M_{[c]} = d$   (2)

where $B \in \mathbb{R}^{q \times k}$ with $B_{ij} = b_{ij}$, and $d = [d_1, d_2, \ldots, d_q]^T$.
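As an illustration of stacking scalar constraints into matrix form, consider two hypothetical conservation laws over five compromised measurements (the coefficients are invented for this example, not taken from the paper):

```python
import numpy as np

# Two hypothetical conservation constraints over five compromised measurements:
#   m1 = m2 + m3   and   m3 = m4 + m5
# stacked into matrix form B @ m_c = d.
B = np.array([[1.0, -1.0, -1.0,  0.0,  0.0],
              [0.0,  0.0,  1.0, -1.0, -1.0]])
d = np.zeros(2)

m_c = np.array([10.0, 4.0, 6.0, 2.5, 3.5])   # readings consistent with both laws
print(np.allclose(B @ m_c, d))               # True
```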


In order to deceive the model $f$ into making a false classification, the attacker needs to generate the perturbation vector $\delta$ and add it to $M$ such that $f(M + \delta; \theta)$ predicts a different output. Meanwhile, the crafted measurements should also meet the constraints in equation (2) to avoid being noticed by the system operator or detected by the error checking mechanism.

Formally, the attacker who launches non-targeted attacks needs to solve the following optimization problem:

$\max_{\delta} \; L(f(M^{adv}; \theta), Y)$   (3a)

$\text{s.t.} \quad M^{adv} = M + \delta, \quad \delta_{[u]} = 0$   (3b)

$B M_{[c]} = d$   (3c)

$B M^{adv}_{[c]} = d$   (3d)

where $L(\cdot, \cdot)$ is a loss function and $Y$ is the original class label of the input vector $M$.

Accordingly, the attacker who launches targeted attacks needs to minimize the loss function $L(f(M^{adv}; \theta), Y_t)$, where $Y_t$ is the target class label. The constraints are the same.

In addition, the linear inequality constraints among the compromised measurements can be represented as equation (4):

$B M_{[c]} \le d$   (4)

The constrained optimization problem to be solved is similar to (3), but with (3c) replaced by $B M_{[c]} \le d$ and (3d) replaced by $B M^{adv}_{[c]} \le d$, respectively. This works for both non-targeted and targeted attacks.


3.4. Threat Model

Based on the above definition, we propose the general threat model of the constrained adversarial attack towards CPSs.

  • We assume the attacker has no access to the trained model $f$ in the control center, including its hyperparameter $\theta$ and the dataset used to train $f$. However, we allow the attacker to have an alternative dataset that follows a similar distribution to train his/her own ML models.

  • The attacker is assumed to know the structure of the targeted CPS, including the number of sensors, the sampling frequency of sensors and the physical meaning of the measurement data. In practical scenarios, this can be obtained by eavesdropping the regulated measurement data since generally, the data packet will contain the identity information of the corresponding sensor.

  • The attacker is able to compromise $k$ measurements in the CPS and modify the compromised measurement data $M_{[c]}$. Meanwhile, the uncompromised measurements $M_{[u]}$ are unknown to the attacker.

  • We assume the attacker knows the linear constraints of the measurements imposed by the physical system.

Readers may find it unreasonable to allow the attacker to know the constraints. However, in practice, if the attacker collects and observes the compromised measurement stream for a relatively long time, it is not difficult to uncover the underlying constraints among the measurements through data analysis and parameter estimation methods. Therefore, the attacker can learn the underlying constraints even if the structure of the CPS changes. Meanwhile, attacks that target critical infrastructure can be nationwide, and the attackers usually command substantial resources. For example, the 2015 cyber attack on the Ukrainian power grid involved a very complex and resource-intensive preparation process before the attack was actually launched (Liang et al., 2016). Therefore, the attack scenario described in the threat model is practical in many circumstances.
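The parameter estimation step mentioned above can be sketched with plain linear algebra: if eavesdropped readings always satisfy an unknown homogeneous linear constraint, the right-singular vector of the stacked data matrix with a near-zero singular value recovers it. A NumPy sketch under a simulated constraint $m_1 = m_2 + m_3$ (the setup is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate eavesdropped readings that always satisfy m1 = m2 + m3.
m23 = rng.uniform(1.0, 10.0, size=(200, 2))
data = np.column_stack([m23.sum(axis=1), m23])   # columns: m1, m2, m3

# A right-singular vector with a ~zero singular value spans the constraint space.
_, s, vt = np.linalg.svd(data, full_matrices=False)
b_hat = vt[-1]                                   # estimated constraint direction
b_hat /= b_hat[0]                                # normalize the m1 coefficient to 1

print(np.round(b_hat, 6))                        # close to [ 1. -1. -1.]
```

With $q$ independent constraints, the $q$ smallest singular directions would be recovered instead of one; an inhomogeneous right-hand side $d$ can be absorbed by appending a constant column of ones.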

4. Design of ConAML

4.1. Linear Equality Constraints Analysis

As shown in (Goodfellow et al., 2014) and (Rozsa et al., 2016), the fundamental philosophy of non-targeted and targeted AML can be represented as (5a) and (5b), respectively:

$\delta = \epsilon \nabla_{M} L(f(M; \theta), Y)$   (5a)

$\delta = -\epsilon \nabla_{M} L(f(M; \theta), Y_t)$   (5b)

where $\epsilon$ is a small constant that scales the perturbation.
However, directly following the gradient will not guarantee the adversarial examples meet the constraints. As we discussed above, with the constraints imposed by the physical system, the attacker is no longer able to freely add perturbation to original input using the raw gradient of the input vector. In this subsection, we will analyze how the linear equality constraints will affect the way to generate perturbation and use a simple example for illustration.

Under the threat model proposed in Section 3.4, the constraint (3c) is always met due to the properties of the physical system. We then consider the constraint (3d).

Theorem 4.1.

The sufficient and necessary condition to meet constraint (3d) is $B\delta_{[c]} = 0$.


The proof of Theorem 4.1 is straightforward. Substituting equation (3b) into equation (3d) gives $B(M_{[c]} + \delta_{[c]}) = d$. From equation (3c) we know that $B M_{[c]} = d$. Therefore, we have $B\delta_{[c]} = 0$, which proves Theorem 4.1. ∎

From Theorem 4.1 we can also derive a very useful corollary, as shown below.

Corollary 4.2.

If $\delta^1$, $\delta^2$, …, $\delta^n$ are valid perturbation vectors that follow the constraints, then $\delta = \sum_{i=1}^{n} \delta^i$ is also a valid perturbation for the constraint $B\delta_{[c]} = 0$.


We have $B\delta_{[c]} = B \sum_{i=1}^{n} \delta^i_{[c]} = \sum_{i=1}^{n} B\delta^i_{[c]}$. Since each $\delta^i$ is a valid perturbation vector with $B\delta^i_{[c]} = 0$, we have $B\delta_{[c]} = 0$, which proves Corollary 4.2. ∎
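Corollary 4.2 is easy to verify numerically: perturbations in the null space of $B$ stay in the null space under addition. A toy check with the Kirchhoff-style constraint from Figure 1:

```python
import numpy as np

B = np.array([[1.0, -1.0, -1.0]])   # constraint m1 = m2 + m3

# Two perturbations in the null space of B (each preserves B @ delta = 0).
d1 = np.array([1.0, 1.0, 0.0])
d2 = np.array([2.0, 0.5, 1.5])
assert np.allclose(B @ d1, 0) and np.allclose(B @ d2, 0)

print(np.allclose(B @ (d1 + d2), 0))   # True: the sum is still a valid perturbation
```

This is what lets the search algorithms below accumulate many small valid steps into one valid perturbation.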

Theorem 4.1 indicates that the perturbation vector to be added to the original measurements must be a solution of the homogeneous linear equations $B\delta_{[c]} = 0$. But can such a solution always be found? We present Theorem 4.3 to answer this question; the proof can be found in Appendix A.

Theorem 4.3.

In practical scenarios, the attacker can always find a valid solution (perturbation) that meets the linear equality constraints imposed by the physical systems.

Figure 4. ConAML one-step illustration (linear equality).

We utilize a simplified example to illustrate how the constraints affect the generation of perturbations. Consider a simple ML model whose input has only two dimensions, $x_1$ and $x_2$, with a loss function $L$. Suppose the input measurements $x_1$ and $x_2$ need to meet a linear equality constraint, and the current measurement vector is $x^0$, as shown in Figure 4. According to (5), the measurement should move a small step (perturbation) in the gradient direction (direction 1 in Figure 4) to increase the loss most rapidly. However, as shown by the contour lines in Figure 4, the measurement is always forced to lie on the straight line defined by the constraint, which is the projection of the intersection of the two surfaces. Accordingly, instead of following the raw gradient, $x^0$ should move in direction 2 to increase the loss. Therefore, although at a relatively slow rate, it is still possible for the attacker to increase the loss under the constraints.
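Direction 2 in Figure 4 can be computed by orthogonally projecting the raw gradient onto the null space of the constraint matrix. The projector $P = I - B^{+}B$ used below is a standard construction, not notation from the paper:

```python
import numpy as np

def project_to_nullspace(g, B):
    """Project a raw gradient g onto the null space of B, i.e. onto the
    directions that leave B @ x unchanged when added to x."""
    P = np.eye(B.shape[1]) - np.linalg.pinv(B) @ B
    return P @ g

# 2-D toy in the spirit of Figure 4: constraint x1 + x2 = const, raw gradient (3, 1).
B = np.array([[1.0, 1.0]])
g = np.array([3.0, 1.0])
step = project_to_nullspace(g, B)
print(step)                       # [ 1. -1.]  -> moves along the constraint line
print(np.allclose(B @ step, 0))   # True
```

The projected step is the component of the gradient along the constraint line, which is exactly the slower-but-valid ascent direction the figure illustrates.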

4.2. Linear Equality Constraint Adversarial Example Generation

The common method for solving optimization problems with gradient descent under constraints is projected gradient descent (PGD). However, since neural networks are generally not convex functions (Choromanska et al., 2014), PGD cannot be used directly to generate adversarial examples. We therefore propose a simple but effective search algorithm to generate adversarial examples under physical linear equality constraints.

As discussed in Section 4.1, the perturbation needs to be a solution of $B\delta_{[c]} = 0$. We use $r$ to denote the rank of the matrix $B$, where $r \le \min(q, k)$. The solution set of the homogeneous linear equations then has $k - r$ basic solution vectors. We use $\pi$ to denote the indexes of the independent (free) variables in the solution set, $\rho$ to denote the indexes of the corresponding dependent variables, and $D$ to denote the linear dependency matrix between them. Clearly, we have $\delta_{[\rho]} = D\,\delta_{[\pi]}$. An example of generating $\pi$, $\rho$, and $D$ can be found in Appendix B. For convenience, we refer to the process of obtaining $\pi$, $\rho$, and $D$ from the matrix $B$ as a decomposition of $B$.
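The $k - r$ basic solution vectors can be obtained numerically from the SVD of $B$. A sketch (the helper name and tolerance are ours; an RREF-based free/dependent-variable split would give the same space):

```python
import numpy as np

def nullspace_basis(B, tol=1e-10):
    """Return k - r basis vectors of the solution set of B @ delta = 0 via SVD."""
    _, s, vt = np.linalg.svd(B)
    r = int(np.sum(s > tol))        # rank of B
    return vt[r:].T                 # columns span the null space

B = np.array([[1.0, -1.0, -1.0]])   # k = 3 measurements, one constraint
basis = nullspace_basis(B)
print(basis.shape)                  # (3, 2): k - r = 3 - 1 = 2 basis vectors
print(np.allclose(B @ basis, 0))    # True
```

Any perturbation built as a linear combination of these columns automatically satisfies the equality constraints.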

We assume the attacker can obtain an alternative dataset, or generate his/her own dataset, to train a local model $f'$. The underlying assumption behind the effectiveness of our approach is the transferability property of neural networks.

4.2.1. Non-targeted Attack

For clarity, we describe the search algorithms in a top-down order.

We first deal with the challenge of the attacker's limited knowledge of the uncompromised measurements $M_{[u]}$. This challenge is difficult to tackle since the complete measurement vector is needed to compute the gradient values according to (5). In 2017, Moosavi-Dezfooli et al. proposed the universal adversarial perturbation scheme, which generates image-agnostic adversarial perturbation vectors (Moosavi-Dezfooli et al., 2017). The identical universal perturbation vector can cause different images to be misclassified by state-of-the-art ML-based image classifiers with high probability. The basic philosophy of (Moosavi-Dezfooli et al., 2017) is to iteratively and incrementally build a perturbation vector that can misclassify a set of images sampled from the whole dataset.

Inspired by their approach, we now present our algorithm, which does not require the attacker to know $M_{[u]}$. We define an ordered set $X_s$ of sampled uncompromised measurement vectors, and use $M^i$ to denote the measurement vector crafted from $M_{[c]}$ and the $i$-th sampled uncompromised measurement vector in $X_s$. Here, $M^i$ is a crafted measurement vector with $M^i_{[c]} = M_{[c]}$. The uncompromised measurement vectors in $X_s$ can be randomly selected from the attacker's alternative dataset.

1  Input: $f'$, $X_s$, $M_{[c]}$, $\epsilon$, $\alpha$, $Y$, $(I_{max}, E_{max})$
2  Output: $M^{adv}_{[c]}$
3  function eqExpSearch($f'$, $X_s$, $M_{[c]}$, $\epsilon$, $\alpha$, $Y$, $(I_{max}, E_{max})$)
4      initialize $\delta \leftarrow 0$
5      build the set $X_c$ of crafted measurement vectors from $M_{[c]}$ and $X_s$
6      set counter $i \leftarrow 0$
7      while sampleEva($f'$, $X_c$, $\delta$, $Y$) $> 1 - \alpha$ do
8          if $i > I_{max}$ then
9              break
10         end if
11         for $M^j$ in $X_c$ do
12             $\delta$ = genEqExp($f'$, $M^j$, $\delta$, $Y$, $\epsilon$, $E_{max}$)
13         end for
14         $i$++
15     end while
16     return $M^{adv}_{[c]} \leftarrow M_{[c]} + \delta_{[c]}$
17 end function
Algorithm 1 Best-Effort Non-Targeted Adversarial Example Search (Linear Equality)

Algorithm 1 describes the high-level approach to generating adversarial perturbations regardless of the uncompromised measurements. The function eqExpSearch takes the attacker's ML model $f'$, the set of sampled uncompromised measurements $X_s$, the compromised measurement vector $M_{[c]}$, two constants $\epsilon$ and $\alpha$, the original label $Y$, and a constant set $(I_{max}, E_{max})$ as inputs, and outputs the adversarial example for the real-time $M_{[c]}$.

The algorithm first builds a set $X_c$ of crafted measurement vectors based on $M_{[c]}$ and $X_s$, and then iterates over $X_c$. The purpose is to find a universal $\delta$ that causes a portion of the vectors in $X_c$ to be misclassified by $f'$. $\alpha$ is a constant chosen by the attacker that determines the required attack success rate over $X_c$. The function sampleEva evaluates $X_c$ and $\delta$ with the ML model $f'$ and returns the classification accuracy, as shown in Algorithm 2. During each search iteration, the algorithm builds and maintains the perturbation incrementally. As shown in line 12 of Algorithm 1, the function genEqExp launches a multi-step search until a successful perturbation is found for $M^j$. The detailed design of genEqExp is presented in Algorithm 3.
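The loop structure of Algorithm 1 can be sketched end to end on a toy problem: a hypothetical linear local model, one unknown measurement guessed from a sample set, and a null-space projector that keeps every perturbation valid. All names, the model, and the numbers are invented for illustration; the paper's algorithm is more general:

```python
import numpy as np

# Toy surrogate "model": score = w @ x, class 1 iff score > 0.
# x = [m1, m2, m3, m4]; m1..m3 are compromised, m4 is unknown to the attacker.
w = np.array([1.0, 0.5, 0.5, 0.2])

def predict(x):
    return int(w @ x > 0)

# Constraint among compromised entries: m1 = m2 + m3  ->  B @ delta_c = 0.
B = np.array([[1.0, -1.0, -1.0]])
P = np.eye(3) - np.linalg.pinv(B) @ B      # projector onto the null space of B

def eq_exp_search(m_c, samples, alpha=1.0, max_iter=200, step=0.1):
    """Best-effort search sketch: grow one perturbation delta until at least
    an alpha fraction of the sampled uncompromised completions flip class."""
    delta = np.zeros(3)
    g_c = w[:3]                            # gradient of the score w.r.t. m_c
    direction = P @ (-g_c)                 # descend the score, stay on B delta = 0
    for _ in range(max_iter):
        crafted = [np.concatenate([m_c + delta, [m4]]) for m4 in samples]
        if np.mean([predict(x) == 1 for x in crafted]) <= 1 - alpha:
            break                          # enough crafted vectors misclassified
        delta += step * direction
    return delta

m_c = np.array([10.0, 6.0, 4.0])           # consistent compromised readings
samples = [1.0, 2.0, 3.0]                  # guessed values for the unseen m4
delta = eq_exp_search(m_c, samples)

print(np.allclose(B @ delta, 0))           # True: constraint preserved
print([predict(np.concatenate([m_c + delta, [m4]])) for m4 in samples])
```

For a neural network the analytic gradient `w[:3]` would be replaced by a backpropagated gradient recomputed at each step, but the projection and the sample-set stopping rule carry over unchanged.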

1  Input: $f'$, $X_c$, $\delta$, $Y$
2  Output: Classification Accuracy
3  function sampleEva($f'$, $X_c$, $\delta$, $Y$)
4      add the perturbation $\delta$ to all vectors in $X_c$
5      evaluate $X_c$ with $f'$ and label $Y$
6      return the classification accuracy of $f'$
7  end function
Algorithm 2 Sample Evaluation

Figure 5 presents a simple illustration of the iteration process in Algorithm 1. There are three sensors, one of which is compromised by the attacker. The yellow, green, and orange shaded areas in the plane represent the possible adversarial examples for the crafted measurement vectors $M^1$, $M^2$, and $M^3$, respectively. The initial point (red) iterates twice and finally reaches the intersection region with the accumulated universal perturbation vector $\delta$. Therefore, $M_{[c]} + \delta_{[c]}$ yields a valid adversarial example for all vectors in $X_c$.

Figure 5. Iteration illustration.

In practice, it is possible that the iteration process never terminates. Therefore, the attacker needs to bound the number of iterations and search steps so that the attack completes within one sampling period of the CPS. We let the attacker select the two constants in $(I_{max}, E_{max})$ in Algorithm 1: $I_{max}$ is an integer that bounds the number of traversal rounds over $X_c$, while $E_{max}$ is the maximum number of search steps, as shown in Algorithm 3. It is important to note that this set of constants determines the tradeoff between the attack's transferability and effectiveness, and it should therefore be carefully tuned to meet practical attack requirements.

Comparison of Methods: Our approach differs from (Moosavi-Dezfooli et al., 2017) in several aspects. First, the approach in (Moosavi-Dezfooli et al., 2017) uses identical adversarial perturbations for different ML inputs, while our approach generates a distinct perturbation for each . Second, the approach in (Moosavi-Dezfooli et al., 2017) builds universal perturbations regardless of the real-time ML inputs. However, since the attacker has already compromised a portion of the measurements, it is more effective to take advantage of this knowledge. In other words, our perturbations are ‘universal’ for but ‘distinct’ for . Finally, the intrinsic properties of CPSs require the attacker to generate a valid adversarial example within a sampling period, while (Moosavi-Dezfooli et al., 2017) enforces no limit on iteration time.

1Input: , , , , , , ,
2 Output:
3 function
4       initialize
5       initialize
6       while   do
7             if  does not equal  then
8                   return
9             end if
11             update
14       end while
15      return
16 end function
Algorithm 3 Multi-Step Non-Targeted Perturbation Search (Linear Equality)

As shown in (Kurakin et al., 2016b), single-step attacks usually have better transferability but can also be easily defended against. Moreover, due to the large variance of the measurements in practical CPSs, single-step attacks may not be powerful enough for the attackers. We now introduce the design of genEqExp, a multi-step algorithm to find valid adversarial perturbations, as shown in Algorithm 3.

The function genEqExp takes as input and outputs a valid perturbation for . Algorithm 3 executes eqOneStep repeatedly, up to the number of times defined by , to build a valid incrementally. The function eqOneStep performs a single-step attack on the input vector and returns a one-step perturbation that matches the constraints defined by , as shown in Algorithm 4. By Corollary 4.2, and will also follow the constraints. To reduce iteration time, similar to (Moosavi-Dezfooli et al., 2016), the algorithm returns the crafted adversarial example as soon as misclassifies the input measurement vector , as shown in Line 7 of Algorithm 3.
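Hypothetically, the control flow of genEqExp can be sketched as follows, with illustrative names (`one_step` stands in for eqOneStep, and `loss_grad` for the gradient computation; neither is the paper's identifier):

```python
import numpy as np

def gen_eq_exp(x, y_label, model_predict, loss_grad, one_step, max_steps):
    """Multi-step non-targeted search mirroring Algorithm 3 (sketch).

    Repeatedly applies a constraint-preserving one-step attack and returns
    as soon as the model misclassifies the perturbed vector, so the search
    stays within one CPS sampling period.
    """
    delta = np.zeros_like(x)
    for _ in range(max_steps):
        if model_predict(x + delta) != y_label:
            return delta                        # early return on misclassification
        delta = delta + one_step(loss_grad(x + delta))
    return delta
```

Because each `one_step` output satisfies the linear constraints, the accumulated perturbation does as well.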

1Input: , , , , ,
2 Output:
3 function
4       calculate gradient vector
5       define
6       obtain tuple
7       update in
9       return
10 end function
Algorithm 4 One Step Attack Constraint

The philosophy of function eqOneStep is straightforward. From the constraint matrix , we can obtain the independent variables , the dependent variables , and the dependency matrix between them. We simply keep the gradient values of and use them to compute the corresponding values of (Line 7) so that the final output perturbation follows . It is worth noting that the variance of the gradient vector in Algorithm 4 can be large in practical training; therefore, the constant factor defines the largest modification the attacker can make to a specific measurement and controls the search speed.
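As a concrete illustration (not the paper's implementation), suppose the equality constraints let us express the dependent perturbation components through a dependency matrix `B`, so that `delta[dep_idx] = B @ delta[ind_idx]`. One constrained step can then be sketched as:

```python
import numpy as np

def eq_one_step(grad, ind_idx, dep_idx, B, alpha):
    """One-step perturbation under linear equality constraints (sketch).

    grad: loss gradient w.r.t. the compromised measurements.
    ind_idx / dep_idx: indices of independent / dependent measurements.
    B: dependency matrix with delta[dep_idx] = B @ delta[ind_idx],
       derived from the constraint matrix.
    alpha: caps the per-measurement modification and controls search speed.
    """
    delta = np.zeros_like(grad)
    # follow the gradient sign on the independent measurements, capped by alpha
    delta[ind_idx] = alpha * np.sign(grad[ind_idx])
    # recompute the dependent measurements so the constraints still hold
    delta[dep_idx] = B @ delta[ind_idx]
    return delta
```

For example, under the constraint d0 + d1 = d2, taking the gradient sign on the first two components automatically fixes the third, so the output perturbation stays on the constraint hyperplane.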

4.2.2. Targeted Attack

The design of the targeted attack algorithms is very similar to that of the non-targeted attack. Specifically, the targeted attack makes the following modifications to the above algorithms:

  • In Algorithm 1, we set the targeted label as the input instead of . Therefore, is delivered to function sampleEva and genEqExp as the input label. Meanwhile, we also need to replace the while loop in Line 7 with condition .

  • In Algorithm 3, we replace the condition statement in Line 7 with equals , and modify Line 11 to . Since we already set as the input in Algorithm 1, we can keep the notation in Algorithm 3 without any changes.

We implement both the non-targeted attack and the targeted attack and present their performance in Sections 5 and 6.

4.3. Linear Inequality Constraint Adversarial Example Generation

Linear inequality constraints are very common in real-world CPS applications, such as the water flow constraints in Figure 3. Due to measurement noise, real-world systems usually tolerate deviations between measurements and expected values as long as the deviations are smaller than predefined thresholds, which also introduces inequality constraints on the data. Meanwhile, a linear equality constraint can be represented by two linear inequality constraints. As shown in (4), linear inequality constraints define the valid measurement subspace whose boundary hyperplanes are defined by (2). In general, the search process under linear inequality constraints falls into two situations: the first is when a point (measurement vector) lies in the interior of the subspace and meets all constraints, while the second occurs when the point reaches the boundaries. In this subsection, we directly introduce the algorithms for non-targeted adversarial attacks under linear inequality constraints; the methods for handling the uncompromised measurement problem and the targeted attack are similar to those in the last subsection.

1Input: , , , ,
2 Output:
3 function
4       calculate gradient vector
5       set elements in to zero
7       return
8 end function
Algorithm 5 Non-Constraint Perturbation

Due to the properties of physical systems, the original point naturally meets all the constraints. As shown in Algorithm 6, to increase the loss, the original point first tries to move a step along the gradient direction. After that, the new point is checked against (4) to determine whether all inequality constraints are met. If so, the step is valid and we can update . If violates some constraints in , we collect all the violated constraints into a real-time constraint matrix , where is the index vector of the violated constraints. This converts the inequality constraint problem into an equality constraint problem with the new constraint matrix and the original point . then tries to take a step using the same method described in Algorithm 4 with the new constraint matrix . Again, we check whether the newly reached point meets all the constraints. If there are still violated constraints, we extend with the newly violated ones. The search process repeats until it reaches a valid that meets all the constraints. For simplicity, we use to denote the checking process of a single search in a one-step movement, where is the index vector of the violated constraints in that search.
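The checking process for a one-step movement can be sketched as returning the index vector of violated constraints. The names are illustrative, and we assume the inequality system is written in the standard form A_ub · x ≤ b_ub:

```python
import numpy as np

def violated_constraints(x, A_ub, b_ub, tol=1e-9):
    """Return the index vector of inequality constraints violated by x,
    assuming the constraint system is written as A_ub @ x <= b_ub."""
    return np.where(A_ub @ x > b_ub + tol)[0]
```

An empty index vector means the step was valid; a non-empty one supplies the rows used to build the real-time equality constraint matrix.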

1Input: , , , , , , , , ,
2 Output:
3 function
4       initialize =
5       initialize =
6       initialize =
7       initialize as empty // violated constraint index
8       while   do
9             if  does not equal  then
10                   break
11             end if
13             if  is empty then
14                   =
17                   reset to empty
19            else
20                   extend with
21                   define // real-time constraints
25             end if
28       end while
29      return
30 end function
Algorithm 6 Multi-Step Non-Targeted Perturbation Search (Linear Inequality)

Function freeStep performs a single-step attack without constraints. Algorithm 5 is very similar to the FGM algorithm (Rozsa et al., 2016), except that no perturbation is added to , namely , which is similar to the saliency map function used in (Papernot et al., 2016).

Figure 6. ConAML iteration illustration (linear inequality).

A simple example is shown in Figure 6. Similar to Figure 4, we have the loss function with inequality constraints . To increase the loss, the initial point takes a small step along the gradient direction and reaches point . Since meets the constraints, it is a valid point. then moves a step along the gradient direction and reaches point . However, point violates the constraint , so the movement is not valid. Since point is valid, we construct a linear equality constraint problem with constraint , which is parallel to . Under constraint , point moves a step to point , which is also a valid point. Point then repeats the search process and increases the loss gradually. Each real-time equality constraint is used only once: whenever a new valid point is reached, the algorithm discards the previous equality constraints and tries the gradient direction first.

4.4. Extension

When the constraints are nonlinear, the single-step attack becomes more complex. In general, similar to the linear case, the nonlinear constraints on the compromised measurements can be represented as (6), where is a nonlinear function of .


We now investigate a special case of nonlinear constraints. If there exists a subset of the compromised measurements in which each measurement can be represented as an explicit function of the measurements in the complement set, the attacker can still generate the perturbation accordingly. We use to denote the index vector of the former measurement set, and to denote the index vector of the complement set. We can then represent (6) as (7), where is a vector of explicit functions.


The roles of and in (7) are similar to those of and in the linear constraints, respectively. Instead of a linear matrix, the function set represents the dependency between and . The nonlinear constraints make properties such as Theorem 1 inapplicable. To meet the constraints, the attacker needs to find the perturbation first and obtain by adding it to . After that, the attacker can compute .
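A hypothetical sketch of this special case (all names are illustrative): the attacker perturbs the independent measurements along the gradient sign first, then recomputes the dependent measurements through the explicit functions so the nonlinear constraints still hold.

```python
import numpy as np

def nonlinear_step(x, ind_idx, dep_idx, f, grad, alpha):
    """One perturbation step under explicit nonlinear constraints
    x[dep_idx] = f(x[ind_idx]) (illustrative sketch)."""
    delta = np.zeros_like(x)
    # perturb the independent measurements first
    delta[ind_idx] = alpha * np.sign(grad[ind_idx])
    # the dependent perturbation is whatever keeps the constraints satisfied
    delta[dep_idx] = f(x[ind_idx] + delta[ind_idx]) - x[dep_idx]
    return delta
```

For instance, under the toy constraint x1 = x0², perturbing x0 forces a specific change in x1 so the perturbed vector still satisfies the constraint.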

However, the above case of nonlinear constraints is too special and may not be scalable to various practical applications. We regard the nonlinear constraints as an open problem and encourage the related research communities to investigate other nonlinear constraints in different real-world CPS applications.

5. Equality Constraint Case Study: Power Grid

5.1. Background: State Estimation and FDIA

Power grids are critical infrastructures that connect power generation to end customers through transmission and distribution grids. In recent decades, the rapid development of sensing, communication, and computing technologies has enabled various applications in the power grid. However, as the power system becomes more complex and dependent on information and communications technology, the threat of cyber attacks also increases, and the cyber-power system becomes more vulnerable (Ten et al., 2008) (Tong, 2015). The 2015 cyberattack on the Ukrainian power grid is a well-known example (Liang et al., 2016).


State estimation is a backbone of various crucial applications in power system control, enabled by large-scale sensing and communication technologies such as supervisory control and data acquisition (SCADA). Generally speaking, state estimation estimates the state of each bus, such as voltage angles and magnitudes, by analyzing other measurements. A DC model of state estimation can be represented as (8), where z is the measurement vector, x is the state vector, and is a matrix determined by the topology, physical parameters, and configurations of the power grid. Due to possible meter instability and cyber attacks, bad measurements e may be introduced into z. To address this, the power system employs a linear residual-based detection scheme to remove erroneous measurements (Monticelli, 2012). However, in 2011, Liu et al. proposed the false data injection attack (FDIA), which can bypass the residual-based detection scheme and pollute the result of state estimation (Liu et al., 2011). In particular, if the attacker knows H, she/he can construct a faulty vector a that meets the linear constraint , where , and the crafted faulty measurements will not be detected by the system. A detailed introduction to state estimation, residual-based error detection, and FDIA can be found in Appendix C.1.
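A minimal numeric sketch (our illustration, not the paper's code) of why an injection of the form a = Hc bypasses residual-based detection: the residual compares measurements against the re-estimated state, and adding Hc merely shifts the estimated state by c without changing the residual.

```python
import numpy as np

def residual(z, H):
    """L2 residual ||z - H @ x_hat|| with the least-squares state estimate."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 3))      # measurement matrix (6 meters, 3 states)
x = rng.normal(size=3)           # true system states
z = H @ x                        # legitimate (noise-free) measurements

c = np.array([0.5, -1.0, 2.0])   # attacker's chosen state offset
z_attacked = z + H @ c           # FDIA injection a = H @ c
```

Both residual(z, H) and residual(z_attacked, H) are numerically zero, so a residual threshold cannot distinguish the attacked measurements, even though the estimated state is now shifted by c. Random, unstructured bad data, in contrast, produces a nonzero residual.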

As FDIA poses a serious threat to power grid security, many detection and mitigation schemes have been proposed, including strategic measurement protection (Bi and Zhang, 2011) and PMU-based protection (Yang et al., 2017). In recent years, ML-based detection schemes have been proposed and become popular. Ozay et al. investigated the detection performance of traditional ML algorithms such as SVM (Ozay et al., 2015). After that, deep learning-based models, such as artificial neural networks (ANN) (He et al., 2017) and convolutional neural networks (CNN) (Niu et al., 2019), were also employed to detect FDIA. However, in this section, we will demonstrate that these ML approaches are vulnerable to ConAML. With well-crafted adversarial perturbations, the attacker's polluted measurements can not only pass the traditional residual-based approach but also significantly decrease the detection accuracy of the ML schemes.

5.2. Experiment Design

In our experiment, the attacker's goal is to implement a false negative attack that makes the polluted measurements pass the detection of the ML models, i.e., to fool the models into misclassifying the false measurements as normal. The design of the experiment can be summarized as follows:

  • First, we generate two training datasets based on a simulated power grid system, with half of the data records in each dataset being polluted measurement vectors that meet the constraints .

  • Second, we will train two neural networks with different structures to classify the input measurements using the two training datasets accordingly.

  • We assume 15 measurements are compromised by the attacker. A separate test dataset that contains only polluted measurements will be generated. We will evaluate the defender’s detection performance with this test dataset.

  • Finally, we launch a black-box attack by crafting adversarial examples from the test dataset based on our search algorithms. Again, we will use the defender’s model to detect the crafted false measurements.

Figure 7. IEEE 10-machine 39-bus system (Pai et al., 1989).

We select the IEEE standard 10-machine 39-bus system as the power grid and utilize the MATPOWER library (Zimmerman et al., 2010) for simulation and dataset generation. We implement our algorithms with the Tensorflow and Keras libraries. A detailed description of our experiment implementation, including the data simulation, FDIA, and the ML models, can be found in Appendix C.2.

5.3. Evaluation

We select three metrics to evaluate the performance of the attack. The first metric is the decrease in the defender’s ML detection accuracy (in percentage). The second is the magnitude of the noise injected into the legitimate measurements. The attacker needs the adversarial examples to bypass detection while maintaining their malicious effect; a too-small injected noise would defeat the attack’s original intention even if it bypasses detection. In this experiment, we select the -Norm of the valid noise vector as the second metric to compare the magnitude of the maliciously injected data. Finally, as the attack needs to finish within a sampling period of the CPS, we will compare the time cost of adversarial example generation.

As a baseline, we assume a supreme attacker who has full access to the CPS ML model and is not subject to any constraints. The supreme attacker utilizes the methods proposed in (Rozsa et al., 2016) to generate adversarial examples. We compare the performance of our attack with the supreme attack to demonstrate the impact of the constraints and of the attacker’s limited knowledge of the victim’s model and measurements.

5.3.1. Accuracy

We test the performance of both the non-targeted and targeted attacks in terms of the decrease in detection accuracy, as shown in Figure 8. From Figure 8, we can see that although the largest accuracy decrease of the ConAML attack is smaller than that of the supreme attacker, our methods still achieve considerable attack performance. In our experiment, the black-box attack causes an over 65% decrease in the defender’s detection accuracy, which indicates that the generated adversarial examples maintain very good transferability. Meanwhile, the non-targeted and targeted attacks perform very similarly in this binary classification problem.

Figure 8. False measurement detection accuracy decrease (in percentage) when , , .

5.3.2. Noise Magnitude

As mentioned above, the valid noise injected into the legitimate measurements is important for the attack’s effectiveness. We denote the legitimate measurements of the compromised sensors as , and the attacker crafts the attack vector . After that, the attacker generates an adversarial example () that successfully bypasses the defender’s attack detection model. We then have the valid bad noise .
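As a hypothetical numeric illustration (all values invented, with the L2 norm used for illustration), the valid bad noise is the total deviation of the detector-bypassing adversarial example from the legitimate measurements:

```python
import numpy as np

z_legit = np.array([100.0, 250.0, 80.0])  # legitimate compromised readings (invented)
a = np.array([30.0, -40.0, 25.0])         # attacker's intended injection (invented)
delta = np.array([2.0, 1.5, -3.0])        # adversarial perturbation (invented)

z_adv = z_legit + a + delta               # adversarial example fed to the detector
valid_noise = z_adv - z_legit             # the valid bad noise (= a + delta)
magnitude = np.linalg.norm(valid_noise)   # noise-magnitude metric (L2 here)
```

The perturbation slightly distorts the intended injection, so the metric measures how much malicious deviation actually survives after the example is made detector-evasive.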

Figure 9. Magnitude of injected noise when , , .

In our experiment, we evaluate the magnitude of the valid injected bad noise with the average -Norm of all , and the result is shown in Figure 9. The black-box attack’s range is from around 200 to 300. The injected noise is smaller than the supreme attacker’s but is still considerable compared with the original measurements.

5.3.3. Attack Time Cost

Figure 10. Average time cost of each adversarial attack in milliseconds, with , , .

In our experiment, the supreme attacker can generate an adversarial example within 10 milliseconds; the time cost of the ConAML attacks can be found in Figure 10. From Figures 10 and 8 we can see that an effective adversarial example with a high probability of bypassing detection can be built in around 50 milliseconds, which is efficient enough for many practical CPS applications. For example, the sampling period of traditional SCADA systems used in electrical power systems is 2 to 4 seconds, while recently developed PMUs have sampling frequencies ranging from 10 Hz to 60 Hz. With possible optimization and upgrades in software and hardware, the time cost can be further reduced.

6. Inequality Constraint Case Study: Water Treatment System

6.1. Background: SWaT Dataset

In this section, we study linear inequality constraints based on the Secure Water Treatment (SWaT) testbed proposed in (Goh et al., 2016). SWaT is a scaled-down but fully operational water treatment system. The testbed has six main processes and consists of cyber control components (PLCs) and the physical components of the water treatment facility. The SWaT dataset, generated by the SWaT testbed, is a public dataset for investigating cyber attacks on CPSs. The raw dataset has 946,722 samples, each comprising 51 attributes, including the measurements of 25 sensors and the states of 26 actuators. Each sample in the dataset is labeled as normal or attack. (Goh et al., 2016) investigated four kinds of attacks based on the number of attack points and their locations. A detailed description of the SWaT dataset can be found in (Goh et al., 2016) and (Labs, 2019).

The SWaT dataset is an important resource for studying CPS security. In 2017, Inoue et al. used unsupervised machine learning, including Long Short-Term Memory (LSTM) networks and SVM, to perform anomaly detection on the SWaT dataset (Inoue et al., 2017). By comparison, Kravchik et al. employed convolutional neural networks (CNN) and achieved a better false positive rate (Kravchik and Shabtai, 2018). In 2019, (Feng et al., 2019) proposed a data-driven framework to derive invariant rules for anomaly detection in CPSs and utilized SWaT to evaluate their approach. Other research related to the SWaT dataset can be found in (Chen et al., 2018) (Feng et al., 2017) (Ahmed et al., 2018).

6.2. Experimental Design and Implementation

Symbol Description Unit
LIT Level Indication Transmitter
FIT Flow Indication Transmitter
AIT Analyzer Indication Transmitter
PIT Pressure Indication Transmitter
DPIT Differential Pressure Indication Transmitter

Table 2. SWaT Analog Components

As shown in Table 2, the SWaT dataset includes measurements from five kinds of analog components. We examined the SWaT testbed structure and found that there should be linear inequality constraints among the FIT measurements. We then checked the SWaT dataset, and the normal examples in the dataset verified our finding. The linear inequality constraints on the FIT measurements are defined by the structure of the water pipelines and the placement of the sensors. In our experiment, we assume the attacker has compromised 7 FIT measurements. The specific compromised measurements and the corresponding constraint matrix can be found in Appendix D.1.

As the SWaT dataset does not contain the attacks proposed in this paper, we generate new datasets from the raw SWaT dataset by adding bad noise that follows the inequality constraints. Again, we utilize two separate datasets to train ML models for the defender and the attacker, respectively. Another test dataset containing only bad records is used for algorithm evaluation. We finally use 25 measurements as training features, with 7 measurements compromised by the attacker. Appendix D.2 gives a detailed description of the experiment implementation.

6.3. Evaluation

We evaluate the performance of the ConAML attack on the water treatment system with the same three metrics used in the power grid case study. In our experiment, the supreme attacker achieves an over 70% accuracy decrease, a 0.8 noise magnitude, and around 20 milliseconds time cost. The performance of the ConAML attacks is shown in Figures 11 to 13.

Figure 11. False measurement detection accuracy decrease (in percentage) when , , .

Figure 12. Magnitude of injected noise when , , .

Figure 13. Average time cost of each adversarial attack in milliseconds when , , , and .

Overall, the ConAML attack achieves an over 30% detection accuracy decrease in both the non-targeted and targeted attacks. Meanwhile, the valid bad noise of the two attacks is very close. The largest valid bad noise is around 0.55, which is also significant compared with the normal measurements in the dataset. Similar to the linear equality case study, the attack time cost increases as more search steps are taken. From Figures 11 and 13, an effective example can be generated within 100 milliseconds ().

7. Limitations and Future Work

As mentioned in Section 1, this paper mainly investigates linear constraints on input measurements in CPSs and neural network-based ML algorithms. In the future, we will extend ConAML to nonlinear constraints and other general ML algorithms, such as SVM and KNN. We encourage related communities to present different CPSs that impose special constraints. Meanwhile, the constraint matrix derived by the attacker may not be exactly the same as the real constraints. In future work, we will also investigate the scenario in which the attacker has only partial knowledge of the constraint matrix.

As summarized in (Yuan et al., 2019), defense mechanisms like adversarial re-training and adversarial detection can increase the robustness of neural networks and are likely to mitigate ConAML attacks. However, most defenses in previous research target adversarial examples in computer vision tasks. In future work, we will study the state-of-the-art defense mechanisms in previous research and evaluate their performance against adversarial examples generated by ConAML. We will also investigate defense mechanisms that directly take advantage of the properties of physical systems, such as sensor deployments that make the attackers’ constraints more stringent.

8. Conclusion

The potential vulnerabilities of ML applications in CPSs deserve serious concern. In this paper, we investigate the input constraints of AML algorithms in CPSs. We analyze the differences between adversarial examples in CPSs and in purely computational applications, such as computer vision, and give a formal threat model for AML in CPSs. We propose best-effort search algorithms to effectively generate adversarial examples that meet linear constraints. Finally, as proofs of concept, we study the vulnerabilities of ML models used for FDIA detection in power grids and anomaly detection in water treatment systems. The evaluation results show that even with the constraints imposed by the physical systems, our approach can still effectively generate adversarial examples that significantly decrease the detection accuracy of the defender’s ML models.


  • C. M. Ahmed, J. Zhou, and A. P. Mathur (2018) Noise matters: using sensor and process noise fingerprint to detect stealthy cyber attacks and authenticate sensors in cps. In Proceedings of the 34th Annual Computer Security Applications Conference, pp. 566–581. Cited by: §6.1.
  • S. Bi and Y. J. Zhang (2011) Defending mechanisms against false-data injection attacks in the power system state estimation. In 2011 IEEE GLOBECOM Workshops (GC Wkshps), pp. 1162–1167. Cited by: §5.1.
  • N. Carlini, P. Mishra, T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, and W. Zhou (2016) Hidden voice commands. In 25th USENIX Security Symposium (USENIX Security 16), pp. 513–530. Cited by: §2.
  • A. Chakraborty, M. Alam, V. Dey, A. Chattopadhyay, and D. Mukhopadhyay (2018) Adversarial attacks and defences: a survey. arXiv preprint arXiv:1810.00069. Cited by: §3.2.
  • Y. Chen, C. M. Poskitt, and J. Sun (2018) Learning from mutants: using code mutation to learn and monitor invariants of a cyber-physical system. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 648–660. Cited by: §3.1, §6.1.
  • A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun (2014) The loss surface of multilayer networks. CoRR abs/1412.0233. External Links: Link, 1412.0233 Cited by: §4.2.
  • J. J. Dabrowski, A. Rahman, A. George, S. Arnold, and J. McCulloch (2018) State space models for forecasting water quality variables: an application in aquaculture prawn farming. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 177–185. Cited by: §1.
  • Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li (2018) Boosting adversarial attacks with momentum. In Proceedings of the IEEE CVPR, pp. 9185–9193. Cited by: §2.
  • C. Feng, T. Li, Z. Zhu, and D. Chana (2017) A deep learning-based framework for conducting stealthy attacks in industrial control systems. arXiv preprint arXiv:1709.06397. Cited by: §6.1.
  • C. Feng, V. R. Palleti, A. Mathur, and D. Chana (2019) A systematic framework to generate invariants for anomaly detection in industrial control systems.. In NDSS, Cited by: §6.1.
  • A. Ghafouri, Y. Vorobeychik, and X. Koutsoukos (2018) Adversarial regression for detecting attacks in cyber-physical systems. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pp. 3769–3775. External Links: Document, Link Cited by: §2.
  • J. Goh, S. Adepu, K. N. Junejo, and A. Mathur (2016) A dataset to support research in the design of secure water treatment systems. In International Conference on Critical Information Infrastructures Security, pp. 88–99. Cited by: §D.1, §6.1.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §1, §1, §2, §4.1.
  • A. Graves, A. Mohamed, and G. Hinton (2013)

    Speech recognition with deep recurrent neural networks

    In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6645–6649. Cited by: §1.
  • K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel (2017) Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pp. 62–79. Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In

    Proceedings of the IEEE conference on computer vision and pattern recognition

    pp. 770–778. Cited by: §1.
  • Y. He, G. J. Mendis, and J. Wei (2017) Real-time detection of false data injection attacks in smart grid: a deep learning-based intelligent mechanism. IEEE Transactions on Smart Grid 8 (5), pp. 2505–2516. Cited by: §5.1.
  • J. Inoue, Y. Yamagata, Y. Chen, C. M. Poskitt, and J. Sun (2017) Anomaly detection for a water treatment system using unsupervised machine learning. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 1058–1065. Cited by: §6.1.
  • R. Jia and P. Liang (2017) Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328. Cited by: §2.
  • M. Kravchik and A. Shabtai (2018) Detecting cyber attacks in industrial control systems using convolutional neural networks. In Proceedings of the 2018 Workshop on Cyber-Physical Systems Security and PrivaCy, pp. 72–83. Cited by: §1, §6.1.
  • A. Kurakin, I. Goodfellow, and S. Bengio (2016a) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §2.
  • A. Kurakin, I. Goodfellow, and S. Bengio (2016b) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236. Cited by: §2, §4.2.1.
  • I. Labs (2019) Secure Water Treatment (SWaT) Dataset. Note:[Online; accessed 15-08-2019] Cited by: §6.1.
  • P. Laskov et al. (2014) Practical evasion of a learning-based classifier: a case study. In 2014 IEEE symposium on security and privacy, pp. 197–211. Cited by: §2, §3.2.
  • J. Li, S. Ji, T. Du, B. Li, and T. Wang (2018) TextBugger: generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271. Cited by: §2.
  • L. Li, C. M. Liang, J. Liu, S. Nath, A. Terzis, and C. Faloutsos (2011) ThermoCast: a cyber-physical forecasting model for datacenters. In Proceedings of the 17th ACM SIGKDD, KDD ’11, pp. 1370–1378. External Links: ISBN 978-1-4503-0813-7 Cited by: §1.
  • G. Liang, S. R. Weller, J. Zhao, F. Luo, and Z. Y. Dong (2016) The 2015 ukraine blackout: implications for false data injection attacks. IEEE Transactions on Power Systems 32 (4), pp. 3317–3318. Cited by: §3.4, §5.1.
  • Y. Liu, X. Chen, C. Liu, and D. Song (2016) Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770. Cited by: §3.2.
  • Y. Liu, P. Ning, and M. K. Reiter (2011) False data injection attacks against state estimation in electric power grids. ACM Transactions on Information and System Security (TISSEC) 14 (1), pp. 13. Cited by: §C.1, §C.1, §C.1, §C.1, §5.1.
  • J. Lu, H. Sibai, E. Fabry, and D. Forsyth (2017) No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501. Cited by: §2.
  • A. Monticelli (2012) State estimation in electric power systems: a generalized approach. Springer Science & Business Media. Cited by: §C.1, §5.1.
  • S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 2574–2582. External Links: Document, ISSN Cited by: §4.2.1.
  • S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard (2017) Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1765–1773. Cited by: §2, §4.2.1, §4.2.1.
  • X. Niu, J. Li, J. Sun, and K. Tomsovic (2019) Dynamic detection of false data injection attack in smart grid using deep learning. In 2019 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT), Vol. , pp. 1–6. External Links: Document, ISSN Cited by: §5.1.
  • M. Ozay, I. Esnaola, F. T. Y. Vural, S. R. Kulkarni, and H. V. Poor (2015) Machine learning methods for attack detection in the smart grid. IEEE transactions on neural networks and learning systems 27 (8), pp. 1773–1786. Cited by: §1, §5.1.
  • M. Pai, T. Athay, R. Podmore, and S. Virmani (1989) IEEE 39-bus system. Cited by: Figure 7.
  • N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami (2016) The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387. Cited by: §4.3.
  • A. Rozsa, E. M. Rudd, and T. E. Boult (2016) Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25–32. Cited by: §2, §4.1, §4.3, §5.3.
  • M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter (2016) Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540. Cited by: §2.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §2.
  • C. Ten, C. Liu, and G. Manimaran (2008) Vulnerability assessment of cybersecurity for scada systems. IEEE Transactions on Power Systems 23 (4), pp. 1836–1846. Cited by: §5.1.
  • Y. Tian, K. Pei, S. Jana, and B. Ray (2018) Deeptest: automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th international conference on software engineering, pp. 303–314. Cited by: §2.
  • Y. Tong (2015) Data security and privacy in smart grid. Cited by: §5.1.
  • A. J. Wood, B. F. Wollenberg, and G. B. Sheblé (2013) Power generation, operation, and control. John Wiley & Sons. Cited by: §1.
  • W. Xu, Y. Qi, and D. Evans (2016) Automatically evading classifiers. In Proceedings of the 2016 Network and Distributed Systems Symposium, pp. 21–24. Cited by: §2.
  • Q. Yang, D. An, R. Min, W. Yu, X. Yang, and W. Zhao (2017) On optimal pmu placement-based defense against data integrity attacks in smart grid. IEEE Transactions on Information Forensics and Security 12 (7), pp. 1735–1750. Cited by: §5.1.
  • X. Yuan, P. He, Q. Zhu, and X. Li (2019) Adversarial examples: attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §2, §3.2, §7.
  • Z. Yuan, Y. Lu, Z. Wang, and Y. Xue (2014) Droid-sec: deep learning in android malware detection. In ACM SIGCOMM Computer Communication Review, Vol. 44, pp. 371–372. Cited by: §1.
  • R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas (2010) MATPOWER: steady-state operations, planning, and analysis tools for power systems research and education. IEEE Transactions on Power Systems 26 (1), pp. 12–19. Cited by: §C.2, §5.2.

Appendix A Proof of Theorem 4.3


Due to the intrinsic property of the targeted system, equation (3c) is naturally satisfied, which indicates that there is always a solution to the nonhomogeneous linear equations . Accordingly, we have . Moreover, if , equation (3c) will have one unique solution, which means the measurements of the compromised sensors are constant. Constant measurements contradict the purpose of deploying CPSs. In practical scenarios, changes over time, so that and the homogeneous linear equation will have infinitely many solutions. Therefore, the attacker can always build a valid adversarial example that meets the constraints. ∎

Appendix B Dependent Matrix Example

Suppose we have and is the constraint matrix, as shown below; we will then have , and .
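Concretely, any perturbation that preserves linear equality constraints must lie in the null space of the constraint matrix. The NumPy sketch below computes such a null-space basis via the SVD; the 2×4 matrix `B` is purely illustrative, not one of the paper's system-dependent constraint matrices.

```python
import numpy as np

# Hypothetical 2x4 equality-constraint matrix B (illustrative only; the
# paper's actual constraint matrices are system-dependent).
B = np.array([[1.0, -1.0,  0.0,  0.0],
              [0.0,  1.0, -1.0, -1.0]])

# Null-space basis of B via the SVD: the rows of Vt beyond rank(B)
# span {d : B d = 0}.
_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T            # shape (4, 4 - rank)

# Any combination of the basis vectors is a perturbation d that
# preserves the equality constraints: B(x + d) = B x.
d = null_basis @ np.array([0.7, -1.3])
```

Since `B` here has rank 2, the feasible perturbations form a two-dimensional subspace, and `B @ d` is zero up to floating-point error.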

Appendix C Power System Case Study

C.1. State Estimation and FDIA

We give the mathematical description of state estimation and show how a false data injection attack (FDIA) can be launched. For clarity, we adopt the notation widely used in related research publications and explain each variable to avoid confusion.

In general, the AC power flow state estimation model can be represented as follows:

z = h(x) + e,    (9)

where x is the vector of state variables, h is a nonlinear function of x, z is the vector of measurements, and e is the vector of measurement errors. The task of state estimation is to find an estimate x̂ that best fits z in (9). In practical applications, a DC measurement model is also used to reduce the processing time, and (9) can then be represented as follows:

z = Hx + e,    (10)

where H is a matrix determined by the topology, physical parameters, and configuration of the power grid.

Typically, if a weighted least-squares estimation scheme is used, the system state variable vector can be obtained through (11):

x̂ = (Hᵀ W H)⁻¹ Hᵀ W z,    (11)

where W is a diagonal weighting matrix determined by the variances of the meter errors.

Due to possible meter instability and cyber attacks, bad measurements may be introduced into the measurement vector z. To address this, various bad-measurement detection methods have been proposed (Monticelli, 2012). One commonly used approach is to calculate the measurement residual between the raw measurements z and the derived measurements H x̂. If the ℓ2-norm ‖z − H x̂‖₂ > τ, where τ is a threshold selected according to the false alarm rate, the measurement z will be considered a bad measurement.
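The weighted least-squares estimate of (11) and the residual-based bad-data test can be sketched in a few lines of NumPy. The toy H, noise level, and threshold below are illustrative placeholders; in practice H comes from the grid topology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DC model: m = 6 measurements, n = 3 state variables. In practice H
# is determined by the grid topology; here it is random for illustration.
m, n = 6, 3
H = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
W = np.eye(m)                                  # uniform meter weights
z = H @ x_true + 0.01 * rng.standard_normal(m)

# Weighted least-squares estimate, as in (11): x_hat = (H'WH)^{-1} H'Wz
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Residual-based bad-data test: flag z when ||z - H x_hat||_2 exceeds tau.
tau = 0.5                                      # illustrative threshold
residual = np.linalg.norm(z - H @ x_hat)
is_bad = residual > tau
```

With small Gaussian noise the residual stays well below the threshold, so clean measurements pass the test.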

In 2011, Liu et al. proposed the false data injection attack (FDIA), which can bypass the detection scheme described above and pollute the result of state estimation (Liu et al., 2011). FDIA assumes that the attacker knows the topology and configuration information H of the power system. Let z_a = z + a denote the compromised measurement vector observed by the state estimation, where a is the malicious data added by the attacker. Thereafter, let x̂_a = x̂ + c denote the polluted state estimated from z_a, where c represents the estimation error brought by the attack. Liu et al. demonstrated that, as long as the attacker builds the injection vector a = Hc, the polluted measurements will not be detected by the measurement-residual scheme.


‖z_a − H x̂_a‖₂ = ‖z + a − H(x̂ + c)‖₂ = ‖z − H x̂ + (a − Hc)‖₂ = ‖z − H x̂‖₂  when a = Hc.    (12)

If the original measurements z can pass the detection, the residual ‖z − H x̂‖₂ ≤ τ. Through (12) from (Liu et al., 2011), we learn that the measurement residual remains unchanged when a = Hc. Therefore, the crafted measurements from the attacker will not be detected. ∎


Besides, (Liu et al., 2011) also provided an approach to efficiently find a vector a that meets the attack requirement. Let P = H(HᵀH)⁻¹Hᵀ and matrix B = P − I. In order to have a = Hc, a needs to be a solution of the homogeneous equation Ba = 0, as shown in (13):

a = Hc,    (13a)
P a = a,    (13b)
B a = (P − I) a = 0.    (13c)
Another question in generating a is when (13c) has a non-zero solution. Liu et al. prove that, if the attacker compromises k meters, then as long as k ≥ m − n + 1 (with m measurements and n state variables), there always exists a non-zero attack vector a. We refer the readers to (Liu et al., 2011) for the detailed proof.
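The bypass property can be checked numerically. The sketch below uses toy dimensions and the unweighted estimator (W = I) to verify that injecting a = Hc leaves the residual unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: m = 6 measurements, n = 3 states (illustrative only).
m, n = 6, 3
H = rng.standard_normal((m, n))
z = H @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

def residual(z, H):
    # Unweighted least-squares estimate (W = I) and its residual norm.
    x_hat = np.linalg.lstsq(H, z, rcond=None)[0]
    return np.linalg.norm(z - H @ x_hat)

# FDIA: pick any estimation error c and inject a = Hc; the residual,
# and hence the bad-data test, is unchanged.
c = rng.standard_normal(n)
a = H @ c
```

Because a lies in the column space of H, the estimate simply shifts by c and the residual vector z − H x̂ is identical before and after the injection.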

C.2. Experiment Implementation

We utilize the MATPOWER (Zimmerman et al., 2010) library to derive the H matrix of the system and to generate power flow measurement data that follows a Gaussian distribution. We also implement the FDIA using MATLAB. The perturbation injected to generate false data follows a uniform distribution. We build two datasets, one for the defender and one for the attacker. Each dataset contains around 65,000 records, about half of which are polluted. We label the normal records as 0 and the attack records as 1, and use one-hot encoding for the labels.

We investigate the scenario in which 15 measurements are compromised by the attacker, with a randomly generated compromised-index vector and the corresponding constraint matrix (as in (13)). We generate 5,000 false records in the test datasets.
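The dataset construction above can be sketched as follows. The record count, value ranges, and distribution parameters are toy stand-ins (the real data comes from MATPOWER power flows); only the 46-measurement width, the 15 compromised sensors, the half-polluted labeling, and the one-hot encoding mirror the setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative dataset mirroring the setup: Gaussian measurement vectors,
# a uniform perturbation injected into 15 randomly chosen sensors for the
# attack class, labels one-hot encoded. Sizes reduced from ~65,000 records.
n_meas, n_records = 46, 1000
compromised = rng.choice(n_meas, size=15, replace=False)

X = rng.normal(0.0, 1.0, size=(n_records, n_meas))
y = np.zeros(n_records, dtype=int)
attack_rows = rng.choice(n_records, size=n_records // 2, replace=False)
y[attack_rows] = 1                       # 1 = attack, 0 = normal
X[np.ix_(attack_rows, compromised)] += rng.uniform(
    -0.5, 0.5, size=(len(attack_rows), len(compromised)))

labels_onehot = np.eye(2)[y]             # one-hot labels as in the paper
```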

After that, we train two deep learning models on the respective datasets, with 75% of the records used for training and 25% for testing. We use simple fully connected neural networks as the ML models; the model structures are shown in Table 3. Both models are trained with a 0.0001 learning rate, a batch size of 512, and a cross-entropy loss function. The deep learning models are implemented using TensorFlow and the Keras library and are trained on a Windows 10 machine with an Intel i7 CPU, 16 GB of memory, and an NVIDIA GeForce 1070 graphics card. The training process takes around 1 minute for each model. In our evaluation, the overall detection accuracy is 98.4% for the defender's model and 98.2% for the attacker's model.

Layer   Defender's model   Attacker's model
input   46                 46
1       20 ReLU            20 ReLU
2       20 ReLU            20 ReLU
3       40 ReLU            20 ReLU
4       20 ReLU            2 Softmax
5       2 Softmax          —

Table 3. Model Structure - FDIA
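The training step itself is standard. As a framework-free sketch of the same setup (46 inputs, a 20-unit ReLU hidden layer, 2-way softmax, cross-entropy loss, one SGD step on a 512-record batch), the code below uses random toy data and a larger step size than the paper's 0.0001 learning rate; the actual models are trained with TensorFlow/Keras.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reduced stand-in for the detector in Table 3: 46 inputs, one 20-unit
# ReLU layer, 2-way softmax, cross-entropy loss. Layer sizes follow the
# paper; the random data and the step size here are toy values.
W1 = rng.standard_normal((46, 20)) * 0.1
b1 = np.zeros(20)
W2 = rng.standard_normal((20, 2)) * 0.1
b2 = np.zeros(2)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)                   # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)         # softmax probs

def cross_entropy(p, y):                               # y: one-hot labels
    return -np.mean(np.sum(y * np.log(p + 1e-12), axis=1))

# One SGD step on a 512-record batch, as in the training setup above.
X = rng.standard_normal((512, 46))
y = np.eye(2)[rng.integers(0, 2, 512)]

h, p = forward(X)
loss_before = cross_entropy(p, y)

g_logits = (p - y) / len(X)                            # d(loss)/d(logits)
gW2, gb2 = h.T @ g_logits, g_logits.sum(axis=0)
g_h = (g_logits @ W2.T) * (h > 0)                      # back through ReLU
gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)

lr = 0.1                                               # toy step size
W1 -= lr * gW1; b1 -= lr * gb1
W2 -= lr * gW2; b2 -= lr * gb2
loss_after = cross_entropy(forward(X)[1], y)
```

A single gradient step on the same batch lowers the cross-entropy, confirming the gradients are wired correctly.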

Appendix D Water Treatment Case Study

D.1. SWaT Measurement Constraints

We examined the user manual of the SWaT system and checked the structure of the water pipelines. We found that some FIT measurements in SWaT should always satisfy inequality constraints when the whole system is working steadily. Based on the component names described in (Goh et al., 2016), the constraints can be represented as (14), where and are the allowed measurement errors. We used twice the maximum difference of the corresponding measurements in the SWaT dataset to estimate and , which gave and .
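Checking whether a (possibly perturbed) measurement vector still satisfies such inequality constraints reduces to a linear feasibility test. The sketch below uses a generic A x ≤ b form; the specific A, b, and eps values encode an illustrative chain of flow constraints and are NOT the calibrated SWaT values from the dataset.

```python
import numpy as np

# Generic check that a (possibly adversarially perturbed) measurement
# vector x still satisfies linear inequality constraints A x <= b, the
# form used in (4). The A, b below encode an illustrative chain of flow
# constraints (each downstream reading may exceed the upstream one by at
# most eps); they are NOT the calibrated SWaT values.
def feasible(x, A, b, tol=1e-9):
    return bool(np.all(A @ x <= b + tol))

eps = 0.1
A = np.array([[-1.0,  1.0, 0.0],      # x2 <= x1 + eps
              [ 0.0, -1.0, 1.0]])     # x3 <= x2 + eps
b = np.array([eps, eps])

x_ok  = np.array([1.0, 0.95, 0.90])   # steady-state-like readings
x_bad = np.array([1.0, 1.50, 0.90])   # violates the first constraint
```

An adversarial example whose perturbed measurements fail this test would be physically implausible and easily flagged, which is why ConAML restricts its search to the feasible set.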


Based on (4), we can represent (14) as follows, where is the vector of measurements of FIT201, FIT301, FIT401, FIT501, FIT502, FIT503, and FIT504, respectively.