Code Integrity Attestation for PLCs using Black Box Neural Network Predictions

06/15/2021
by Yuqi Chen, et al.

Cyber-physical systems (CPSs) are widespread in critical domains, and significant damage can be caused if an attacker is able to modify the code of their programmable logic controllers (PLCs). Unfortunately, traditional techniques for attesting code integrity (i.e. verifying that it has not been modified) rely on firmware access or roots-of-trust, neither of which proprietary or legacy PLCs are likely to provide. In this paper, we propose a practical code integrity checking solution based on privacy-preserving black box models that instead attest the input/output behaviour of PLC programs. Using faithful offline copies of the PLC programs, we identify their most important inputs through an information flow analysis, execute them on multiple input combinations to collect data, then train neural networks able to predict PLC outputs (i.e. actuator commands) from their inputs. By exploiting the black box nature of the model, our solution maintains the privacy of the original PLC code and does not assume that attackers are unaware of its presence. The trust instead comes from the fact that it is extremely hard to attack the PLC code and the neural networks at the same time and with consistent outcomes. We evaluated our approach on a modern six-stage water treatment plant testbed, finding that it could predict actuator states from PLC inputs with near-100% accuracy, and thus could detect all 120 effective code mutations that we subjected the PLCs to. Finally, we found that it is not practically possible to simultaneously modify the PLC code and apply discreet adversarial noise to our attesters in a way that leads to consistent (mis-)predictions.
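The attestation idea above can be sketched as follows: a black-box model is trained offline on (PLC input, actuator command) pairs, and at runtime the PLC is flagged whenever observed actuator states diverge from the model's predictions. This is only an illustrative sketch, not the authors' implementation: the paper trains neural networks, whereas here a simple majority-vote lookup table stands in as the predictor so the example stays self-contained, and the threshold and data values are hypothetical.

```python
from collections import Counter

def train_predictor(samples):
    """samples: list of (inputs_tuple, actuator_state) pairs collected by
    running a faithful offline copy of the PLC program on input combinations."""
    table = {}
    for inputs, state in samples:
        table.setdefault(inputs, Counter())[state] += 1
    # Majority vote per observed input combination (stand-in for a trained NN).
    return {k: votes.most_common(1)[0][0] for k, votes in table.items()}

def attest(model, observations, threshold=0.05):
    """Return True (code looks intact) if the mismatch rate between predicted
    and observed actuator states over a window stays below the threshold."""
    mismatches = sum(
        1 for inputs, state in observations
        if inputs in model and model[inputs] != state
    )
    return mismatches / len(observations) <= threshold

# Hypothetical offline training data: sensor inputs -> valve command.
data = [((1, 0), "OPEN"), ((1, 0), "OPEN"), ((0, 1), "CLOSED")]
model = train_predictor(data)

print(attest(model, [((1, 0), "OPEN"), ((0, 1), "CLOSED")]))    # benign run -> True
print(attest(model, [((1, 0), "CLOSED"), ((0, 1), "CLOSED")]))  # mutated code -> False
```

A mutated PLC program changes the input/output mapping, so its observed actuator commands disagree with the model's predictions and the attestation fails; consistently fooling both the PLC and the model at once is what the paper argues is impractical.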

Related research

12/20/2018
Control Behavior Integrity for Distributed Cyber-Physical Systems
Cyber-physical control systems, such as industrial control systems (ICS)...

10/07/2019
Approximation-Refinement Testing of Compute-Intensive Cyber-Physical Models: An Approach Based on System Identification
Black-box testing has been extensively applied to test models of Cyber-P...

08/09/2018
VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting
Deep learning has become popular, and numerous cloud-based services are ...

03/24/2023
Effective black box adversarial attack with handcrafted kernels
We propose a new, simple framework for crafting adversarial examples for...

01/03/2018
Learning from Mutants: Using Code Mutation to Learn and Monitor Invariants of a Cyber-Physical System
Cyber-physical systems (CPS) consist of sensors, actuators, and controll...

05/09/2022
Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples
With the widespread use of deep neural networks (DNNs) in many areas, mo...

08/26/2020
Hybrid Deep Neural Networks to Infer State Models of Black-Box Systems
Inferring behavior model of a running software system is quite useful fo...
