Towards Improving Validation, Verification, Crash Investigations, and Event Reconstruction of Flight-Critical Systems with Self-Forensics

06/10/2009 ∙ by Serguei A. Mokhov, et al. ∙ Concordia University

This paper introduces a novel concept of self-forensics, to be specified in the Forensic Lucid language, to complement the standard autonomic self-CHOP properties of self-managed systems. We argue that self-forensics, with the forensics taken out of the cybercrime domain, is applicable to "self-dissection" of the autonomous software and hardware of flight-critical systems for the purpose of verification, and to automated incident and anomaly analysis and event reconstruction by engineering teams in a variety of incident scenarios, during design and testing as well as on actual flight data.




1 Introduction

In this paper we introduce a new concept for flight-critical integrated software and hardware systems: analyzing themselves forensically as needed, as well as keeping forensic data for further automated analysis when anomalies, failures, and crashes are reported. We insist this should be a part of the protocol not only for flight systems, but for any large and/or critical self-managed system.

This proposition is a reworking of the author's related work during his PhD studies [1, 2] on the NASA spacecraft self-forensics concept, as well as work towards improving the safety and crash investigation of road vehicles by similar means.

We review some of the related work that these ideas are built upon prior to describing the requirements for self-forensics components. We describe the general requirements as well as limitations and advantages. This is a draft sketch.

1.1 Applicability Overview and Discussion

Many ideas in this work come from computer forensics and forensic computing. Computer forensics has traditionally been associated with computer crime investigations. We show the approach is useful as an aid for validation and verification during design, testing, and simulations of aircraft systems, as well as during actual in-flight operations and crash investigations. We earlier argued [1, 2] that if new technologies are built with self-forensics components, it would also help the space and automotive industries, or anything robotic and autonomous with large complex software systems, including the military.

Existing self-diagnostics, computer BIOS reports, and S.M.A.R.T. [3] reporting for hard disks, as well as for many other devices, could be a good source of such data; i.e., they could be made more forensics-friendly and provide forensic interfaces for self-forensic analysis and investigation, allowing engineering teams to extract, analyze, and reconstruct events using such data.
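To illustrate what such a forensics-friendly interface over existing self-diagnostics might look like, here is a minimal Python sketch; the `ForensicRecord` type, the `wrap_diagnostics` helper, and the attribute names are hypothetical illustrations, not part of any actual S.M.A.R.T. API.

```python
from dataclasses import dataclass
import time

@dataclass
class ForensicRecord:
    """A forensics-friendly wrapper around a raw self-diagnostic reading."""
    source: str       # e.g. a S.M.A.R.T.-like attribute name
    value: object     # the raw reading
    timestamp: float  # when the reading was captured

def wrap_diagnostics(source: str, attributes: dict) -> list:
    """Expose raw self-diagnostic attributes through a forensic interface."""
    now = time.time()
    return [ForensicRecord(f"{source}/{name}", value, now)
            for name, value in attributes.items()]

# A hypothetical S.M.A.R.T.-like attribute dump from a disk controller.
records = wrap_diagnostics("disk0", {"Reallocated_Sector_Ct": 3,
                                     "Power_On_Hours": 1742})
print(len(records), records[0].source)
```

The point of the wrapper is only that existing diagnostic sources gain uniform, timestamped records an engineering team can later extract and correlate.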

Thus, we insist that self-forensics, if included early in the design and development of a craft, helps not only during validation and verification, but also a posteriori, during the day-to-day operations of airborne and spaceborne systems.

Some example cases where self-forensics would have been helpful: analyzing anomalies in spacecraft, such as when the Mars Exploration Rovers behaved strangely [4]; hard disk recovery, such as from the shuttle Columbia [5]; or automatically as well as interactively reasoning about events, possibly speeding up the analysis of anomalies in subsystems. Another example is when the Hubble Space Telescope was switched from side A of its instruments to the redundant side B. Self-forensics units would have helped Hubble analyze the problem and self-heal later. Of course, the cost of such self-forensic units would not be negligible; however, it may be well under the cost of postponing missions, as happened, e.g., with the Hubble Space Telescope Servicing Mission 4 (SM4) and the corresponding shuttle processing delay and the costs of moving shuttles around [6, 7, 8, 9, 10, 11], among others.

Furthermore, the concept of self-forensics would be an even greater enhancement and help for flight-critical systems, the blackboxes in aircraft, etc., to aid crash investigations [12].

1.2 Self-Management Properties

The common aspects of self-managing systems, such as self-healing, self-protection, self-optimization, and the like (self-CHOP), are now fairly well understood in the literature and R&D [13, 14, 15, 16, 17, 18, 19]. We augment that list with self-forensics, which we would like to become a part of the standard specification of autonomous systems.

The self-forensics property is meant to embody and formalize all existing and future aspects of self-analysis, self-diagnostics, data collection and storage, software and hardware components (“sensors”), and decision making that have not previously been formalized as such, and to define a well-established category in industry and academia. In that view, self-forensics encompasses self-diagnostics, blackbox recording, S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) reporting [3], and the encoding of this information in the analyzable form of Forensic Lucid (or some other format, if desired, when the concept matures) for later automated analysis and even event reconstruction using the corresponding expert system tool. Optional parallel logging of the forensic events during the normal operation of the aircraft, especially during blackout periods, will further enhance the durability of the live forensic data by relaying them from the vehicle to nearby control towers or flight centers.
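The parallel-logging idea can be sketched as follows. This is a toy illustration with hypothetical names (`ForensicLogger`, and JSON standing in for the actual Forensic Lucid encoding), not the paper's implementation.

```python
import json, time

class ForensicLogger:
    """Minimal sketch: an append-only forensic event log mirrored in
    parallel to a secondary sink (standing in for a live downlink to a
    control tower or flight center)."""
    def __init__(self):
        self.onboard = []   # durable on-board copy (e.g. a blackbox)
        self.downlink = []  # live mirror relayed to the ground

    def log(self, subsystem: str, event: str, **details):
        record = {"t": time.time(), "subsystem": subsystem,
                  "event": event, "details": details}
        encoded = json.dumps(record)  # stand-in for a Forensic Lucid encoding
        self.onboard.append(encoded)
        self.downlink.append(encoded)  # parallel logging during flight
        return record

log = ForensicLogger()
log.log("engine-1", "temperature-anomaly", celsius=412)
print(len(log.onboard), len(log.downlink))
```

Mirroring every record to both sinks is what makes the live data survive a loss of the on-board copy, at the price of downlink bandwidth (discussed under limitations below).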

1.3 Forensic Lucid

Forensic Lucid [20, 21, 22] is a forensic case specification language for automatic deduction and event reconstruction of computer crime incidents. The language itself is general enough to specify any events, their properties, and duration, as well as the context-aware system model. We take Forensic Lucid out of the cybercrime context and apply it to any autonomous software or hardware system as an example of self-forensic case specification.

Forensic Lucid is based on Lucid [23, 24, 25, 26, 27] and its various dialects, which allow natural expression of various phenomena that are inherently parallel and, most importantly, context-aware; i.e., the notion of context is specified as a first-class value in Lucid [28, 29]. Lucid dialects are functional programming languages. All these properties make Forensic Lucid an interesting choice for forensic computing in self-managed systems to complement the existing self-CHOP properties.

Forensic Lucid is also significantly influenced by, and is meant to be a usable improvement of, the work of Gladyshev et al. on formal forensic analysis and event reconstruction using finite-state automata (FSA) to model incidents and reason about them [30, 31].

While Forensic Lucid itself is still being finalized as a part of the author's PhD work, along with its compiler, run-time, and development environments, work is well under way to validate its applicability to various use cases and scenarios.


Forensic Lucid is context-oriented. The basic context entities comprise an observation in Equation 1, an observation sequence in Equation 2, and the evidential statement in Equation 3. These terms are inherited from [30, 31] and represent the context of evaluation in Forensic Lucid.

o = (P, min, max)    (1)

An observation o of a property P has a duration between min and max. This was the original definition of o; the author later amended each observation with a weight factor w (a probability or credibility, to further model the evidence in accordance with the mathematical theory of evidence [32]) and an optional timestamp t, as in a forensic log for that property, giving o = (P, min, max, w, t).

os = (o_1, ..., o_n)    (2)

An observation sequence os is a chronologically ordered collection of observations; it represents a story witnessed by someone or something, or encodes a description of some evidence. All these stories (observation sequences, or logs, if you will) together represent an evidential statement about an incident.

es = {os_1, ..., os_k}    (3)

The evidential statement es is an unordered collection of observation sequences. The property P itself can encode anything of interest: an element of any data type, or even another Forensic Lucid expression, an object instance hierarchy, or an event.
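As a data-structure sketch only (the actual encoding is Forensic Lucid, not Python), the three context entities could be modeled like this; the field and variable names are illustrative.

```python
from typing import NamedTuple, Any, Optional

class Observation(NamedTuple):
    # o = (P, min, max, w, t): a property, its duration bounds,
    # a credibility weight, and an optional timestamp.
    P: Any
    min: int
    max: int
    w: float = 1.0
    t: Optional[float] = None

# An observation sequence: a chronologically ordered "story"
# witnessed by one sensor or encoded from one piece of evidence.
os1 = [Observation("pressure-nominal", 0, 120),
       Observation("pressure-drop", 1, 1, w=0.8)]
os2 = [Observation("valve-open", 0, 121)]

# An evidential statement: an unordered collection of such stories.
es = {"cabin-sensor": os1, "valve-sensor": os2}
print(len(es), es["cabin-sensor"][1].w)
```

Note how the weight w attaches credibility to an individual observation, which is what later allows ranking competing explanations.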


Having constructed the context, one needs to build a transition function ψ and its inverse Ψ⁻¹. Generic versions of both are provided by Forensic Lucid [21] based on [31, 30], but the investigation-specific one has to be built, potentially visually, by the engineering team, which can be done even before the system, such as a spacecraft, launches, if the self-forensics aspect is included in the design from the start. The specific ψ takes the evidential statement as an argument, and the generic one takes the specific one.
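The role of evaluating a hypothesis against the evidence can be hinted at with a deliberately crude stand-in: a subsequence check deciding whether a hypothesized chain of events is consistent with a witnessed story. This is an illustration only, not Forensic Lucid's actual ψ or Ψ⁻¹, and the event names are invented.

```python
def agrees(theory, story):
    """Toy stand-in for evaluating a theory against an observation
    sequence: the theory agrees if its properties appear, in order,
    within the witnessed story (a crude subsequence check)."""
    it = iter(story)
    # `prop in it` consumes the iterator, so order is enforced.
    return all(prop in it for prop in theory)

story = ["power-on", "sensor-glitch", "reset", "power-off"]
print(agrees(["sensor-glitch", "reset"], story))   # in order: agrees
print(agrees(["reset", "sensor-glitch"], story))   # out of order: does not
```

The real language additionally accounts for durations, weights, and the generic transition functions; this sketch only conveys the "does the hypothesis have an explanation within the evidence" idea.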


Forensic Lucid is context-aware: it is built upon intensional logic and the Lucid language, which have existed in the literature and mathematics for more than 30 years and originally served the purpose of program verification [25, 26].

2 Self-Forensics Application and Requirements

In this section we elaborate in some detail on the application of self-forensics and its requirements, which must be formalized and included in the design of flight systems early on.

2.1 Application

There are usually a number of instruments and sensors on board many aircraft and airborne vehicles these days, including high-end computers. All of them can also have additional functional units to observe other instruments and components, both hardware and software, for anomalies, and to log them appropriately for forensic purposes.

Such forensic specification is also useful to train new engineers on a team, flight controllers, and others involved in data analysis, to avoid potentially overlooking data and making incorrect ad-hoc decisions. In a Forensic Lucid-based expert system (which was the original purpose of Forensic Lucid in cybercrime investigations), one can accumulate a number of contextual facts from the self-forensic evidence, and the trainees can construct their theories of what happened and see if their theories agree with the evidential data. Over time (unlike in most cybercrime investigations), it can accumulate a sufficiently general contextual knowledge base of encoded facts that can be analyzed across flights and missions, globally and on the web, when multiple agencies and aircraft manufacturers collaborate.

2.2 Requirements

Here we define the general requirements scope for the autonomic self-forensics property adapted for airborne vehicles:

  • Must always be included in the design and specification.

  • Should be optional if constrained by the severe budget cuts for less critical flight components. Must not be optional for mission critical and safety-critical systems and blackboxes.

  • Must cover all the self-diagnostics events, e.g. S.M.A.R.T.-like capabilities and others.

  • Must have a formal specification (that is what makes it different from plain self-diagnostics).

  • Must have tools for automated reasoning and reporting about incident analysis matching the specification, real-time or a posteriori during investigations.

  • Context should be specified in terms of the system specification involved in the incidents; e.g., parts and the software and hardware engineering design specifications should be formally encoded (e.g., in Forensic Lucid) during design and manufacturing. These are the static forensic data. The dynamic flight forensic data are recorded in real time during the vehicle's operation.

  • Preservation of forensic evidence must be atomic, reliable, robust, and durable.

  • The forensic data must be able to include any or all related non-forensic data for analysis when needed, e.g. reconnaissance or science images for military, exploration, and scientific aircraft taken by a camera around the time of incident or measurements done around the incident by an instrument or even the entire trace of a lifetime of a system logged somewhere for automated analysis and event reconstruction.

  • Levels of forensic logging and detail should be optionally configurable, in collaboration with other design requirements, so as not to hog other activities' resources, create significant overhead, fill up downlink bandwidth, or drain power.

  • Self-forensics components should optionally be duplicated in case they themselves fail.

  • Event correlation should optionally be specified.

  • Some forensic analysis can be done automatically by the autonomous system itself (provided it has enough resources to do so), e.g. when it cannot communicate with flight controllers.
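The configurable-detail requirement above could look like the following hypothetical sketch; the level names and the `LevelledLogger` class are invented for illustration, not prescribed by the paper.

```python
# Configurable forensic detail levels (hypothetical names), so logging
# can be throttled to respect downlink bandwidth and power budgets.
LEVELS = {"off": 0, "incident-only": 1, "summary": 2, "full-trace": 3}

class LevelledLogger:
    def __init__(self, level="summary"):
        self.threshold = LEVELS[level]
        self.records = []

    def log(self, event, level):
        # Drop events finer-grained than the configured threshold.
        if LEVELS[level] <= self.threshold:
            self.records.append(event)

lg = LevelledLogger("incident-only")
lg.log("routine-telemetry", "full-trace")   # filtered out
lg.log("engine-anomaly", "incident-only")   # kept
print(lg.records)
```

Such a threshold is the simplest way to trade forensic completeness against the resource constraints listed above; richer policies (per-subsystem levels, burst buffering) would follow the same pattern.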

2.3 Limitations

The self-forensics autonomic property is very good to have for automated analysis of simulated (testing and verification) and real incidents in autonomous hardware and software systems in aircraft, but it cannot be mandated as absolutely required due to a number of limitations it creates. However, whenever the monetary and time budgets allow, it should be included in the design and development of autonomous spacecraft, military equipment, and software systems.

  • The cost of the overall aircraft systems will obviously increase.

  • If built into software, the design and development requires functional long-term storage and CPU power.

  • It will likely increase bandwidth requirements, e.g. for scientific and exploratory aircraft if the science data are duplicated in the forensic stream; if the forensic data are mirrored into the scientific stream, more than twice the bandwidth and storage are used.

  • There is an overall overhead if forensic data are collected continuously; they can be offloaded periodically, along with the usual science and control data, down to the flight control towers or centers.

  • The self-forensics logging and analyzing software should ideally reside in ROM or similar durable flash-type memory, but should allow firmware and software upgrades.

  • We do not tackle other autonomic requirements of the system assuming their complete coverage and presence in the system from the earlier developments and literature, such as self-healing, protection, etc.

  • The transition function ψ has to be modeled by the engineering team throughout the design phase and encoded in Forensic Lucid. Luckily, a data-flow graph (DFG) [33] interactive development environment (IDE), akin to a CAD application, is to be available.

2.4 Brief Example

  • self-forensic sensors observe the subsystems and instruments of an aircraft in flight

  • every engineering or scientific event is logged using Forensic Lucid

  • each forensic sensor may observe several subsystems or instruments

  • each sensor “composes” a “story”, in the form of a Forensic Lucid observation sequence, about a subsystem or instrument

  • a collection of “stories” from multiple sensors, properly encoded, represents the evidential statement, either during verification or an actual flight

  • if an incident (simulated or real) happens, engineers define theories about what happened. The theories are encoded as observation sequences and evaluated against the collected forensic evidence. The evaluating software system (e.g., in the case of the author's PhD work, the General Intensional Programming System, or GIPSY) can then automatically verify a theory against the context of the evidential statement; if the theory agrees with the evidence, meaning the theory has an explanation within the given evidence (and the amount of evidence can be far too large for “eyeballing” by humans), then the theory is likely a possible explanation of what has happened. It is possible to have multiple explanations and multiple theories agreeing with the evidence. In the latter case, usually the “longer” theory (in the number of events and observations involved) is preferred, or the one with the higher cumulative weight/credibility w. Given the previously collected and accumulated knowledge base of Forensic Lucid facts, some of the analysis and event reconstruction can be done automatically.
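The selection among multiple agreeing theories described above can be sketched as follows; the data layout and names are illustrative, not GIPSY's actual representation.

```python
def preferred(theories):
    """Among theories that all agree with the evidence, prefer the one
    with the most observations; break ties by cumulative weight.
    (A simplification of the selection criterion described above.)"""
    return max(theories, key=lambda t: (len(t["obs"]), sum(t["weights"])))

# Two hypothetical theories that both survived evaluation against the
# evidential statement.
agreeing = [
    {"name": "sensor-fault", "obs": ["glitch"], "weights": [0.9]},
    {"name": "wiring-short", "obs": ["glitch", "reset", "brownout"],
     "weights": [0.6, 0.7, 0.5]},
]
print(preferred(agreeing)["name"])
```

Here the three-observation theory wins despite its lower per-observation credibility, mirroring the "longer theory is preferred" heuristic in the text.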

3 Conclusion and Future Work

We introduced a novel concept of self-forensics with Forensic Lucid to aid validation and verification of flight-critical systems during the design, manufacturing, and testing, as well as its continuous use during the actual flight and operation of airborne vehicles.

We drafted some of the requirements for such a property to be included into the design, as well as its most prominent limitations today.

We argued that such a property of self-forensics, formally grouping self-monitoring, self-analyzing, and self-diagnosing facilities along with a decision engine and a common forensic logging data format, can standardize the design and development of not only airborne vehicles, but also road vehicles, spacecraft, and even marine and submarine vehicles, improving the safety and autonomicity of the ever more complex software and hardware systems in such vehicles, and furthering their analysis when incidents happen.

We outlined some of the background work, including the forensic case specification language, Forensic Lucid, which we adapted from the cybercrime investigations domain to aid the validation and verification of aircraft subsystem designs, by proposing to log forensic data in the Forensic Lucid context format, available for manual/interactive analysis on the ground as well as in real time by a corresponding expert system.

The generality of the approach manifests itself not only in the design, manufacturing, development, and testing of flight components and in the system's normal operation once deployed, but also in training engineering and flight controller personnel in investigation techniques with the help of a Forensic Lucid-based expert system, and in a common format for data sharing and collaboration between various agencies, such as NASA, and flight hardware and software manufacturers, to improve the overall safety of present- and future-day vehicles.

Some of the author’s future work will include the following, which is probably less important for the RFI, but is nonetheless planned to be carried out:

  • Amend ASSL [34, 35, 36, 37] to handle the self-forensic property.

  • Implement the notion of self-forensics in the GIPSY [38, 39, 40, 41] and DMARF [42, 43, 44, 45] systems the author is closely working on.

  • Finalize implementation of the Forensic Lucid compiler and the development and run-time environment.

  • Implement large realistic cases encoded in Forensic Lucid to test and validate various aspects of correctness, performance, and usability.


  • [1] Serguei A. Mokhov et al. Self-forensics for autonomous systems. Submitted for publication at IEEE Com. Mag., 2009.
  • [2] Serguei A. Mokhov et al. The role of self-forensics in vehicle crash investigations and event reconstruction. Submitted for publication at Canadian Multidisciplinary Road Safety Conference (CMRSC), 2009.
  • [3] Wikipedia. S.M.A.R.T. — Wikipedia, the free encyclopedia. [Online; accessed 9-February-2009], 2009.
  • [4] NASA. Mars rover team diagnosing unexpected behavior: Mars exploration rover mission status report. [online], January 2009.
  • [5] Brian Fonseca. Shuttle Columbia’s hard drive data recovered from crash site. [online], May 2008.
  • [6] NASA. Hubble status report #3: Hubble science operations deferred while engineers examine new issues. [online], October 2008.
  • [7] NASA. Hubble status report #4. [online], October 2008.
  • [8] Katherine Trinidad, Kyle Herring, Susan Hendrix, and Ed Campion NASA. NASA sets target shuttle launch date for Hubble servicing mission. [online], December 2008.
  • [9] Don Savage, J.D. Harrington, John Yembrick, Michael Curie, and NASA. NASA to discuss Hubble anomaly and servicing mission launch delay. [online], September 2008.
  • [10] NASA. Hubble status report. [online], December 2008.
  • [11] NASA. Hubble status report. [online], December 2008.
  • [12] CNN. ‘Catastrophic failure’ caused North Sea copter crash. [online], April 2009.
  • [13] IBM Corporation. An architectural blueprint for autonomic computing. Technical report, IBM Corporation, 2006.
  • [14] Jeffrey O. Kephart and David M. Chess. The vision of autonomic computing. IEEE Computer, 36(1):41–50, 2003.
  • [15] Walt Truszkowski, Mike Hinchey, James Rash, and Christopher Rouff. NASA’s swarm missions: The challenge of building autonomous software. IT Professional, 6(5):47–52, 2004.
  • [16] Michael G. Hinchey, James L. Rash, Walter Truszkowski, Christopher Rouff, and Roy Sterritt. Autonomous and autonomic swarms. In Software Engineering Research and Practice, pages 36–44. CSREA Press, 2005.
  • [17] Emil Vassev, Michael G. Hinchey, and Joey Paquet. Towards an ASSL specification model for NASA swarm-based exploration missions. In Proceedings of the 23rd Annual ACM Symposium on Applied Computing (SAC 2008) - AC Track, pages 1652–1657. ACM, 2008.
  • [18] M. Parashar and S. Hariri, editors. Autonomic Computing: Concepts, Infrastructure and Applications. CRC Press, December 2006.
  • [19] R. Murch. Autonomic Computing: On Demand Series. IBM Press, Prentice Hall, 2004.
  • [20] Serguei A. Mokhov, Joey Paquet, and Mourad Debbabi. Formally specifying operational semantics and language constructs of Forensic Lucid. In Oliver Göbel, Sandra Frings, Detlef Günther, Jens Nedon, and Dirk Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08), pages 197–216, Mannheim, Germany, September 2008. GI. LNI140.
  • [21] Serguei A. Mokhov and Joey Paquet. Formally specifying and proving operational aspects of Forensic Lucid in Isabelle. Technical Report 2008-1-Ait Mohamed, Department of Electrical and Computer Engineering, Concordia University, August 2008. In Theorem Proving in Higher Order Logics (TPHOLs2008): Emerging Trends Proceedings.
  • [22] Serguei A. Mokhov. Encoding forensic multimedia evidence from MARF applications as Forensic Lucid expressions. In Proceedings of CISSE’08, University of Bridgeport, CT, USA, December 2008. Springer. To appear.
  • [23] William Wadge and Edward Ashcroft. Lucid, the Dataflow Programming Language. Academic Press, London, 1985.
  • [24] Edward Ashcroft, Anthony Faustini, Raganswamy Jagannathan, and William Wadge. Multidimensional, Declarative Programming. Oxford University Press, London, 1995.
  • [25] Edward A. Ashcroft and William W. Wadge. Lucid - a formal system for writing and proving programs. SIAM J. Comput., 5(3), 1976.
  • [26] Edward A. Ashcroft and William W. Wadge. Erratum: Lucid - a formal system for writing and proving programs. SIAM J. Comput., 6(1):200, 1977.
  • [27] John Plaice, Blanca Mancilla, Gabriel Ditu, and William W. Wadge. Sequential demand-driven evaluation of eager TransLucid. In Proceedings of the 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC), pages 1266–1271, Turku, Finland, July 2008. IEEE Computer Society.
  • [28] Joey Paquet, Serguei A. Mokhov, and Xin Tong. Design and implementation of context calculus in the GIPSY environment. In Proceedings of the 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC), pages 1278–1283, Turku, Finland, July 2008. IEEE Computer Society.
  • [29] Kaiyu Wan. Lucx: Lucid Enriched with Context. PhD thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2006.
  • [30] Pavel Gladyshev. Finite state machine analysis of a blackmail investigation. International Journal of Digital Evidence, 4(1), 2005.
  • [31] Pavel Gladyshev and Ahmed Patel. Finite state machine approach to digital event reconstruction. Digital Investigation Journal, 2(1), 2004.
  • [32] G. Shafer. The Mathematical Theory of Evidence. Princeton University Press, 1976.
  • [33] Yi Min Ding. Bi-directional translation between data-flow graphs and Lucid programs in the GIPSY environment. Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2004.
  • [34] Emil Vassev and Joey Paquet. ASSL – Autonomic System Specification Language. In Proceedings of the 31st Annual IEEE / NASA Software Engineering Workshop (SEW-31), pages 300–309, Baltimore, MD, USA, March 2007. NASA/IEEE, IEEE Computer Society.
  • [35] Emil Vassev and Joey Paquet. Towards an autonomic element architecture for ASSL. In Proceedings of the 29th IEEE International Conference on Software Engineering / Software Engineering for Adaptive and Self-managing Systems (ICSE 2007 SEAMS), page 4, Minneapolis, MN, USA, May 2007. IEEE.
  • [36] Emil Vassev, Heng Kuang, Olga Ormandjieva, and Joey Paquet. Reactive, distributed and autonomic computing aspects of AS-TRM. In Proceedings of the 1st International Conference on Software and Data Technologies - ICSOFT’06, pages 196–202, 2006.
  • [37] Emil I. Vassev. Towards a Framework for Specification and Code Generation of Autonomic Systems. PhD thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2008.
  • [38] Emil Vassev and Joey Paquet. Autonomic GIPSY with ASSL. Unpublished, 2007.
  • [39] Joey Paquet and Ai Hua Wu. GIPSY – a platform for the investigation on intensional programming languages. In Proceedings of the 2005 International Conference on Programming Languages and Compilers (PLC 2005), pages 8–14, Las Vegas, USA, June 2005. CSREA Press.
  • [40] Joey Paquet. A multi-tier architecture for the distributed eductive execution of hybrid intensional programs. In Proceedings of 2nd IEEE Workshop in Software Engineering of Context Aware Systems (SECASA’09). IEEE Computer Society, 2009. To appear.
  • [41] Serguei A. Mokhov and Joey Paquet. Using the General Intensional Programming System (GIPSY) for evaluation of higher-order intensional logic (HOIL) expressions. Submitted for publication at LFTMP’09, 2009.
  • [42] Serguei A. Mokhov and Rajagopalan Jayakumar. Distributed modular audio recognition framework (DMARF) and its applications over web services. In Proceedings of TeNe’08, University of Bridgeport, CT, USA, December 2008. Springer. To appear.
  • [43] Emil Vassev and Serguei A. Mokhov. Towards autonomic specification of Distributed MARF with ASSL: Self-healing. Submitted for publication to Middleware’09, 2009.
  • [44] Serguei A. Mokhov and Emil Vassev. Autonomic specification of self-protection for Distributed MARF with ASSL. In Proceedings of C3S2E’09, pages 175–183, New York, NY, USA, May 2009. ACM.
  • [45] Emil Vassev and Serguei A. Mokhov. Self-optimization property in autonomic specification of Distributed MARF with ASSL. In Proceedings of ICSOFT’09, Sofia, Bulgaria, July 2009. INSTICC. To appear.