Toward Multimodal Interaction in Scalable Visual Digital Evidence Visualization Using Computer Vision Techniques and ISS

08/01/2018
by   Serguei A. Mokhov, et al.
Concordia University

Visualization requirements in Forensic Lucid concern different levels of case knowledge abstraction, representation, and aggregation, as well as the operational aspects that form the final long-term goal of this proposal. They encompass everything from the fine-grained representation of hierarchical contexts to Forensic Lucid programs, to the documented evidence and its management, its linkage to programs, their evaluation, and the management of GIPSY software networks. This includes the ability to switch arbitrarily between those views, combined with usable multimodal interaction. The purpose is to determine how the findings can be applied to Forensic Lucid and investigation case management. It is also natural to want convenient and usable evidence visualization, its semantic linkage, and the reasoning machinery behind it. Thus, we propose scalable management, visualization, and evaluation of digital evidence using a modified version of the interactive 3D documentary system, the Illimitable Space System (ISS), to represent, semantically link, and provide a usable interface to digital investigators that is navigable via different multimodal interaction techniques using computer vision, including gestures, as well as eye gaze and audio.


0.1 Introduction

We propose a scalable management, visualization, and evaluation of digital evidence using the modified interactive 3D documentary component of the Illimitable Space System (ISS) to represent, semantically link, and provide a usable interface to digital investigators.

Cyberforensic analysis is the phase of a cybercrime investigation in which investigators strive to produce credible inferences based on evidential information. This information usually originates in the phases that precede analysis, such as evidence acquisition and encoding. It can also come from an eclectic set of resources, involving but not limited to computers, that the investigators deem fit as evidence [1, 9].

Lucid programs are data-flow programs that can be visually composed and illustrated as data-flow graphs. Forensic Lucid is one such Lucid dialect that enables investigators to specify and reason about cyberforensic cases. It represents the context of evaluation by encoding the evidence along with witness stories and evidential statements, and by modeling the crime scene to cross-validate claims against the model and perform event reconstruction, potentially over large swaths of digital evidence [1, 9].
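To make the encoding concrete, the evidential context of a case can be sketched as nested data: an observation is a triple of a property with duration bounds, observations form observation sequences (witness stories), and the stories together form the evidential statement. The sketch below is illustrative Python under our own naming assumptions, not actual Forensic Lucid syntax:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """(P, min, opt): property P holds for at least `min` steps,
    and optionally for up to `min + opt` steps."""
    prop: str
    min: int = 1
    opt: int = 0

# A witness story is an ordered observation sequence;
# the evidential statement collects all stories about the incident.
evidential_statement = {
    "printer_manager": (
        Observation("queue_empty"),
        Observation("job_deleted", min=1, opt=0),
    ),
    "alice": (
        Observation("alice_did_not_print", min=2, opt=3),
    ),
}

for witness, story in evidential_statement.items():
    total_min = sum(o.min for o in story)
    print(witness, "claims at least", total_min, "step(s) of observations")
```

Hierarchical nesting of this kind (statement → sequence → observation → property) is what the DFG visualization discussed later is meant to expose interactively.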

In 2004, Gladyshev [10] introduced the first formal approach to cybercrime investigation. The approach describes the digital system as a finite state machine and uses finite state automata for event reconstruction. However, it has a steep learning curve and is quite complex for investigators without a formal background in computer science.
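The core of the FSA-based event reconstruction can be sketched as exhaustive back-tracing: enumerate bounded runs of the machine and keep those consistent with the observed final state. The toy machine and state names below are our own illustration, not Gladyshev's actual case:

```python
from itertools import product

# Toy FSM: a partial transition table over a tiny printer-queue model.
EVENTS = ("add_job", "take_job", "delete_job")

def step(state, event):
    table = {
        ("idle", "add_job"): "printing",
        ("printing", "take_job"): "idle",
        ("printing", "delete_job"): "deleted",
        ("deleted", "add_job"): "printing",
    }
    return table.get((state, event))  # None = transition not allowed

def reconstruct(final_state, max_len=3):
    """Enumerate all event sequences (up to max_len) from 'idle'
    whose run ends in the observed final state."""
    explanations = []
    for n in range(1, max_len + 1):
        for events in product(EVENTS, repeat=n):
            state = "idle"
            for e in events:
                state = step(state, e)
                if state is None:
                    break
            if state == final_state:
                explanations.append(events)
    return explanations

# Which histories explain finding the queue in state 'deleted'?
print(reconstruct("deleted"))  # [('add_job', 'delete_job')]
```

The combinatorial blow-up of this enumeration over realistic systems is precisely the complexity that Forensic Lucid aims to tame with context-oriented specification.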

Forensic Lucid is designed to explicitly address these drawbacks and aims to be usable, expressive, sound and complete.

One of the many goals of Forensic Lucid is usability, including scalable visualization of the enormous data under investigation. Recently, there have been significant improvements in modern 2D and 3D virtual-reality environments, which can be easily navigated via a variety of multimodal interaction techniques. Forensic Lucid aims to up the ante by leveraging such modern multimodal techniques in virtual and augmented reality, letting investigators navigate a case seamlessly instead of only writing a Forensic Lucid program. A combination of gestures, audio commands, eye gaze, and hardware controllers are candidate modalities for providing navigation and interaction to the investigators. This will enable us to extend Lucid DFG programming to Forensic Lucid case modeling and specification [1, 9].

The purpose here is to determine the applicability of these findings to Forensic Lucid and investigation case management. It is also natural to want convenient and usable evidence visualization, its semantic linkage, and the appropriate hardware for it. The visualization requirements in the context of Forensic Lucid revolve around the different levels of case knowledge abstraction, representation, and aggregation, and the operational aspects that form the final long-term goal of this proposal. They encompass everything from the fine-grained representation of hierarchical contexts to Forensic Lucid programs, to the documented evidence and its management. They also include its linkage to programs, their evaluation, and the management of GIPSY software networks, along with the ability to switch arbitrarily between those views combined with usable multimodal interaction [1, 9].

0.2 Related Work

In the context of data-flow programming languages, there is quite a body of research and proposals revolving around graph-based visualizations. In particular, Faustini proved that any Indexical Lucid program can be represented as a DFG [11].

In 1995, Jagannathan defined one of the first graph-based visualization proposals for Lucid programs, with different graphical intensional and extensional models for GLU programming [12]. In 1999, Paquet's doctoral work on multidimensional intensional programs extended it [13], followed by the visual parallel programming work of Stankovic, Orgun, et al. [14].

In 2004, Ding provided a practical implementation of Paquet's foundational work within GIPSY in the form of 2D DFGs [13, 15]. Ding implemented automatic bidirectional translation of intensional programs between their textual and graphical representations by employing Graphviz's lefty GUI and DOT languages [9, 16, 17].
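The textual half of such a translation is mechanical: a DFG is a labeled directed graph, and its DOT rendering is a straightforward serialization. A minimal hypothetical sketch (not Ding's actual DFGGenerator; node names are ours) emitting DOT for a small DFG:

```python
def dfg_to_dot(nodes, edges, name="dfg"):
    """Serialize a data-flow graph to Graphviz DOT.
    nodes: {node_id: label}, edges: [(src_id, dst_id), ...]"""
    lines = [f"digraph {name} {{"]
    for nid, label in nodes.items():
        lines.append(f'  {nid} [label="{label}", shape=box];')
    for src, dst in edges:
        lines.append(f"  {src} -> {dst};")
    lines.append("}")
    return "\n".join(lines)

# A tiny DFG for the Lucid expression: result = x fby (x + 1)
nodes = {"x": "x", "one": "1", "plus": "+", "fby": "fby", "result": "result"}
edges = [("x", "plus"), ("one", "plus"), ("x", "fby"),
         ("plus", "fby"), ("fby", "result")]
print(dfg_to_dot(nodes, edges))
```

The reverse direction (DOT back to an intensional program) is the harder half, since the graph must be parsed back into well-formed Lucid expressions.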

Mokhov proposed the idea of one such "3D editor" within RIPE [18] to visualize and control communication patterns and load balancing in GIPSY. The idea is to render graphs in 3D space, allowing users to redistribute demands visually when the workload among the workers becomes imbalanced. It can be thought of as a virtual 3D remote control accompanied by a miniature expert system that can trigger the planning, caching, and load-balancing algorithms to learn and perform efficiently every time a related GIPSY application is run.

Similarly, several authors have presented work on visualizing configuration, formal systems, and load balancing with corresponding graph systems [19, 20, 21, 22, 23].

These works defined key concepts consistent with GIPSY's [9] visual mechanisms, especially the General Manager Tier (GMT) [24]. Rabah provided the initial configuration management and proof-of-concept visualization of GIPSY nodes and tiers via the aforementioned GMT [25].

In 2012, Tao et al. proposed another relevant work on the visual representation of event sequences, reasoning, and visualization of EHR data [26]. Wang et al. put forward a temporal search algorithm for event visualization of personal histories [27]. Monroe et al. noted the challenges of specifying intervals and absences in temporal queries and approached them with a graphical language [28]; this could be of particular use for no-observations [1] in Forensic Lucid cases. A novel HCI concept of documentary knowledge visual representation with gesture- and speech-based interaction in the Illimitable Space System (ISS) was put forward by Song [29] in 2012. A multimodal case-management interaction system, the Vispol Tangible Interface: An Interactive Scenario Visualization (http://www.youtube.com/watch?v=_2DywsIPNDQ), was proposed for the German police.

Building upon the above-mentioned works, we propose to illustrate nested evidence, the crime scene, and the reconstructed event flow after re-evaluation in the form of a 2D or 3D DFG. The direct impact is to aid forensic investigators by providing scalable visualization and management of evidence modeling and encoding in Forensic Lucid [1, 30, 31, 32], and subsequently its evaluation by GIPSY [9].

0.2.1 Conceptual Visualization Design

Drawing on the related research on visualization of Lucid programs, Paquet produced a conceptual example of a 2D DFG corresponding to a simple Lucid program [13], which Ding subsequently rendered in 2004 [15] within the GIPSY environment [9].

Figure 2 shows the conceptual model of hierarchical nesting of the evidential observation sequences and their individual observations (consisting of the properties being observed; details are discussed in the referenced related works and in [1, Chapter 7]). These 2D conceptual visualizations are proposed to be renderable at least in 2D, or in 3D via an interactive interface, to allow modeling complex crime scenes and multidimensional evidence on demand. The end result is envisioned to look like expanding or "cutting out" nodes or complex-type results, as exemplified in Figure 1 (cutout image credit: the interior of Europa, NASA, via Wikipedia, http://en.wikipedia.org/wiki/File:PIA01130_Interior_of_Europa.jpg) [9].

Figure 1: Modified conceptual example of a 2D DFG with 3D elements
Figure 2: Conceptual example of linked 3D observation nodes

0.3 Multimodal Visual Encoding of Forensic Lucid-based Evidence

Data visualization, not only in the context of cybercrime investigation with Forensic Lucid but in almost every other domain as well, provides numerous advantages for drawing inferences, spotting anomalies, and recognizing patterns. In the case of Forensic Lucid and cybercrime investigation specifically, it provides additional usability [33] enhancements that help investigators illustrate and define semantic links among the related evidence.

Furthermore, the need to visualize forensic cases, digital evidence, and related specification components revolves around providing usability enhancements to aid the investigators. Additionally, placing the program (specification) in three dimensions, especially in the modern and affordable augmented- and virtual-reality (AR/VR) spaces, helps structure the program and arrange the case well in a virtual environment, with the digital evidence enclosed within 3D spheres. The result is navigable in depth to arbitrary levels of detail via one of the multimodal interactions: clicking, issuing voice commands, gazing, or gesturing [9].

In the case of event reconstruction in particular, illustration greatly aids comprehension of the operational semantics and demand-driven models, given their depth and complexity. Ding's work provides navigational capabilities from a graph to its subgraphs by expanding complex nodes to their definitions, such as whenever (wvr) and advances upon (upon), their reverse operators, forensic operators, and others [9] found in [1, Chapter 7].

0.3.1 Augmented System Requirements

In order to realize the envisioned DFG visualization of Forensic Lucid programs and their evaluation by GIPSY, some immediate considerations are discussed below [9]:

  • Hierarchical evidential statements with deeply nested contexts should be visualized [9].

  • Intensional-imperative hybrid nodes need to be placed in DFGs, combining fragments of Lucid and Java programs [9]. Previous GIPSY R&D did not address augmenting the DFGAnalyzer and DFGGenerator from Ding's work to support hybrid GIPSY programs. One could add an "unexpandable" imperative DFG node to the graph, but depth-wise it would not be enough for users to simply click their way through. Thus, to make such nodes usable and expandable, recent enhancements in Graphviz and GIPSY can be leveraged to generate Forensic Lucid code from the DFG and vice versa [9].

  • Rabah’s work on visualizing load balancing and communication control patterns for tasks in Euclidean space may as well be leveraged via the GGMT [25].

  • The ability to switch among views, such as DFG, evidence, and control, is required as well.

  • In flat-screen, touch-screen, augmented-reality, or projected environments, 3D DFG interactions are to support click and touch, voice-based call-outs, and gestures to link or assemble some of the evidence [4].

  • In the virtual-reality environment, VR controllers and gaze-controlled interactions are essential, in addition to voice and gesture recognition support [34].

0.3.2 Survey of the Visualization Languages and Tools

This work focuses on one of the goals of this research: to find the optimal technique that has formal specifications, is feasible to implement using currently available HCI technologies, and is usable [9].

Graphviz

Ding's [15] basic bidirectional translation between GIPL and DFGs within GIPSY is already a part of the project and exists for GIPL and Indexical Lucid, the two antecedent Lucid dialects. Moreover, modern Graphviz now supports integration with Eclipse [35]; thus GIPSY's IDE, RIPE (Run-time Interactive Programming Environment), can be an Eclipse-based plug-in as well [9].

PureData

The PureData [36] language by Puckette, along with its commercial derivatives (Jitter/Max/MSP [37]), applies DFG-like programming by graphically connecting inlets and outlets of any data type in the form of so-called "patches". These inlets may have external implementations and sub-graphs in procedural languages. Originally, Puckette's work used signal processing of electronic music and video to produce interactive artistic and performative processes, and it has since been extended beyond that domain. The notion of external plug-ins in PureData allows deep visualization of media in OpenGL, which in turn enhances the overall process. PureData also draws influence from Lucid as a data-flow language [9].

BPEL

The OpenESB IDE provides visual design capabilities to illustrate or create a BPEL (Business Process Execution Language) process, along with composite applications, in the context of service-oriented architectures and web services [38, 39, 40]. These BPEL specifications and their composite applications are translatable to executable web-service composition code in Java. Not only does BPEL provide capabilities for designing structured, parallel, asynchronous, and sequential process flows and fault handling, but, more importantly, BPEL notations have a backing formalism modeled on Petri nets [9].

Illimitable Space System (ISS)

The original ISS research-creation practice focused primarily on interactive multimodal installations and productions in collaboration with local artistic troupes. It helped mobilize traditional artists and make them aware of new technology so they could express themselves in a new artistic form. It started off as a new-HCI-in-the-theatre concept with interactive documentaries and moved to performing arts and alternate realities. Various versions of the Illimitable Space System exist for motion capture, signal processing, computer vision, projection mapping including LED control, and real-time reaction and control for the stage and beyond [29, 34, 41, 42, 43].

ISS and its open-source backend core OpenISS [44] rely on computer vision and machine learning techniques provided by OpenCV and MARF; motion-capture libraries for Kinect depth cameras and others; sound control; input from voice and music; and augmented- and virtual-reality components to co-create augmented performances, installations, or films, or to serve as an education tool for artists [43] or children.

“Projected Reality”

We explore the idea of scalable management, visualization, and evaluation of digital evidence in the context of cybercrime investigation, with extensions to the interactive 3D documentary subsystem of the Illimitable Space System (ISSv1) [29]. These modifications would enable investigators to represent and create semantic links among digital evidence within an easy-to-use interface powered by multimodal interactions, including but not limited to eye gaze, gestures, and navigational hardware. That work may scale, when properly re-engineered and enhanced, to act as an interactive "3D window" into the evidential knowledge base, grouped into semantically linked "bubbles" visually representing the documented evidence. By moving such a contextual window, or rather navigating within the theoretically illimitable space, an investigator can sort and reorganize the knowledge items as needed prior to launching the reasoning computation. The interaction design would be of particular use for opening up the documented case knowledge, linking the relevant witness accounts, and grouping the related knowledge together. This is a proposed solution to the problem of visualizing large volumes of "scrollable" evidence that need not all be visualized at once, but can behave like a snapshot of a storage depot [1].

As an example, consider stills from the actual ISSv1 installation hosting multimedia data (documentary videos) that users can call out by voice or gesture to examine, as in Figure 3 (http://vimeo.com/51329588). We propose to reorganize the latter into more structured spaces so that investigators can create semantic links to group the relevant evidence together for subsequent evaluation by the distributed GIPSY backend engine [1]. As exemplified in ISSv1, the interactions are projected on a wall/screen or appear on a monitor. The viewable scene/window is sequentially loaded and unloaded from the viewing device (PC, laptop, or VR headset) to prevent memory overload. Access is on demand by the device; the design is similar to RAM swapping by an operating system to support virtual memory, in this case over a distributed storage with evidential data. Gesture-based interactions using Kinect and similar depth cameras with OpenCV are the enabling HCI aspects that let the investigator link the evidential items in 3D space. The gesture-based interactions are optionally assisted by voice-based controls, with speech processing of the corresponding commands to view the evidence in detail. The modern availability of VR headsets and VR phone applications makes this process even more accessible, although the storage, space, and bandwidth requirements impose tighter constraints to begin with. In our general approach, we propose an architecture enabling an interactive visual window into digital evidence processing as an investigator-aid tool. It is thus the preferred method of interaction during the analysis and human-insight phases, before or after distributed processing of the evidence and event-reconstruction algorithms.
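The swapping behavior described above can be sketched as a bounded least-recently-used cache of evidence "bubbles" over a backing depot. This is a minimal illustrative sketch under our own assumptions (class name, capacity, and depot contents are hypothetical, not ISS code):

```python
from collections import OrderedDict

class EvidenceWindow:
    """Keep at most `capacity` evidence bubbles in memory;
    fetch on demand and evict the least recently viewed."""
    def __init__(self, store, capacity=2):
        self.store = store           # backing depot: id -> payload
        self.capacity = capacity
        self.loaded = OrderedDict()  # in-memory bubbles, LRU order

    def view(self, bubble_id):
        if bubble_id in self.loaded:
            self.loaded.move_to_end(bubble_id)                # recently used
        else:
            if len(self.loaded) >= self.capacity:
                self.loaded.popitem(last=False)               # swap out LRU
            self.loaded[bubble_id] = self.store[bubble_id]    # swap in
        return self.loaded[bubble_id]

depot = {"e1": "printer log", "e2": "witness video", "e3": "disk image"}
win = EvidenceWindow(depot, capacity=2)
win.view("e1"); win.view("e2"); win.view("e3")  # e1 gets evicted
print(list(win.loaded))  # only the two most recently viewed remain
```

In the envisioned system the depot would be the distributed evidential storage, and "viewing" would trigger on-demand fetches to the wall projection, monitor, or headset.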

(a) Corner-projected interactive wall
(b) Monitor rendering of the interactive 3D window
(c) Wall-projected interactive wall
(d) Conceptual 3D visualization and rendering (future)
Figure 3: Interactive documentary using Illimitable Space System (ISSv1) visualization and management [29]
Virtual and Augmented Reality

Virtual Reality (VR) [45, 46, 47] is defined in [48] as "a three-dimensional simulation of the real world or an imaginary world allowing the user to have a sense of physical presence and to manipulate 3D objects, in real-time, inside three-dimensional computer-generated environments." In [49], the authors point out the possibility of exhibiting concepts that a user might not otherwise be able to view, and that the immersive nature of VR can aid in education; the same can be inferred for investigators.

Augmented Reality (AR) overlays virtual objects on top of real ones, possibly with gesture- or gaze-based interaction with those objects, while maintaining the user's grasp on the real world without complete immersion (avoiding nausea and other VR-related issues).

The real benefit comes when both techniques are combined. In our case, ISSv3 was developed incorporating augmented- and virtual-reality techniques [34, 7]. We experimented with both mobile and desktop versions of the mixed-reality documentary and recording functionality in Unity, some of which is visualized in Figure 4 and Figure 5.

Figure 4: ISSv3 examples of the VR environment and some digital content [34]
(a) VR version of AR Concordia buildings; interactive
(b) AR camera on for photograph images
(c) AR/VR lab
(d) Bubbles with footage in them
Figure 5: Interactive documentary using Illimitable Space System (ISSv3) VR/AR [34]

0.4 Concluding Remarks

Moving toward our goal of a visual 3D DFG-based tool that can model Forensic Lucid case specifications, and given the above-discussed choices that each provide such abilities in their own way, we attempt to build upon related research work in this area. We also consider the potential of recent work in virtual and augmented reality, along with the multimodal interaction techniques that seem most consistent with our aim. So far, Ding's work on Graphviz, Puckette's PureData, BPEL, and the ISS have drawn our interest, being sound and formally backed with some exposure in industry, though each may require additional work to establish the credibility and correctness of the bidirectional translation between 3D DFG visualization and Forensic Lucid [9].

The drawbacks of PureData and Graphviz's DOT are that their languages lack formal semantics specifications, offering only a few semantic notes along with lexical and grammar structures [17]. Thus, employing any of them will require us to provide translation rules and their equivalent semantics in Forensic Lucid, as in Jarraya's work [50], which provides translations between UML 2.0/SysML state/activity diagrams with probabilities and PRISM [9]. ISS is the most scalable approach and can aggregate all the others, but requires a significant number of modifications. Given the recent advancements in ISSv2 and ISSv3 referenced above, including both AR/VR interactions, the ISS approach is even more appealing and feasible than previously stated [9].

References

  • [1] S. A. Mokhov, “Intensional cyberforensics,” Ph.D. dissertation, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Sep. 2013, online at http://arxiv.org/abs/1312.0466.
  • [2] Law&Order, “Criminal intent,” Wolf Films in association with Universal Media Studios, 2004.
  • [3] M. Song, S. A. Mokhov, P. Grogono, and S. P. Mudur, “Illimitable Space System as a multimodal interactive artists’ toolbox for real-time performance,” in Proceedings of the SIGGRAPH ASIA 2014 Workshop on Designing Tools for Crafting Interactive Artifacts, ser. SIGGRAPH ASIA’14.   New York, NY, USA: ACM, Dec. 2014, pp. 2:1–2:4.
  • [4] ——, “On a non-web-based multimodal interactive documentary production,” in Proceedings of the 2014 International Conference on Virtual Systems Multimedia (VSMM’2014), H. Thwaites, S. Kenderdine, and J. Shaw, Eds.   IEEE, Dec. 2014, pp. 329–336.
  • [5] M. A. Orgun and E. A. Ashcroft, Eds., Proceedings of ISLIP’95, vol. Intensional Programming I.   World Scientific, May 1995, ISBN: 981-02-2400-1.
  • [6] J. Zhang, S. Bardakjian, M. Li, M. Song, S. A. Mokhov, S. P. Mudur, and J.-C. Bustros, “Towards historical exploration of sites with an augmented reality interactive documentary prototype app,” in Proceedings of Appy Hour, SIGGRAPH’2015.   ACM, Aug. 2015.
  • [7] M. Song, S. A. Mokhov, S. P. Mudur, and J.-C. Bustros, “Demo: Towards historical sightseeing with an augmented reality interactive documentary app,” in Proceedings of the 2015 IEEE Games Entertainment Media Conference (GEM 2015), E. G. Bertozzi, B. Kapralos, N. D. Gershon, and J. R. Parker, Eds.   IEEE, Oct. 2015, pp. 16–17.
  • [8] M. Song, S. A. Mokhov, J. Thomas, and S. P. Mudur, “A case study of the Illimitable Space System v2 and projection mapping,” in SIGGRAPH Asia 2015 Posters, ser. SA’15.   New York, NY, USA: ACM, Nov. 2015, pp. 10:1–10:1.
  • [9] S. A. Mokhov, J. Paquet, and M. Debbabi, “On the need for data flow graph visualization of Forensic Lucid programs and forensic evidence, and their evaluation by GIPSY,” in Proceedings of the Ninth Annual International Conference on Privacy, Security and Trust (PST), 2011.   IEEE Computer Society, Jul. 2011, pp. 120–123, short paper; full version online at http://arxiv.org/abs/1009.5423.
  • [10] P. Gladyshev, “Formalising event reconstruction in digital investigations,” Ph.D. dissertation, Department of Computer Science, University College Dublin, Aug. 2004, online at http://www.formalforensics.org/publications/thesis/index.html.
  • [11] A. A. Faustini, “The equivalence of a denotational and an operational semantics of pure dataflow,” Ph.D. dissertation, University of Warwick, Computer Science Department, Coventry, United Kingdom, 1982.
  • [12] R. Jagannathan, “Intensional and extensional graphical models for GLU programming,” in Intensional Programming I, M. A. Orgun and E. A. Ashcroft, Eds., vol. Intensional Programming I.   World Scientific, May 1995, pp. 63–75.
  • [13] J. Paquet, “Scientific intensional programming,” Ph.D. dissertation, Department of Computer Science, Quebec City, Canada, 1999.
  • [14] N. Stankovic, M. A. Orgun, W. Cai, and K. Zhang, Visual Parallel Programming.   World Scientific Publishing Co., Inc., 2002, ch. 6, pp. 103—129.
  • [15] Y. Ding, “Automated translation between graphical and textual representations of intensional programs in the GIPSY,” Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Jun. 2004, http://newton.cs.concordia.ca/~paquet/filetransfer/publications/theses/DingYiminMSc2004.pdf.
  • [16] AT&T Labs Research and Various Contributors, “Graphviz – graph visualization software,” [online], 1996–2012, http://www.graphviz.org/.
  • [17] ——, “The DOT language,” [online], 1996–2012, http://www.graphviz.org/pub/scm/graphviz2/doc/info/lang.html.
  • [18] S. A. Mokhov, Hybrid Intensional Computing in GIPSY: JLucid, Objective Lucid and GICF.   LAP - Lambert Academic Publishing, Mar. 2010, ISBN 978-3-8383-1198-2.
  • [19] C. Zheng and J. R. Heath, “Simulation and visualization of resource allocation, control, and load balancing procedures for a multiprocessor architecture,” in MS’06: Proceedings of the 17th IASTED International Conference on Modelling and Simulation.   Anaheim, CA, USA: ACTA Press, 2006, pp. 382–387.
  • [20] P. C. Vinh and J. P. Bowen, “On the visual representation of configuration in reconfigurable computing,” Electron. Notes Theor. Comput. Sci., vol. 109, pp. 3–15, 2004.
  • [21] G. Allwein and J. Barwise, Eds., Logical reasoning with diagrams.   New York, NY, USA: Oxford University Press, Inc., 1996.
  • [22] R. Bardohl, M. Minas, G. Taentzer, and A. Schürr, “Application of graph transformation to visual languages,” in Handbook of Graph Grammars and Computing by Graph Transformation: Applications, Languages, and Tools.   River Edge, NJ, USA: World Scientific Publishing Co., Inc., 1999, vol. 2, pp. 105–180.
  • [23] N. G. Miller, “A diagrammatic formal system for euclidean geometry,” Ph.D. dissertation, Cornell University, U.S.A, 2001.
  • [24] Y. Ji, “Scalability evaluation of the GIPSY runtime system,” Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Mar. 2011, http://spectrum.library.concordia.ca/7152/.
  • [25] S. Rabah, S. A. Mokhov, and J. Paquet, “An interactive graph-based automation assistant: A case study to manage the GIPSY’s distributed multi-tier run-time system,” in Proceedings of the ACM Research in Adaptive and Convergent Systems (RACS 2013), C. Y. Suen, A. Aghdam, M. Guo, J. Hong, and E. Nadimi, Eds.   New York, NY, USA: ACM, Oct. 2011–2013, pp. 387–394, pre-print: http://arxiv.org/abs/1212.4123.
  • [26] C. Tao, K. Wongsuphasawat, K. Clark, C. Plaisant, B. Shneiderman, and C. G. Chute, “Towards event sequence representation, reasoning and visualization for EHR data,” in Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, ser. IHI’12.   New York, NY, USA: ACM, 2012, pp. 801–806.
  • [27] T. D. Wang, A. Deshpande, and B. Shneiderman, “A temporal pattern search algorithm for personal history event visualization,” IEEE Trans. on Knowl. and Data Eng., vol. 24, no. 5, pp. 799–812, May 2012.
  • [28] M. Monroe, R. Lan, J. M. del Olmo, B. Shneiderman, C. Plaisant, and J. Millstein, “The challenges of specifying intervals and absences in temporal queries: a graphical language approach,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ser. CHI’13.   New York, NY, USA: ACM, 2013, pp. 2349–2358.
  • [29] M. Song, “Computer-assisted interactive documentary and performance arts in illimitable space,” Ph.D. dissertation, Special Individualized Program/Computer Science and Software Engineering, Concordia University, Montreal, Canada, Dec. 2012, online at http://spectrum.library.concordia.ca/975072 and http://arxiv.org/abs/1212.6250.
  • [30] S. A. Mokhov, J. Paquet, and M. Debbabi, “Formally specifying operational semantics and language constructs of Forensic Lucid,” in Proceedings of the IT Incident Management and IT Forensics (IMF’08), ser. LNI, O. Göbel, S. Frings, D. Günther, J. Nedon, and D. Schadt, Eds., vol. 140.   GI, Sep. 2008, pp. 197–216, online at http://subs.emis.de/LNI/Proceedings/Proceedings140/gi-proc-140-014.pdf.
  • [31] ——, “Reasoning about a simulated printer case investigation with Forensic Lucid,” in Proceedings of Digital Forensics and Cyber Crime (ICDF2C’11), ser. LNICST, P. Gladyshev and M. K. Rogers, Eds., no. 0088.   Springer Berlin Heidelberg, Oct. 2011, pp. 282–296, accepted and presented in 2011, appeared in 2012; online at http://arxiv.org/abs/0906.5181.
  • [32] S. A. Mokhov and E. Vassev, “Self-forensics through case studies of small to medium software systems,” in Proceedings of IMF’09.   IEEE Computer Society, Sep. 2009, pp. 128–141.
  • [33] Y. Rogers, H. Sharp, and J. Preece, Interaction Design: Beyond Human - Computer Interaction, 3rd ed.   Wiley Publishing, 2011, online resources: id-book.com.
  • [34] S.-S. Bardakjian, S. A. Mokhov, M. Song, and S. P. Mudur, “Issv3: From human motion in the real to the interactive documentary film in ar/vr,” in SIGGRAPH ASIA 2016 Virtual Reality Meets Physical Reality: Modelling and Simulating Virtual Humans and Environments, ser. SA’16.   New York, NY, USA: ACM, 2016, pp. 1:1–1:5, 978-1-4503-4548-4.
  • [35] Eclipse contributors et al., “Eclipse Platform,” eclipse.org, 2000–2014, http://www.eclipse.org, last viewed August 2014.
  • [36] M. Puckette and PD Community, “Pure Data,” [online], 2007–2014, http://puredata.org.
  • [37] Cycling ’74, “Jitter 1.5,” [online], 2005, http://www.cycling74.com/products/jitter.html.
  • [38] OpenESB Contributors, “BPEL service engine,” [online], 2009, http://download.java.net/general/open-esb/docs/jbi-components/bpel-se.html.
  • [39] D. Koenig, “Web services business process execution language (WS-BPEL 2.0): The standards landscape,” Presentation, IBM Software Group, 2007.
  • [40] OASIS Web Services Business Process Execution Language (WSBPEL) TC, “Web Services Business Process Execution Language version 2.0,” [online], Oasis, Apr. 2007, oASIS Standard, http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html.
  • [41] S. A. Mokhov, M. Song, J. Llewellyn, J. Zhang, A. Charette, R. Wu, and S. Ge, “Real-time collection and analysis of 3-Kinect v2 skeleton data in a single application,” in Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH ’16, Anaheim, CA, USA, July 24-28, 2016, Posters Proceedings, 2016, pp. 53:1–53:2.
  • [42] S. A. Mokhov, M. Song, S. Chilkaka, Z. Das, J. Zhang, J. Llewellyn, and S. P. Mudur, “Agile forward-reverse requirements elicitation as a creative design process: A case study of llimitable Space System v2,” Journal of Integrated Design and Process Science, vol. 20, no. 3, pp. 3–37, 2015–2016.
  • [43] S. A. Mokhov, M. Song, S. P. Mudur, and P. Grogono, “Hands-on: Rapid interactive application prototyping for media arts and stage performance,” in ACM SIGGRAPH 2017 Studio, ser. SIGGRAPH’17.   New York, NY, USA: ACM, 2017, pp. 3:1–3:29.
  • [44] S. A. Mokhov et al., “OpenISS core project,” [online], 2016–2018, https://github.com/OpenISS/OpenISS.
  • [45] J. Jerald, The VR Book: Human-Centered Design for Virtual Reality.   New York, NY, USA: Association for Computing Machinery and Morgan & Claypool, 2016, 978-1-97000-112-9.
  • [46] J. Jerald, J. J. LaViola, Jr., and R. Marks, “VR interactions,” in ACM SIGGRAPH 2017 Courses, ser. SIGGRAPH ’17.   New York, NY, USA: ACM, 2017, pp. 19:1–19:105, 978-1-4503-5014-3.
  • [47] J. J. LaViola, Jr., “Context aware 3D gesture recognition for games and virtual reality,” in ACM SIGGRAPH 2015 Courses, ser. SIGGRAPH’15.   New York, NY, USA: ACM, 2015, pp. 10:1–10:61.
  • [48] P. Milgram and F. Kishino, “A taxonomy of mixed reality visual displays,” IEICE TRANSACTIONS on Information and Systems, vol. 77, no. 12, pp. 1321–1329, 1994.
  • [49] J. T. Bell and H. S. Fogler, “The investigation and application of virtual reality as an educational tool,” in Proceedings of the American Society for Engineering Education Annual Conference, 1995, pp. 1718–1728.
  • [50] Y. Jarraya, “Verification and validation of UML and SysML based systems engineering design models,” Ph.D. dissertation, Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada, Feb. 2010.