In this article, we investigate the use of knowledge-based methods for selecting a suitable Design of Experiments (DoE) for evaluating the performance of big data systems including human factor interactions.
Formalising scientific and engineering experiments makes them reproducible and easier to share and search. It allows their results to be connected to the experimental outputs of future studies and accelerates their interpretation. To make these principles easily applicable, different possibilities for their formalisation are the subject of ongoing efforts. The contribution of this article is an initial enrichment of previously proposed concept descriptions, up to the point where a DoE can be proposed by a non-expert. The approach does not claim comprehensiveness but advocates a stronger use of semantic web technologies for making DoE-specific knowledge accessible to other domain descriptions.
Design of Experiments is a collection of principles, statistical approaches and models for planning and performing experiments as well as analysing their results Fisher.1937, Montgomery.2013. Typically, the subject of experiment is reduced to a system with input and output variables. Some or all controllable input variables, the so-called factors, are varied according to an experimental plan which specifies the values of the variables, also called factor levels. After feeding the system with a set of input variables, the output is observed. As the output depends on the system behaviour and on both controllable and possibly uncontrollable input variables, the goal of DoE is the quantification of the functional relation between the input and output of the system. For the functional description of different subjects of experiment and different hypotheses to test, a large body of knowledge exists, either for extracting the maximum amount of information from a given number of experiments or for minimising the number of experiments with respect to a required level of confidence in the information.
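To make the notion of an experimental plan concrete, the following sketch (not part of the original study; the factor names are invented for illustration) generates a two-level full factorial plan in Python:

```python
from itertools import product

# Hypothetical two-level factors of a big data experiment (names are
# illustrative only): each factor is varied between a low and a high level.
factors = {
    "data_volume": ["low", "high"],
    "data_velocity": ["low", "high"],
    "noise_ratio": ["low", "high"],
}

# A full factorial plan enumerates every combination of factor levels,
# i.e. 2^k runs for k two-level factors.
def full_factorial(factors):
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

plan = full_factorial(factors)
print(len(plan))  # 2**3 = 8 experimental runs
```

Varying k two-level factors this way already yields 2^k runs, which motivates the design-reduction techniques discussed below.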
Two main challenges arise during the assessment of a big data system with a Design of Experiments. Firstly, big data variations translate directly into a large number of factors with a multiplicity of possible factor levels. Secondly, the different components of the big data system implement either deterministic or non-deterministic processes and yield different output types, e.g. continuous or multinomial. Thus, for a thorough assessment the system needs to be unfolded into its components. Again, this increases the number of necessary experiments but, additionally and more importantly, introduces the necessity of using completely different types of DoE.
To address the different types of DoE, an ontology-based method is proposed to select suitable DoEs based on the properties of the components. For the subsequent reduction of the number of experiments, knowledge of the application domain and knowledge of the DoE domain are combined. After the experiments have been performed, their results are interpreted at the component level by the same ontology-based method, which allows alternative interpretations and proposes additional experiments depending on the experimental results. At the system level, the results of the component-level experiments are recombined, enabling the formulation of hypotheses about the system behaviour.
To evaluate our approach, we consider a big data system developed to support Maritime Situation Awareness (MSA). For maritime surveillance operators, situational awareness is necessary to assess a remotely occurring situation correctly. As the actual situation at sea is only partially observable, the operators have to base their estimates on different data sources with different degrees of reliability. Typical tasks for operators are the prevention of collisions, the identification of vessels in distress, the monitoring of compliance with protected areas and of fishing pressure, as well as the detection of illicit activities and human trafficking at sea.
Each of these situations carries a specific meaning. During operations, an actual situation needs to be verified by the operator against all possible interpretations of the available but incomplete and often incorrect data. While investigating one situation, the operator is less responsive to other simultaneously occurring situations. Thus, Maritime Situational Indicators (MSI) shall annotate data in order to allow for faster awareness creation and to ease the prioritisation of actual situations.
2 Related Work
Important contributions come from the fields of design of experiments and of maritime situational awareness, also referred to as situation assessment or domain awareness. Both domains show an increasing adherence to languages with well-understood semantics like first-order logic and description logic, as well as newly developed extensions of those.
2.1 Semantic representations of DoE
In the domain of DoE, existing formalisations use, at most, non-probabilistic languages. As the described evaluation process focuses on big data solutions, the listed literature mainly originates from the domains of computer experiments and human-machine interface experiments.
In the domain of computer experiments, Do et al. provide an overview of empirical techniques for software testing, identifying two approaches Do.2005. Firstly, controlled experiments rely on the precise variation of given variables. Secondly, and complementarily, case studies follow possible scenarios of usage. Both approaches aim for the replicability of the performed experiments, the possibility to aggregate their results beyond the anticipated scope of the design of experiments, and by these means to validate the significance of their results in the form of models. Precursors for enabling these benefits can be seen in the interpretability of the experimental results, e.g. by documentation or standardisation, and in infrastructures allowing to share and connect artifacts Do.2005, MLSchema. In the data mining and machine learning domain, recent efforts led to the W3C ML Schema Community Group with the goal of supporting the development of a data exchange standard for experimental data by unifying existing, more specific schemata MLSchema. More specifically, the group pools former efforts on DMOP Keet.2015, Exposé, the OpenML-related ontology of Vanschoren Vanschoren.2012, as well as the contributions of Soldatova et al. in the form of EXPO and OntoDM Soldatova.2006, Panov.2014. For the formalisation of scientific experiments, especially Soldatova.2006 is of interest.
Blondet et al. propose a DoE ontology in the context of the product development process. The aim is to describe executed DoEs and thus to make them queryable during the design phase of later experiments Blondet.2018. For the definition of a DoE, three sampling methods, called 'types of DoE', are differentiated, namely marginal constraints like Latin Hypercube, factorial designs, and low-discrepancy or quasi-random sequences, e.g. Halton Blondet.2016. Subsequently, further information requirements are added, including the number of experiments, an initial model, one or more factors, one or more outputs, a surrogate model and some analysis methods Blondet.2018. Also, constraints like a limited amount of time or thresholds like the accuracy of the predictions and the surrogate model predictivity are supposed to be specified Blondet.2016, extended by a maximum number of experiments Blondet.2018. Nevertheless, the proposed rules and concept descriptions do not allow for the deduction of a subset of DoE types fulfilling the set of constraints. Instead, SPARQL queries are used to find existing DoE instances which are similar in the sense of generalisation. An assessment of the quality of the DoE instances is not performed, nor is it described which constraints exclude certain DoEs.
All reviewed contributions aim at the description of experiments with respect to the proposed concepts. Complementary to this, the following paragraphs propose to extend the existing formalisations with knowledge from the domain of Design of Experiments that helps exclude designs which are not appropriate for a given set of constraints on the experiments. In particular, this knowledge is supposed to be captured in the T-Box of the ontology and thus supports the design of experiments that start without prior knowledge of the domain or instances in the A-Box.
2.2 Semantic representations of Maritime Situational Awareness
In the maritime domain, contributions towards situational awareness range from purely probabilistic to purely possibilistic approaches, like Roy.2010 and VanDenBroek.2011. Promising approaches combine the benefits of both worlds. One example is the work of Snidaro.2015, which uses Markov Logic Networks, i.e. weighted first-order-logic rules, for detecting maritime events. The work on Bayesian extensions of OWL Carvalho.2011, Laskey.2011 is closely related to MSA as well. Related works can also be found in the avionic domain, e.g. describing data uncertainty and veracity for situation assessment with OWL Insaurralde.2017.
3 System decomposition and recombination
Typically, the quantification of a system's properties with designed experiments is done on the entire system, taking into account the whole array of variations of the input variables. For reducing the DoE model complexity and the number of experiments, in the proposed approach both the input data and the system are first decomposed, then recombined to obtain an evaluation at inter-component level. Contrary to widespread systems engineering approaches and to systematic product testing and development methods, the proposed methodology aims to initialise the evaluation after component deployment but prior to the finalisation of the big data system. The assumption is that the system is developed according to a modular architecture, where system components are independently implemented and then integrated in a second stage into the final big data solution. In this case it is not necessary to wait for the complete solution deployment to start the evaluation; instead, the design of experiments leverages the testing of the modular components. In addition, the experiments on the complete solution may be customised depending on the performance of the single components.
3.1 Input description
Starting from the system level, the research questions ask whether the performance of the system is affected when it is confronted with typical dataset- or application-specific variations of any of the big data dimensions Kitchin.2016.
In the context of Maritime Situation Awareness, surveillance systems merge, process and analyse information of various types, collected from multiple sources of different nature. These include physical sensors, automated systems, as well as persons. To give some examples, Terrestrial-Automatic Identification System (T-AIS), High Frequency (HF) radars, Long Range Tracking and Identification (LRIT) systems, CTD (Conductivity, Temperature, Depth) sensors, sea state models, underwater hydrophones, tracking software, human operators and analysts are all sources of information that may be elaborated by a surveillance system with the aim of increasing awareness. Real-time data streams from surveillance and environmental sensors are merged and put in context with databases, registries and intelligence reports, according to the mission and the specific operational task at hand. Because of the rapid growth in the use of AIS transceivers on vessels and the surge in satellite-received AIS data, data-driven maritime event detection based on AIS has been developed Patroumpas.2015, as well as data-driven approaches in support of environmental protection and informed policy making Natale.2015. Also, many commercial systems exist which use AIS as their main source of information.
All this information comes in different formats and may be structured or semi-structured according to specifications and standards, as in the case of AIS, or completely unstructured.
Maritime surveillance systems present many aspects of interest for big data experiments, with the most important big data properties (and corresponding challenges) being volume, velocity, variety and veracity. Even considering systems based mainly on AIS, infrastructures able to cope with high volume and velocity are necessary. On the other hand, due to the nature of AIS data, the veracity of the data can vary dramatically but can also be better assessed by using a variety of data sources from different data source types Snidaro.2015. Each component of a maritime surveillance system is subject to the performance assessment of one or multiple dimensions of big data variation.
3.2 System decomposition
In order to evaluate the system performance, the system is broken down along two complementary, functional dimensions. On the one hand, it is divided into the software components it is made of, according to the designed software architecture. Such a decomposition enables the independent and simultaneous evaluation of deployed subsystems while other subsystems are still under development. The system is decomposed into subsystems according to its topology, for instance according to the interfaces to the publisher-subscriber architecture and, for a refined decomposition, according to the functionalities. Each subsystem ingests an input, performs one or multiple functions processing it and, finally, issues an output. The output of one subsystem is typically the input of another subsystem. The independence of the subcomponents guarantees that, after measuring the performance of one subsystem in experiments, changes to this subsystem do not necessarily invalidate the results on a higher level of component aggregation but result in a well-framed set of additional experiments.
On the other hand, the system and component outcomes may be evaluated semantically, with respect to the amount of knowledge they contribute to increase awareness. In this case, we can distinguish experiments performed at data processing level (which do not necessarily require domain knowledge), at indicator level (where performance evaluation requires some domain contextualisation), and at scenario level (where the user is involved in the experiment, which unfolds according to a domain-driven plot).
These two dimensions are complementary and may be leveraged to reduce the complexity of system evaluation, because performance evaluation is first done independently piece-wise along the two dimensions, then aggregated along the same dimensions to obtain the overall evaluation.
An example of two-dimensional decomposition is given in Figure 1, which schematically illustrates a Maritime Surveillance System. The software components implement the Data Processing and the Graphical User Interface layers of the architecture. At the interface layer, the user may be involved and domain-driven scenarios may be evaluated.
At the intermediate layer, Maritime Situational Indicators (MSI) are examples of contextualised indicators that are output by subsystems and act as a semantic bridge between the software components and the domain. In Maritime Situation Awareness, MSIs are indicative of situations the operator or the analyst should be aware of to assess the situation. They may represent behavioural vessel events (high speed, gaps in communication, change of direction), whose collective evaluation is used by operators to assess potential maritime security threats.
The system, which is designed to increase awareness, has different modules implementing different functionalities that altogether concur to the detection of the MSIs of interest. The data fed into the system varies in type and veracity.
Note that DoEs differ for deterministic and non-deterministic subsystems. In the following, the computational processes are assumed to be deterministic, if not stated differently. Only the subsystems including human factors are treated as non-deterministic. When deterministic and non-deterministic subsystems are combined, the resulting system is treated as a non-deterministic system with respect to the selection of DoE methods, as formalised by an axiom in the ontology.
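A plausible sketch of such an axiom in description-logic notation, in which the concept and role names (System, hasPart, NonDeterministicProcess) are our assumptions rather than the original vocabulary, is:

```latex
% Assumed names: a system with at least one non-deterministic part is
% itself treated as non-deterministic for DoE selection.
\mathit{System} \sqcap \exists \mathit{hasPart}.\mathit{NonDeterministicProcess}
  \sqsubseteq \mathit{NonDeterministicSystem}
```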
For the kind of big data systems discussed herein, we discuss in the following semantic-wise decompositions for scenario-level and data-processing-level evaluation. Scenario-level evaluation, which requires a decomposition involving the human factor, may be further differentiated depending on whether the evaluation assesses the graphical user interface or the entire big data solution. At data processing level, no human factor needs to be included in the decomposition, and in the case of an MSA system, data processing components may be evaluated using a different decomposition depending on whether the components produce MSIs or not.
Depending on the level of decomposition, different assumptions are necessary for stating and investigating possible research questions. Depending on the research question, the factors and factor levels are selected.
3.2.1 Scenario level: Human factor and entire big data solution
The goal of the scenario level assessment is the quantification of the true and false detections of the expert user. The system boundary for the scenario level assessment includes the full system prototype and an expert user.
All components invisible to the expert user can be discarded from the scenario-level assessment and treated separately, in a preliminary evaluation, in the MSI or data analytics assessment. A second evaluation on scenario level, which needs to include all components, is necessary to evaluate the impact of the system latency on the degree of task fulfillment of the operator. For these reasons, the scenario-level assessment focuses on the variation of MSIs, synopses and context information. Both variety and veracity variations are considered. Volume and velocity variations are not considered explicitly because, in the considered application scenario, the ratio of vessels per expert user is guaranteed to be kept constant by the size of the control area. The fluctuations of the number of vessels inside the test data set are assumed to be representative for the test area.
In this setup, the case of a well performing expert user compensating for a badly performing system prototype cannot be distinguished from the case of a badly performing expert user interacting with a well performing prototype. These alternative interpretations Soldatova.2006 need to be addressed by the DoE Cook.1979 for strengthening internal and external validity. Thus, the data for the experiments is either taken from actual recordings or is modified in a minimalistic fashion while respecting the coherence of data subsets, e.g. ocean conditions and vessel behaviour. A detailed description of the method for the creation of data with external validity is the subject of an upcoming publication.
3.2.2 Scenario level: Human factor and graphical user interface
For the exemplary characterisation of this subsystem, the MSIs are considered as input. The research questions for this scenario-level decomposition investigate the contribution of MSIs to the scenario-specific task fulfillment and are presented in the following, bottom-up. For each research question, the reduction of the MSI combinations, corresponding to the high level of a 2-level DoE, is also described.
After the identification of suitable MSI combinations, the underlying datasets and any intermediate data aggregation states of upstream or downstream components are checked for conformance or exclusion, respectively. All data subsets with a conflicting interpretation, e.g. which relate to excluded MSIs, are removed from the experimental dataset.
Research question: Can the user assign a meaning to the symbols of the MSIs?
All MSIs need to be evaluated one by one, without combinations.
Research question: Can the user distinguish situations in which the MSI fit the AIS data from situations in which the MSI do not fit?
Only a subset of MSIs needs to be evaluated. The necessary MSI combinations are selected by a domain expert who identifies the logical relationships between the MSIs: there are MSIs that are always (positively or negatively) correlated with each other. The selected subset of MSIs builds the basis for the domain constraints derived from the following two research questions. For interchangeable MSIs, the initial selection takes this relationship into account.
Research question: Can the user interpret a situation represented by multiple MSI?
Only a subset of MSIs needs to be evaluated. The selection of meaningful MSI combinations is carried out by a domain expert according to instantiations of the respective scenario. As the generic scenario descriptions allow for the description of mutually excluding sub-scenarios, the instantiation of those sub-scenarios yields representative MSI combinations which simultaneously fulfill additional domain constraints impacting the selection of DoE, e.g. the number of MSIs per vessel is unlikely to exceed three. The maritime domain specific constraints identified in 2) need to remain respected.
Research question: Can the user distinguish, and thus prioritise, situations or even assign different criticality estimates to different situations?
Only a subset of MSIs needs to be evaluated. The prioritisation needs to be applied only to those situation descriptions which were interpreted correctly in the prior step.
3.2.3 Data processing level
There are two main goals for the evaluation on the data processing level. For the components that output MSIs, the main goal is the quantification of the veracity of the component results with respect to the combinations of true and false, positive and negative detections for each MSI. The main goal of the evaluation of components located upstream from the MSI components is the quantification of the variation of the velocity and the volume of the data.
3.3 Crossing input factors of components and data variations
As all components are affected by all dimensions of big data variation, a reduction of the complexity can be achieved by a more detailed identification of infeasible combinations of input factor levels for each component. If an infeasible combination involves a connected component situated upstream or downstream, the corresponding factor levels on all connected components can be excluded.
The constraint can be formalised in the ontology as a complex role inclusion axiom.
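With assumed role names (excludesFactorLevel for the exclusion of a factor level, connectedTo for the connection between components), such a role inclusion could be sketched as:

```latex
% Assumed role names: exclusions propagate along component connections.
\mathit{connectedTo} \circ \mathit{excludesFactorLevel}
  \sqsubseteq \mathit{excludesFactorLevel}
```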
An example for this restriction is the exclusion of AIS data prompted by contradicting MSIs. Data which lead to contradicting MSIs can be excluded from the design space. The exploitation of this input factor reduction is continued in Section 4. Given the attainment of experimental results on the component level, the merging of those results to the system level is described in the following.
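As an illustrative sketch (the MSI names and the mutual-exclusion constraint are hypothetical), such domain constraints can prune the design space before any DoE is selected:

```python
from itertools import product

# Illustrative only: three hypothetical MSIs, of which a domain expert
# marks "at anchor" and "high speed" as mutually exclusive.
msis = ["at_anchor", "high_speed", "ais_gap"]
infeasible = {frozenset({"at_anchor", "high_speed"})}

# Enumerate all on/off combinations of MSIs and drop those containing
# an infeasible pair, shrinking the design space before the DoE is chosen.
def feasible_combinations(msis, infeasible):
    runs = []
    for mask in product([False, True], repeat=len(msis)):
        active = {m for m, on in zip(msis, mask) if on}
        if not any(pair <= active for pair in infeasible):
            runs.append(active)
    return runs

runs = feasible_combinations(msis, infeasible)
print(len(runs))  # 8 combinations minus the 2 containing the infeasible pair = 6
```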
3.4 System recombination and result synthesis
In the reverse sense of the decomposition, the recombination of the performances of the subsystems into larger subsystems and finally into the entire system yields a reasonable performance hypothesis for the system. Further experiments are necessary to evaluate the interactions between recombined subsystems.
On the component level, each component that is evaluated can be assigned to a class with a specification of the performed experiment. Additional experiments on other components add knowledge on the same level. For deducing implications for larger subsystems, i.e. on a higher level of recombination, the relationships between the evaluated components need to be taken into account. Following the description logic syntax, classes or concepts start with a capital letter while binary relationships or roles start with a small letter. For the deduction of subsystems consisting of interconnected and evaluated components, a combination of local reflexivity and role compositions is proposed.
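One way to sketch this pattern, with assumed names (isEvaluated as the role used in the self restriction, connectedTo as the connection role), is:

```latex
% Assumed names: local reflexivity (Self) marks an evaluated component;
% a component that is evaluated and connected to an evaluated component
% belongs to an evaluated subsystem.
\exists \mathit{isEvaluated}.\mathit{Self} \sqcap
  \exists \mathit{connectedTo}.(\exists \mathit{isEvaluated}.\mathit{Self})
  \sqsubseteq \mathit{EvaluatedSubsystemComponent}
```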
This concept allows for an automated assertion of the components. Based on this pattern, sub-concepts and sub-properties can be defined for the refinement of the respective qualitative evaluation level reached, e.g. for formalising the binary status of requirement achievement. An extension for the quantification of the evaluation is introduced by a corresponding specification of the concept definition.
As shown in Fig. 2 a), this description does not differentiate between interconnected subsystems and separated but connected subsystems. Thus, for addressing only interconnected and evaluated components, an additional discriminator is needed, whose result is depicted in Fig. 2 b). For this, the definition of a connected component needs to be specified with respect to a reference component, e.g. C11.
Additionally, the role has to be defined as a transitive role so that components both directly and indirectly connected to the reference component are considered to fulfill the preceding definition.
Based on these axioms, different logical or arithmetic aggregation functions can be used to project the evaluation result from the component level to the next level of aggregation. Examples of aggregated performance measures are completeness and throughput. Completeness is only given if all evaluated components are complete. The throughput of the system is defined by the component with the smallest throughput.
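For components $C_1, \dots, C_n$ of a system $S$, these two aggregation functions can be written as:

```latex
\mathrm{complete}(S) = \bigwedge_{i=1}^{n} \mathrm{complete}(C_i),
\qquad
\mathrm{throughput}(S) = \min_{i=1,\dots,n} \mathrm{throughput}(C_i)
```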
4 An Evaluation Ontology for MSA information Sources and Experiments
In this section we propose a formalisation of domain specific experiments.
Starting from existing formalisations of experiments like EXPO, a subset of the description of scientific experiments consists of Soldatova.2006:
- ScientificExperiment
  - ExperimentalDesign
    - ExperimentalModel
      - TargetVariable
      - Factor
    - SubjectOfExperiment
  - ExperimentalResults
In the following, the previously mentioned concepts are specified to the extent that examples of contradicting design decisions lead to inconsistencies in the ontology. This shall ease their detection and their replacement by consistent or even favourable design decisions.
Starting from the first system decomposition in Section 3, two subjects of experiment can be distinguished, with characteristic properties: 1) systems with human factor, which are non-deterministic, where a limited number of attributes should be considered, and where a controllable nuisance factor is given by the experience of the experimentee; 2) systems without human factor, which are deterministic and include no controllable or uncontrollable nuisance factors.
For these two subjects of experiment, the DoEs to choose are very different. As DoE is typically applied to non-deterministic processes, the fundamental principles of DoE, namely randomisation, replication and blocking Montgomery.2013, are applicable to the system with human factor.
Given a controllable nuisance factor like 'years of experience at sea' or 'years of experience as maritime operator', the necessary use of blocking influences the number of experiments per experimentee. Additional restrictions, including a maximum number of experiments per experimentee, allow for a reduction of the candidate designs.
Tables including the design and the number of experiments can be used to instantiate the set of candidate designs, see e.g. cavazzuti.2012. Designs based on sampling methods where the experimenter can choose the number of experiments need to be treated differently. With an increasing number of restrictions, especially on the allowed combinations of factor levels, Optimal Designs offer locally optimal designs for a given optimality criterion.
The D-Optimal Design is probably the most widespread Optimal Design, maximising statistical efficiency. For the application in human factor experiments, the additional consideration of a maximum number of factors or attributes shown to the experimentee is an important criterion for maximising the consistency of the human factor responses, and thus the external validity of the experiments Louviere.2008.
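The D-criterion itself is easy to state: a design maximises det(X^T X) for the model matrix X. The following sketch (illustrative; a main-effects model with intercept is assumed) finds the D-optimal 4-run subset of a 2^3 full factorial by exhaustive search:

```python
from itertools import combinations, product
import numpy as np

# Candidate points: 2^3 full factorial in coded units (-1/+1).
candidates = np.array(list(product([-1, 1], repeat=3)), dtype=float)

def d_criterion(points):
    # Main-effects model matrix: intercept column plus the three factors.
    X = np.column_stack([np.ones(len(points)), points])
    return np.linalg.det(X.T @ X)

# Exhaustive search for the D-optimal 4-run subset (fine at this size;
# practical D-optimal algorithms use exchange heuristics instead).
best = max(combinations(range(len(candidates)), 4),
           key=lambda idx: d_criterion(candidates[list(idx)]))
print(round(d_criterion(candidates[list(best)])))
```

The maximiser is a regular 2^(3-1) half fraction with X^T X = 4I and hence det(X^T X) = 256, the largest value attainable by Hadamard's inequality.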
The DoE to choose depends on the effects to be tested, and thus on the model of the response variable cavazzuti.2012. If only main effects are of interest, a full factorial design may be used, but it normally implies a much larger number of experiments than e.g. a Plackett-Burman design. Vice versa, a Plackett-Burman design does not allow for the separate estimation of main effects and all interactions, i.e. some effects may be confounded. For an experiment where the effect of interactions is not negligible, the use of such designs is not appropriate and is thus excluded by the following description:
Disjointness of model concepts:
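With assumed concept names, this disjointness could be sketched as:

```latex
% Assumed concept names: a model cannot be both a pure main-effects
% model and a model containing interaction effects.
\mathit{MainEffectsModel} \sqcap \mathit{InteractionEffectsModel} \sqsubseteq \bot
```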
Accordingly, the use of a Plackett-Burman design excludes the selection of main effect models when confounding shall be excluded. Given the type of response variable, a similar procedure is applied to the model selection.
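As an illustration of this trade-off (not taken from the original text), the 12-run Plackett-Burman design can be constructed from its standard cyclic generator; its pairwise orthogonal columns allow 11 main effects to be estimated from only 12 runs, at the price of confounding main effects with interactions:

```python
import numpy as np

# Standard Plackett-Burman generator row for N = 12 runs (11 factors).
generator = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

def plackett_burman_12():
    # Rows 1..11 are cyclic shifts of the generator; row 12 is all -1.
    rows = [np.roll(generator, k) for k in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows, dtype=int)

X = plackett_burman_12()
# Columns are pairwise orthogonal: X^T X = 12 * I, so all 11 main
# effects are estimable from only 12 runs.
print(np.array_equal(X.T @ X, 12 * np.eye(11, dtype=int)))  # True
```

By contrast, a full factorial for 11 two-level factors would require 2^11 = 2048 runs.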
5 Conclusions and Perspectives
The evaluation of big data systems requires a large set of possible experiments due to the number of different factors and factor levels. For the decomposition of large systems into subsystems and their recombination, this paper proposes a unified procedure with different concepts that allows for the transfer of subsystem results to the system level. Domain specific restrictions can be used to reduce the number of experiments in DoE by excluding infeasible or unnecessary experiments. As typically multiple domains are involved in the evaluation of a big data system, restrictions from multiple domains can be exploited in combination to restrain the set of relevant experiments. The proposed formalisation of restrictions may be used in order to benefit from combining descriptions from different domains. Both for systems with human factors and for computational processes, the exclusion of inappropriate DoEs is described axiomatically.
The proposed domain restrictions can alternatively be learned from data. Thereby, the restrictions which refer to a perfect world can be extended and refined with restrictions from the actual, imperfect data (e.g. in the maritime domain, fishing vessels often erroneously use the AIS status 'at anchor' while fishing, even though they are not anchored) and restrictions from the big data system (e.g. the AIS status 'at anchor' does not lead to the MSI 'at anchor or moored', because the vessel is moving, but is additionally classified as 'movement ability affected', because of trawling). Additionally, for the specification of fractional factorial designs, the mandatory selection of aliased effects of factors can benefit from the learning and formalisation of confounding rules from the domain specific data. Future work is also needed for describing the recombination of system properties whose values aggregate, e.g. latency or processing time.
This work is supported by the Big Data Analytics for Time Critical Mobility Forecasting (datAcron) project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 687591.
-  ML Schema core specification. Accessed: 2018-02-08.
-  G. Blondet, J. Le Duigou, and N. Boudaoud. ODE: an ontology for numerical design of experiments. Procedia CIRP, 50:496–501, 2016.
-  G. Blondet, J. Le Duigou, N. Boudaoud, and B. Eynard. An ontology for numerical design of experiments processes. Computers in Industry, 94:26–40, 2018.
-  R. N. Carvalho, R. Haberlin, P. C. G. Costa, K. B. Laskey, and K.-C. Chang. Modeling a probabilistic ontology for maritime domain awareness. In Proceedings of the 14th International Conference on Information Fusion (FUSION), pages 1–8. IEEE, 2011.
-  M. Cavazzuti. Optimization methods: from theory to design scientific and technological aspects in mechanics. Springer Science & Business Media, 2012.
-  T. Cook and D. Campbell. Quasi-experimentation: Design and analysis issues for field settings. Rand McNally, 1979.
-  H. Do, S. Elbaum, and G. Rothermel. Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact. Empirical Software Engineering, 10(4):405–435, 2005.
-  R. A. Fisher. The design of experiments. Oliver and Boyd, Edinburgh; London, 1937.
-  C. C. Insaurralde and E. Blasch. Veracity metrics for ontological decision-making support in avionics analytics.
-  C. M. Keet, A. Ławrynowicz, C. d’Amato, A. Kalousis, P. Nguyen, R. Palma, R. Stevens, and M. Hilario. The data mining optimization ontology. Web Semantics: Science, Services and Agents on the World Wide Web, 32:43 – 53, 2015.
-  R. Kitchin and G. McArdle. What makes big data, big data? exploring the ontological characteristics of 26 datasets. Big Data & Society, 3(1):2053951716631130, 2016.
-  K. Laskey, R. Haberlin, R. Carvalho, and P. Costa. PR-OWL 2 case study: A maritime domain probabilistic ontology. 808:76–83, Nov 2011.
-  J. J. Louviere, T. Islam, N. Wasi, D. Street, and L. Burgess. Designing discrete choice experiments: Do optimal designs come at a price? Journal of Consumer Research, 35(2):360–375, 2008.
-  D. C. Montgomery. Design and Analysis of Experiments. John Wiley & Sons, 2017.
-  F. Natale, M. Gibin, A. Alessandrini, M. Vespe, and A. Paulrud. Mapping fishing effort through AIS data. PLOS ONE, 10(6):1–16, Jun 2015.
-  P. Panov, L. Soldatova, and S. Džeroski. Ontology of core data mining entities. Data Mining and Knowledge Discovery, 28(5):1222–1265, Sep 2014.
-  K. Patroumpas, A. Artikis, N. Katzouris, M. Vodas, Y. Theodoridis, and N. Pelekis. Event recognition for maritime surveillance. In Proceedings of the 18th International Conference on Extending Database Technology, EDBT 2015, Brussels, Belgium, March 23-27, 2015., pages 629–640, 2015.
-  J. Roy and M. Davenport. Exploitation of maritime domain ontologies for anomaly detection and threat analysis. In Proceedings of the 2010 International Waterside Security Conference (WSS), pages 1–8. IEEE, 2010.
-  L. Snidaro, I. Visentini, and K. Bryan. Fusing uncertain knowledge and evidence for maritime situational awareness via Markov logic networks. Information Fusion, 21:159–172, 2015.
-  L. N. Soldatova and R. D. King. An ontology of scientific experiments. Journal of the Royal Society Interface, 3(11):795–803, 2006.
-  A. Van den Broek, R. Neef, P. Hanckmann, S. P. van Gosliga, and D. Van Halsema. Improving maritime situational awareness by fusing sensor information and intelligence. In Proceedings of the 14th International Conference on Information Fusion (FUSION), pages 1–8. IEEE, 2011.
-  J. Vanschoren, H. Blockeel, B. Pfahringer, and G. Holmes. Experiment databases. Machine Learning, 87(2):127–158, May 2012.