Intelligent Transport Systems (ITS) represent a subclass of Cyber-Physical Systems (CPS), due to the interaction between physical systems (vehicles) and a distributed information acquisition and propagation infrastructure (wired/wireless networks, sensors, actuators, processors and software). They are designed to offer innovative services to transportation network users and managers alike, and cover a broad area of advanced applications, ranging from improving the management and safety of the transportation network to infotainment services [Karagiannis et al., 2011]. ITS generally use a vast amount of heterogeneous data streams and information (e.g., behavioral models) that need to be analyzed, combined and acted upon, which creates a complexity that goes far beyond what humans can manage. Hence, automation is a must, which in turn requires formalization of the knowledge extracted from different sources, such as sensor networks, documents, tools and system experts. Moreover, in order to support information interoperability between applications, ITS-related information needs to be semantically enriched, that is, identified, categorized and described explicitly. Even though standardization activities have facilitated a certain network-level interoperability [IEEE Standards Association, 2010] and Quality of Service (QoS), there are still inconsistencies between ITS groups in how data are modeled on a semantic level, as each organization (e.g., government regulator, commercial provider, academic institution) has created domain-specific information models for its respective ITS applications.
In order to build intelligent systems, one needs to start by modeling and formalizing the knowledge that exists (such as domain concepts, behaviors and context) and is retrieved from human experts. Knowledge representation and reasoning (KR&R) is a field of artificial intelligence that aims at building intelligent systems that know about their world and are able to automatically draw conclusions and act upon them, as humans do [Baral, 2003]. A fundamental assumption in KR&R is that knowledge is represented in a tangible form (usually via ontologies), suitable for processing by dedicated reasoning engines. However, to date, there is a lack of frameworks for general intelligence that can solve several classes of reasoning problems, such as planning, verification and optimization. In addition, it is still not clear to the intelligent software community how to effectively cope with the integration of declarative and procedural knowledge, and some authors advocate keeping the behavior descriptions separate from the semantic, static domain knowledge [Ghallab et al., 2014].
In this paper, we present a Knowledge Management and Automated Reasoning Framework (KMARF), which targets multiple reasoning problems. The purpose of KMARF is twofold: (a) to reduce system development and deployment time, by reusing as much knowledge as possible, such as domain models, behaviors and reasoning mechanisms, and (b) to reduce operational costs by requiring less human involvement in system operations. The strong point of KMARF is that it relies on a knowledge model that combines both declarative and procedural knowledge. This combination allows us to perform extensive analysis and answer different reasoning questions, such as "what is the optimal strategy for reaching a desired system state?" or "which actions have led the system to a given state?". In order to use problem solvers specialized in specific classes of reasoning problems, KMARF can be extended with model transformation rules that translate from our knowledge model to the targeted problem solver's model.
KMARF specifically targets CPS, but it can be applied in other areas as well. In this paper, we focus on one of the CPS domains where KMARF may be applied: the ITS domain. As an illustrative example, we consider a transportation planning problem, i.e., transporting passengers or goods at a minimal cost. The cost can be, e.g., the traveled distance, the time needed to transport each of the passengers or goods, or the number of buses or trucks required for transportation. The answer to the task may be a plan, i.e., a sequence of steps for the system to perform in order to reach the goal state. In case the task cannot be performed, the answer from KMARF could be the reason why, as well as a possible solution, e.g., to increase the number of vehicles.
In brief, our contribution is fourfold:
A model for representing both declarative and procedural knowledge from CPS and ITS domains in a machine-readable form (Section 2.1).
An architecture of a generic framework for knowledge management and automated reasoning (KMARF) (Section 2.2), relying on the introduced knowledge model.
A taxonomy of ITS domain concepts that can be used when reasoning about ITS-specific problems (Section 3).
An open-source prototype implementation of KMARF targeting the transportation planning problem, together with an evaluation of the reusability of its knowledge (Sections 4 and 5).
2 KMARF – Framework for Knowledge Management and Automated Reasoning
In this section, we introduce our Knowledge Management and Automated Reasoning Framework (KMARF) by showing its architecture and the knowledge model that it relies on. KMARF is targeting multiple reasoning problem classes (such as planning, verification and optimization) that can share the same underlying state representation. This enables reusability of knowledge and methods across different domains.
2.1 The Knowledge Model
We model declarative knowledge by describing discrete states of a system. One such state may represent the current state, and the others may describe either previous system states or hypothetical states that the system may reach in the future. In this context we do not strictly apply the notion of time, i.e., the system may change its state instantly. However, the order of states is important, as it describes how the system evolves and may explain the reasons behind its progress.
A state is represented by an (implicitly conjunctive) set of predicates expressing the facts known about the state. Each predicate is a compound term of the form p(t1, …, tn), where p is the predicate's functor specified as a literal, i.e., a sequence of characters, and t1, …, tn are the arguments. The number of arguments n is called the arity of the predicate. If n = 0, then p denotes a simple atomic fact. If n > 0, then p(t1, …, tn) denotes a factual relation between its arguments. The arguments of predicates may include:
numbers denoting literal quantity;
literals, denoted by sequences of alphanumeric characters that start with a lower-case character, that represent objects or concepts in the domain (e.g., car may represent "a car");
compound terms of the form f(t1, …, tn) that may have one or many arguments, which can be numbers, literals or compound terms (e.g., velocity(car, 50) may stand for "the velocity of the car is 50 km/h").
The example shown in Figure 1 contains a state defined using a set of six predicates. The first three predicates declare the existence of a bus, a bus stop, and a passenger, respectively. The fourth predicate states that the passenger is at the bus stop. The fifth predicate indicates that the bus has 23 available places. The last predicate declares the fact that the passenger has been waiting at the bus stop for the last two minutes.
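Such a state can be encoded directly as a set of ground predicates. The following Python sketch illustrates one possible encoding, with predicates as flat tuples; the identifiers (b1, st1, p1) and functor names are illustrative placeholders, not necessarily those used in the figure.

```python
# A predicate is encoded as a (functor, arguments...) tuple; a state is a
# frozenset of such predicates (implicitly a conjunction of known facts).
def pred(functor, *args):
    """Build a predicate as an immutable compound term."""
    return (functor,) + args

# The six predicates of the example state (identifiers are illustrative).
state1 = frozenset({
    pred("bus", "b1"),                # a bus exists
    pred("busStop", "st1"),           # a bus stop exists
    pred("passenger", "p1"),          # a passenger exists
    pred("at", "p1", "st1"),          # the passenger is at the bus stop
    pred("capacity", "b1", 23),       # the bus has 23 available places
    pred("waiting", "p1", "st1", 2),  # the passenger has waited 2 minutes
})
```

Using tuples inside a frozenset keeps states immutable, so the same value can safely name a source or destination state of several transitions.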
The procedural knowledge is modeled as a collection of specifications of potential transitions between states. A transition specification consists of a precondition, a computation, and an action. The precondition and the action both have the same syntax as a state, i.e., they are represented as a set of predicates, with the following two differences. Variables, denoted by sequences of alphanumeric characters starting with a capital letter, are allowed in the arguments of predicates and compound terms in both the precondition and the action. The action predicates are restricted to the two reserved symbols add and remove. Intuitively, action predicates denote the procedures performed on a state when a transition is taken. The computation is an ordered list of effect-free function calls, i.e., they do not modify the state and are only used during the processing of the transition. The arguments of a function call may be numbers, literals, variables, and functions. If a function returns a value, the last argument of the function call is a variable that holds it. The result of a function call may be used as an argument in subsequent function calls of the computation or in the action.
The example in Figure 2 demonstrates the specification of a transition that allows a system to evolve from one state to the state defined in Figure 1. After matching the precondition, the computation checks whether the passenger has been waiting for less than 20 minutes and whether there is enough capacity to board the passenger; if so, it decreases the bus capacity by 1. The action removes the passenger-waiting predicate and updates the passenger location and the available bus capacity value.
We formalize the semantics of literals, compound terms and predicates by associating meanings with their symbols. For example, we use the predicate at to model the fact that a passenger is at a bus stop. We say that the meaning of at is to represent a close spatial relationship between its two arguments.
We define the semantics of our knowledge model in terms of a transition system. Formally, a transition system is a tuple (S, T, Δ), where S is a set of states, T is a set of transition specification names, and Δ is a set of state transitions (i.e., a subset of S × T × S). The fact that (s, t, s′) ∈ Δ is written as s →t s′, and represents a transition between a source state s and a destination state s′ by applying transition specification t.
In order for a transition specification to be applied, its precondition must match the source state and its computation must succeed. The semantics of matching the precondition with a state is formalized by defining logical unification between the predicates of the precondition and the predicates of the state, as follows. We unify every predicate in the precondition with the predicates of the state, and use the resulting variable substitutions in the subsequent unification of the remaining precondition predicates. If the unification of all precondition predicates with state predicates succeeds, the computed variable substitution is used in the computation and the action, as explained below. Naturally, there can be multiple matches of a precondition with a source state; in this case, every match produces a separate transition, provided that the corresponding computation succeeds.
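The matching procedure just described can be sketched in Python as follows, assuming a flat-tuple encoding of predicates and, as in the model, variables written as capitalized strings. Nested compound terms are omitted for brevity, so this is a simplified illustration rather than a full unification algorithm.

```python
def is_var(term):
    """Variables start with an upper-case letter, as in the knowledge model."""
    return isinstance(term, str) and term[:1].isupper()

def unify(pattern, fact, subst):
    """Unify one precondition predicate with one state predicate.
    Returns an extended substitution on success, or None on failure."""
    if len(pattern) != len(fact):
        return None
    s = dict(subst)
    for p, f in zip(pattern, fact):
        p = s.get(p, p) if is_var(p) else p   # apply an existing binding
        if is_var(p):
            s[p] = f                          # bind a fresh variable
        elif p != f:
            return None                       # constant mismatch (incl. functor)
    return s

def match(precondition, state, subst=None):
    """Yield every substitution that matches all precondition predicates
    against the state, trying each state predicate in turn."""
    subst = subst or {}
    if not precondition:
        yield subst
        return
    first, rest = precondition[0], precondition[1:]
    for fact in state:
        s = unify(first, fact, subst)
        if s is not None:
            yield from match(rest, state, s)
```

Each yielded substitution corresponds to one candidate transition; as stated above, a transition is produced only if the computation also succeeds under that substitution.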
The meaning of the computation is the evaluation of its functions in the order of their specification. We use denotational semantics to define the meaning of a function f as a set of ordered tuples (a1, …, an, v), where a1, …, an are the function arguments, and v is the value returned by f given those arguments. A function call is the process of finding such a v for given a1, …, an, i.e., checking that there is a tuple (a1, …, an, v) in the definition of f for some v. If no such tuple is found, the function call fails. Otherwise, the function call succeeds and the value v is assigned to the corresponding variable. If the returned value is of boolean type, i.e., it belongs to the set {true, false}, then the function succeeds if v = true and fails if v = false. A computation succeeds if all of its function calls succeed; otherwise, the computation fails.
The semantics of the transition action execution are defined by two operations. The first operation instantiates the predicates in the action by applying the computed variable substitution to them. This means that all the variables in the action predicates are replaced with the corresponding values from the variable substitution. The second operation copies all the predicates from the source state to the destination state, and then, for every predicate in the action, performs the following procedures on the destination state:
if the instantiated predicate symbol is add, then its argument is treated as a predicate, and it is added to the destination state;
if the instantiated predicate symbol is remove, then its argument is treated as a predicate, and it is removed from the destination state.
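These two operations can be sketched under the same flat-tuple encoding; the reserved symbols for adding and removing predicates are named add and remove here for concreteness, and the identifiers in the usage comment are illustrative.

```python
def substitute(term, subst):
    """Instantiate a flat predicate by replacing its variables
    with values from the computed substitution."""
    return tuple(subst.get(t, t) for t in term)

def apply_action(state, action, subst):
    """Copy the source state, then apply each instantiated action predicate.
    An action predicate is a pair (symbol, inner_predicate), where symbol
    is the reserved 'add' or 'remove'."""
    dest = set(state)                       # operation 2: copy the source state
    for symbol, inner in action:
        inner = substitute(inner, subst)    # operation 1: instantiate
        if symbol == "add":
            dest.add(inner)
        elif symbol == "remove":
            dest.discard(inner)
    return frozenset(dest)
```

For example, boarding a passenger could remove an at/waiting fact and add an in fact, given the substitution computed during precondition matching.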
2.2 The Architecture
After introducing the knowledge model, we can move on to describing KMARF itself. A high-level conceptual view of the architecture of the framework is depicted in Figure 3. The main components of KMARF are the Perception Engine, the Knowledge Base, the Reasoner, the Interpreter and the Actuation Engine.
The Knowledge Base is responsible for representing aspects of the domain under consideration (such as objects or concepts, instances and states) and their relations, in a well-defined, machine-processable syntax with unambiguous semantics. The format of the knowledge stored in the knowledge base complies with the knowledge model introduced in Section 2.1, and one of the possible formats is RDF/OWL. In addition, the knowledge base contains meta-reasoning expertise, as well as the model transformation rules described below.
The Reasoner is used for solving reasoning problems. By reasoning we mean solving problems related to planning, verification, optimization, etc. By relating a user query to meta-reasoning expertise stored in the knowledge base (meta-reasoning is reasoning about reasoning, i.e., it comprises computational processes concerned with the operation and regulation of other computational processes within the same entity [Wilson and Keil, 2001]), the Inference Engine draws a conclusion about an appropriate method or problem solver, and the relevant prior knowledge for solving a given problem. For example, if the user query is to reach a certain goal, the Inference Engine will consult the knowledge base and deduce that a Planner should be used to generate a strategy to reach that goal. Additionally, given that most planners accept planning problems in the Planning Domain Definition Language (PDDL) [Mcdermott et al., 1998] as their input, the Inference Engine looks up the corresponding model transformation rules that should be applied to formulate the problem in a format understandable by the selected planner. The Interpreter takes the generated strategy and maps it to state changes that it passes to the Actuation Engine, so that the latter can perform actuation in the real world.
Since the physical world is not entirely predictable, KMARF needs to take into consideration that the information stored in the knowledge base may change. The Perception Engine is responsible for pushing, when needed, new knowledge from the environment (i.e., predicates) into the knowledge base. Additionally, when executing the strategy, the Interpreter works tightly with the Reasoner. In case there are any changes in the expected state of the system, the Reasoner sends a replanning request to the Planner.
3 Taxonomy of ITS Domain Concepts
This section describes the objects and concepts of the ITS domain and their relation to more generic CPS domain concepts. The concepts are organized in an ontology that has multiple layers of abstraction. We design our ontology by combining high-level concepts and cross-domain relationships borrowed from three areas: CPS, Agent-Based Models (ABM), and Systems-of-Systems (SoS). The proposed ontology consists of an Upper Ontology, which contains the CPS, ABM and SoS concepts and relations, and a general ITS Domain Ontology. The general ITS Domain Ontology can be further referenced from ontologies that instantiate transport-domain-specific transitions and states, as described in Section 4.
The objective of breaking the ontology into multiple levels is twofold. First, this approach allows us to capture and isolate different levels of properties, attributes and relationships. Higher layers provide broader definitions and more abstract concepts, while lower layers are less abstract and can support specific domains and applications with concepts and relations that might not be present in the upper levels. Second, ontologies are expected to change, grow and evolve as new domains and techniques are incorporated into them [Davies et al., 2006]. Keeping the more abstract and general concepts in an upper layer, and the more specific ones in lower layers, reinforces the idea that altering the most general concepts should be avoided, making them less likely to undergo constant modifications that could lead to unnecessary changes throughout the ontology. This is important because ontologies often reuse and extend other ontologies: updating an ontology without proper care can corrupt the ontologies depending on it and, consequently, all the systems that use them.
3.1 Upper Ontology
Upper ontologies are designed to describe general concepts that can be used across all domains. They have a central role in facilitating interoperability among domain-specific ontologies, which are built hierarchically underneath the upper, generic layers and can therefore be seen as specializations of the more abstract concepts.
Figure 4 presents a subset of the proposed upper ontology. Its development was prompted by our use cases in the management and control of complex systems-of-systems, and was inspired by other ontologies such as SUMO (Suggested Upper Merged Ontology) [Niles and Pease, 2001] and the W3C SSN (Semantic Sensor Network) ontology [Compton et al., 2012].
Some important concepts defined on the proposed general ontology include System, Cyber-Physical System, Agent and CPS Agent. A System is a set of connected parts forming a complex whole that can also be used as a resource by other systems. A Cyber-Physical System is a system with both physical and computational components. They deeply integrate computation, communication and control into physical systems. An Agent is a system that can act on its own, sense the environment and produce changes in the world. When an agent is embedded into a cyber-physical system it is called a CPS Agent, or cyber-physical agent.
Important for mathematical descriptions of interrelations between systems are the elements Arc, Node and Graph: an Arc is any element of a graph that connects two Nodes, while a Graph is a set of Nodes connected by Arcs.
The concept of System can be further expanded by a number of attributes, such as Capacity, Role and Capability, which can also have relationships among them. The System itself is represented within the Declarative Knowledge as an Object. Affordance is a property that defines the tasks that can be done on a specific System, while Capability defines the set of tasks the system can perform. Systems can also have Constraints, which in turn are related to KPIs that are used to measure whether such constraints are satisfied.
The higher level of the proposed ontology also provides definitions of, and relationships between, the main Knowledge Base concepts: Declarative and Procedural Knowledge. In our knowledge model, a Transition is a Procedural Knowledge concept that determines how to achieve a certain state (Action) given that an agent observes a particular state (Precondition) as being true in the world and there is an ordered list of effect-free function calls in that state (Computation). Meanwhile, both Precondition and Action have a Predicate Set that is directly related to the concept of State from the Declarative Knowledge. The Goal State, which is a specialization of State, is related to the concepts of Task and Workflow from the Procedural Knowledge. A Workflow is defined as a sequence of Tasks, and a Task in turn is defined by a sequence of Goal States assigned to a single Agent. Figure 5 presents the main elements of the knowledge base modeling.
3.2 ITS Domain Ontology
With the support of the presented upper ontology, in this section we propose an ITS domain-specific ontology, depicted in Figure 6. One of the central concepts within the ITS domain is the Transport Agent, which extends Agent from the upper ontology. The Transport Agent encompasses agents that are capable of transporting some entity, ranging from physical goods to virtual data. Some important concepts from the upper layers that apply to the Transport Agent include Dynamics and Capacity, among others. Transport Agents are in turn strongly related to the abstract concept of Transportation Mode, which defines the type of transportation scenario (e.g., Roads, Rail, Telco).
Another important concept is the Transportation Infrastructure which encompasses all elements required by a Transportation Mode, such as Routes, Tracks and Transportation Networks. Most elements within the Transportation Infrastructure are extensions of Graph, Arc and Node, abstract concepts from the Upper ontology. Therefore, by using high level graph definitions it is possible to define most of the transportation infrastructure in an ITS Domain. A node inside the transportation infrastructure is referred to as a POI (Point of Interest) and it can be any desired location within the Transportation Network (e.g., a crossing, a specific point in the route, coordinate, a warehouse, a bus stop). A Traffic Semaphore is modeled as a generic Actuator that is used to control and regulate traffic and it can be applied in any transportation scenario.
A Transportable Entity encompasses any element that can be transported by a Transport Agent, such as regular Cargo or network Data. A typical Passenger is also a Transportable Entity and extends the upper ontology concept of Human.
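The subclass relations described in this section can be illustrated with a small, hand-encoded fragment. The concept names follow the ontology, but the flat dictionary encoding, and the particular choice of parents for CPS Agent and Passenger, are our simplification of the layered OWL model.

```python
# Illustrative fragment of the subclass hierarchy: child -> list of parents.
SUBCLASS_OF = {
    "CyberPhysicalSystem": ["System"],
    "Agent": ["System"],
    "CPSAgent": ["Agent", "CyberPhysicalSystem"],
    "TransportAgent": ["Agent"],
    "Passenger": ["TransportableEntity", "Human"],
    "POI": ["Node"],
    "TrafficSemaphore": ["Actuator"],
}

def ancestors(concept):
    """All transitive superclasses of a concept (a simple reachability walk)."""
    seen, stack = set(), [concept]
    while stack:
        for parent in SUBCLASS_OF.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen
```

Queries such as "is a Transport Agent a System?" then reduce to a membership test on the transitive closure, which is essentially what a semantic reasoner does over the rdfs:subClassOf relation.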
4 Prototype Implementation
This section describes current progress towards a prototype implementation of the KMARF architecture illustrated in Figure 3. The implementation targets a large problem area in ITS known as "transportation planning", which we define as generating schedules for a set of vehicles to pick up and drop off people or cargo along one or more routes, within a given amount of time (see also Section 1). The transport planning problem includes a set of connected vehicles, for example buses or trucks, and a central coordinating function that computes the schedule and transmits it to these vehicles (correct interpretation of the schedule rests with the vehicles, which can be partially or completely autonomous, or may have human drivers). In this implementation we assume that the Inference Engine component has already deduced that a Planner should be used to solve the transportation planning problem, using meta-reasoning expertise and user query data supplied by the Knowledge Base and the user interface components, respectively.
Figure 7 shows the components of the implemented system. One of the components is the Knowledge Base, which contains model transformation rules for the PDDL language, as well as state & transition models that are based on the structure defined in Section 3 and contain the information for the particular transport planning problem. The models are described using semantic web technologies, are based on the W3C Web Ontology Language (OWL) [Hitzler et al., 2012] and are stored in the Turtle format [Beckett et al., 2014]. The other component is the PDDL Generator, which, given the transformation rules and the state & transition files as input, generates problem and domain files in the PDDL language. The PDDL Generator is implemented in Java [James et al., 2015] and uses Apache Jena [Apache Foundation, 2016] for parsing the Turtle-formatted input from the knowledge base. Additionally, Eclipse Jetty [Eclipse Foundation, 2016] provides a Representational State Transfer (REST) API for triggering PDDL file generation and for defining custom state and transition models. More specifically:
The API allows human experts (for example, knowledge engineers) to specify a transport logistics problem by adding a new initial and goal state to the knowledge base, in the form of a state file, and a set of transitions with precondition, computation and action parts, in the form of a transition file. These two files are jointly used by the PDDL Generator software component in order to generate a new schedule. The state file defines the agents, vehicles and routes, contains information about the initial state of the system (e.g., the location of agents on the route, the route and its waypoints, the time required for vehicles to travel a route, etc.) and defines goal conditions (e.g., all agents are serviced). The transition file describes intermediate transitions that are used by the planner to reach the specified goal state from the initial state. An example of such a plan can be found at [KMARF authors, 2016].
The API also provides means for triggering the generation of new PDDL problem and domain files given the above input, on request of a human or another system. Typically this request is created by a customer (e.g., a human operator, or an automated fleet management system). Once generated, the files are assigned Universal Resource Identifiers (URIs). An external system can subsequently perform Hypertext Transfer Protocol (HTTP) GET requests using the URI references to retrieve the files. An example of such a system is a PDDL solver (in its current form, the API does not support adding new model transformation rules, which means that only the PDDL language is supported; in the future, however, we plan to expand the functionality with support for "pluggable" problem-solving expertise files, both for PDDL and Prolog). For this implementation, we use a third-party solver named "OPTIC", originally developed by Benton et al. [Benton et al., 2012].
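As a rough illustration of the generation step, the sketch below renders a minimal PDDL problem file from predicate sets. It is not the actual KMARF PDDL Generator (which is written in Java and driven by model transformation rules), and the domain name, object names and predicates in the usage example are invented.

```python
def pddl_problem(name, domain, objects, init, goal):
    """Render a minimal PDDL problem file from an initial state and a goal,
    both given as sets of flat (functor, args...) predicate tuples."""
    def fmt(p):
        # (at, p1, st1) -> "(at p1 st1)", the Lisp-style PDDL syntax
        return "(" + " ".join(str(t) for t in p) + ")"
    return "\n".join([
        f"(define (problem {name}) (:domain {domain})",
        "  (:objects " + " ".join(objects) + ")",
        "  (:init " + " ".join(fmt(p) for p in init) + ")",
        "  (:goal (and " + " ".join(fmt(p) for p in goal) + ")))",
    ])
```

A solver such as OPTIC would consume the resulting problem file together with a matching domain file (derived from the transition specifications) to produce a plan.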
The authors have released the current implementation of the Knowledge Base and PDDL Generator as open source, available for the community to use [KMARF authors, 2016]. Currently, there is no component to support interaction with the real world, either for triggering the planning process or for actuating real-world connected devices (e.g., buses or sensors) upon execution of the generated plan; this is, however, planned work. In its current state, the implementation can be used for rapid prototyping of transportation planning functions. In addition to the software itself, the "Upper", "ITS" and "PDDL model transformation rules" ontologies, as well as a set of common reusable transition and state ontologies, are provided in the Turtle format.
5 Evaluation
In this section, we evaluate the implemented system in terms of reusability. Given the practical limitation of not having a real or simulated testbed of connected vehicles as described in the previous section, it is not yet possible to evaluate some aspects of the system (e.g., efficiency, performance) under realistic conditions. What we describe instead is an evaluation of the benefits this system brings to the "cost of design" (COD). We define COD as the effort required for formalizing the knowledge needed for the system to start the automated transport schedule generation process. Naturally, there is a direct relationship between the amount of knowledge to be formalized and the effort required: the greater the amount of non-formalized transport-plan-related knowledge, the greater the required effort (e.g., in terms of time, human resource allocation, or money). Table 1 shows the different aspects of knowledge required for the transportation plan. As described in Section 3, we view the transport network as a graph, with Points of Interest (POIs) as vertices and POI-connecting roads as edges. Note that the table references "transportable entities"; these can be passengers or cargo, depending on the use case. An interesting observation can be made regarding transitions, which are similar regardless of the route, the number and type of transport agents (e.g., buses or trucks) and the transportable entities. Therefore, reusing these transitions across different transport planning use cases, and storing them as part of the "transition library" (see Figure 7), can potentially reduce COD.
|Knowledge aspect|Description|
|---|---|
|Route Specification|Specification of the route graph(s), including vertices (coordinates), edges (roads) and edge-traversal cost metrics (e.g., time to travel, fuel spent)|
|Transport Agents|Number of vehicles, vehicle IDs (e.g., VIN codes) and vehicle capacity|
|Transportable Entities|Number and IDs of transportable entities|
|Starting Conditions|Where the vehicles are located and where the passengers are located|
|Transition Definitions|The transitions the transport vehicles perform (including preconditions, optionally computations, and actions). Three actions are currently available in the library: pickup-agent, drop-agent, move-to-next-coordinate|
|Goal Conditions|Final state: the criteria that need to be satisfied for the transport service to conclude on the specified route (usually, all agents are picked up and off-boarded at specific parts of the route)|
To measure gains in COD, we have defined a simple metric we named the "reusability index": the ratio of reused entities to the total number of entities in the knowledge input given to the PDDL Generator to generate the plan. For the bus use case this ratio was 0.364, and for the truck use case it was 0.251. This means that, out of the total number of entities created for the bus and truck use cases, 36.4% and 25.1% respectively were already available in the library. We observe that both indices are significant, and the difference between the two is mainly attributed to the difference in the route specification entities, as the agents in the two cases followed different routes. One observation made a posteriori to our measurements is that the reusability index could be higher if the route specification were produced by the PDDL Generator itself, which could automatically generate the specification using data from a mapping service in conjunction with a routing library. The knowledge engineer would then only specify the desired waypoints (e.g., the bus stops), and the routes and graphs would be generated automatically by the PDDL Generator. We are currently evaluating different mapping services, such as Google Maps and OpenStreetMap, and open-source routing libraries, such as GraphHopper and DirectionsService, for their applicability in our implementation.
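The reusability index reduces to a simple set ratio, as the sketch below shows; the entity names in the usage example are invented for illustration (the transition names are the three library actions mentioned in Table 1).

```python
def reusability_index(required_entities, library_entities):
    """Fraction of the entities required for a use case that are already
    available in the reusable library (the paper reports 0.364 and 0.251
    for the bus and truck use cases, respectively)."""
    required = set(required_entities)
    return len(required & set(library_entities)) / len(required)

# Hypothetical example: 3 of the 4 required entities already exist
# in the transition library, so the index is 0.75.
index = reusability_index(
    ["pickup-agent", "drop-agent", "move-to-next-coordinate", "route-r1"],
    ["pickup-agent", "drop-agent", "move-to-next-coordinate"],
)
```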
6 Related Work
A small number of frameworks exist that support the formalization of declarative and procedural knowledge. In [Vaquero et al., 2011]
the authors present an overview of tools and methods in the KEPS (Knowledge Engineering for Planning and Scheduling) area. They classify the process of knowledge engineering into six phases: Requirements Specification, Knowledge Modeling, Model Analysis, Deploying the Model to a Planner, Plan Synthesis, and Plan Analysis and Post-Design. They then provide a list of tools and methods used in the literature at each of those phases. When commenting on these tools, the authors note that, at that time, none of them distinguished the knowledge encapsulated in the planning problem from the knowledge of the surrounding domain, which makes it harder to reuse knowledge in other domains. In contrast, the goal of KMARF is precisely to support the reuse of knowledge, by relying on a common knowledge model across different domains.
KEWI [Wickler et al., 2015] is a knowledge engineering tool that has been designed to help formalize the procedural knowledge used in planning problems. The idea is to enable domain experts to encode knowledge themselves, rather than relying on knowledge engineers. The conceptual model of KEWI consists of three main parts: an ontology for describing the entities that occur in the captured domain, a model of primitive rules (called actions) that can be executed by the system, and high-level methods for accomplishing complex tasks. KEWI is specifically developed for planning problems, whereas our framework targets different classes of reasoning problems that share the same underlying state representation, as formalized in our knowledge model.
The itSIMPLE tool was first proposed in Vaquero et al. [Vaquero et al., 2005] for supporting the creation of generic planning systems by integrating domain and procedural knowledge, while automatically generating PDDL [Mcdermott et al., 1998] files. They propose using UML for modeling domain knowledge and Petri Nets for modeling the procedural knowledge regarding feasible state transitions, while using XML as an internal language. The tool has since evolved [Vaquero et al., 2012] and has been demonstrated in several cross-domain use cases, such as petroleum plant operations planning [Sette et al., 2008].
When developing a framework that combines knowledge engineering with multiple declarative and procedural reasoning classes (such as semantic reasoning and planning), one important choice is the language (or languages) used to model problems and the information required to solve them. In [Anis et al., 2014] the authors compare three classical languages for modeling problems in cyber-physical production systems: Prolog, Timed Automata [Alur and Dill, 1994] and PDDL. Each language has its pros and cons, and each suits different reasoning problems. In KMARF we provide a “glue” knowledge model that, depending on the reasoning problem to be solved, is intended to be translated into other languages, such as Prolog, Timed Automata or PDDL.
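To make the idea of a “glue” model concrete, consider the following hypothetical sketch, where a single list of ground facts is rendered either as a PDDL initial state or as Prolog clauses. The relation and object names are illustrative only and are not taken from KMARF's actual knowledge model:

```python
# Hypothetical sketch: one shared state representation emitted as PDDL
# facts or as Prolog facts, depending on the target reasoning engine.
# Relation/object names are illustrative, not KMARF's actual model.

state = [
    ("at", "truck1", "depot"),       # truck1 is located at the depot
    ("loaded", "truck1", "cargo7"),  # cargo7 is loaded on truck1
]

def to_pddl(facts):
    """Render the facts as a PDDL :init section."""
    body = "\n  ".join("({} {} {})".format(*f) for f in facts)
    return "(:init\n  {}\n)".format(body)

def to_prolog(facts):
    """Render the same facts as Prolog ground clauses."""
    return "\n".join("{}({}, {}).".format(*f) for f in facts)
```

The point of such a translation layer is that the state itself is modeled once, while each target language only needs a small, mechanical serializer.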
Another related field is the study of cognitive architectures: frameworks that specify the underlying infrastructure for an intelligent system and include the aspects of a cognitive agent that are constant over time and across application domains [Langley et al., 2009]. As such, they often include memory mechanisms for storing and processing different types of knowledge, e.g., procedural, semantic and episodic knowledge. This knowledge forms the basis for action selection, which can be reactive, based on planning, or a hybrid of the two. There are many cognitive architectures [Samsonovich, 2010], each with its own strategies and constraints for planning, acting, sensing and knowledge management. Two of the best-known are SOAR [Laird, 2012] and CLARION [Licato et al., 2014]. In SOAR, for instance, procedural knowledge is represented by if-then
rules in a manner somewhat similar to the STRIPS (Stanford Research Institute Problem Solver)/PDDL formalism, while semantic knowledge about the world is represented in a graph held in working memory. CLARION, on the other hand, employs a hybrid approach, integrating a rule-based system with artificial neural networks. KMARF has a number of similarities to cognitive architectures. For instance, KMARF aims to improve reusability across different domains, which is also a natural feature of biologically-based cognitive architectures. Another similarity lies in knowledge representation, since most cognitive architectures also organize knowledge into declarative and procedural memory. Finally, intelligent agents based on cognitive architectures are also known for performing a range of reasoning tasks, such as planning and verification.
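The STRIPS-style formalism mentioned above can be sketched in a few lines: an action is a triple of preconditions, an add list and a delete list, applied to a set of ground facts. This is a generic illustration of the formalism, not SOAR's actual rule syntax, and the 'drive' action is a hypothetical example of our own:

```python
# Minimal STRIPS-style action applied to a set of ground facts.
# Illustrative only; SOAR productions and PDDL actions are richer.

def apply(action, state):
    """Apply an action if its preconditions hold; return the new state, else None."""
    pre, add, delete = action
    if not pre <= state:              # every precondition must be in the state
        return None
    return (state - delete) | add     # remove deleted facts, add new ones

# Hypothetical 'drive' action: truck1 moves from the depot to the port.
drive = (
    {("at", "truck1", "depot")},      # preconditions
    {("at", "truck1", "port")},       # add list
    {("at", "truck1", "depot")},      # delete list
)

state = {("at", "truck1", "depot")}
new_state = apply(drive, state)       # {("at", "truck1", "port")}
```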
In our effort to give KMARF the capability to reason about problems in different domains, we have included in it a CPS-based ontology that describes the relationships among CPS concepts. Using ontologies to increase generality is a well-known approach, and it improves KMARF’s capability to reuse knowledge across domains. One example of related work covering Multi-Agent Systems (MAS) and CPS is [Lin et al., 2010], which describes a MAS-based semantic modeling approach to CPS in the water distribution domain. The goal was to dynamically integrate information from sensor networks with semantic services to support real-time decision-making. Two challenges were addressed when designing the ontology: accurately modeling and integrating the physical and cyber components, and pinpointing inter-dependencies between CPS components.
In this paper, we have presented concepts and strategies for creating a model that represents knowledge from the CPS and ITS domains in a machine-readable form, as well as KMARF, a generic framework for knowledge management and automated reasoning that integrates this knowledge model. The motivation behind the knowledge model and KMARF is to demonstrate how knowledge and methods can be reused across different domains to reduce the costs of developing, deploying and operating such systems. We have illustrated the usage of parts of KMARF with an implementation of a transport planner generator, which can be used to rapidly prototype schedulers for vehicle transport fleets.
There are currently many abstractions in the literature for the design of complex real-world systems, including Cyber-Physical Systems, Agent-Based Models, and Systems-of-Systems engineering, each accompanied by its own formalisms and theory. Our vision is that we can better support linking cross-domain use case applications by integrating the common elements of those formalisms via our knowledge model.
In the implementation of KMARF we have so far developed an ontology for the knowledge base using the OWL Web Ontology Language, and we have used the framework to automatically generate PDDL files, feed them into a planner, and create plans. Since KMARF has a much broader vision than solving planning problems, in the future we plan to study how our knowledge model can be related to other formalisms, e.g., Timed Automata.
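One step in such an ontology-to-PDDL pipeline is emitting ontology instances, grouped by class, as typed objects in a PDDL problem file. The following is a hypothetical sketch of that step only; the class and instance names are our own and are not taken from KMARF's actual ontology or generator:

```python
# Hypothetical sketch of one ontology-to-PDDL step: instances grouped
# by ontology class become typed objects in a PDDL problem file.
# Class and instance names are illustrative, not KMARF's ontology.

instances = {
    "vehicle":  ["truck1", "truck2"],
    "location": ["depot", "port"],
}

def pddl_objects(by_class):
    """Render class-grouped instances as a PDDL :objects section."""
    lines = ["(:objects"]
    for cls, names in sorted(by_class.items()):
        lines.append("  {} - {}".format(" ".join(names), cls))
    lines.append(")")
    return "\n".join(lines)
```

In the real pipeline the grouping would be read from the OWL knowledge base (e.g., via class membership queries) rather than hard-coded, but the serialization step stays this simple.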
As future work, we also plan to study how meta-reasoning can help KMARF determine which prior knowledge and algorithms are relevant when a new or unforeseen problem instance arrives. Such an instance corresponds to the current state of the world, along with all currently available sensory information. Initially, the system only knows how to solve problems it has seen before, for which it had previously found reductions that could be solved separately using known procedures. If it cannot find such a reduction for a new instance, it must resort to state-space exploration to generate a sequence of state transitions that either leads to the specified goal state, or to a better state in which the system either knows how to proceed with further reductions or declares the problem intractable under its current knowledge.
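The fallback strategy described above can be sketched as follows: try a known reduction first, otherwise explore the state space breadth-first until reaching the goal or a state with a known reduction. This is a simplified sketch under our own assumptions (reductions modeled as a lookup table of ready-made plans), not KMARF's planned meta-reasoning design:

```python
# Hypothetical sketch of the fallback strategy: known reductions first,
# then breadth-first state-space exploration, else give up.
from collections import deque

def solve(start, goal, reductions, successors):
    """reductions: dict mapping known states to ready-made plans.
    successors: function from a state to an iterable of (action, next_state)."""
    if start in reductions:
        return reductions[start]            # a known procedure applies directly
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal or state in reductions:
            return plan + reductions.get(state, [])
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                             # intractable under current knowledge
```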
- [Alur and Dill, 1994] Alur, R. and Dill, D. L. (1994). A theory of timed automata. Theoretical Computer Science, 126(2):183 – 235.
- [Anis et al., 2014] Anis, A., Schäfer, W., and Niggemann, O. (2014). A comparison of modeling approaches for planning in Cyber Physical Production Systems. In Proceedings of the 2014 IEEE Emerging Technology and Factory Automation (ETFA), pages 1–8.
- [Apache Foundation, 2016] Apache Foundation (2016). Apache Jena: A free and open source Java framework for building Semantic Web and Linked Data applications.
- [Baral, 2003] Baral, C. (2003). Knowledge Representation, Reasoning, and Declarative Problem Solving. Cambridge University Press, New York, NY, USA.
- [Beckett et al., 2014] Beckett, D., Berners-Lee, T., Prud'hommeaux, E., and Carothers, G. (2014). RDF 1.1 Turtle: Terse RDF Triple Language. W3C Recommendation.
- [Benton et al., 2012] Benton, J., Coles, A., and Coles, A. (2012). Temporal Planning with Preferences and Time-Dependent Continuous Costs. In International Conference on Automated Planning and Scheduling.
- [Compton et al., 2012] Compton, M., Barnaghi, P., Bermudez, L., Garcia-Castro, R., Corcho, O., Cox, S., Graybeal, J., Hauswirth, M., Henson, C., Herzog, A., Huang, V., Janowicz, K., Kelsey, W. D., Phuoc, D. L., Lefort, L., Leggieri, M., Neuhaus, H., Nikolov, A., Page, K., Passant, A., Sheth, A., and Taylor, K. (2012). The SSN ontology of the W3C semantic sensor network incubator group. Web Semantics: Science, Services and Agents on the World Wide Web, 17:25 – 32.
- [Davies et al., 2006] Davies, J., Studer, R., and Warren, P. (2006). Semantic Web technologies: trends and research in ontology-based systems. John Wiley & Sons, Chichester, West Sussex, PO19 8SQ, England.
- [Eclipse Foundation, 2016] Eclipse Foundation (2016). Jetty: Open-Source Servlet Engine and HTTP Server.
- [Ghallab et al., 2014] Ghallab, M., Nau, D., and Traverso, P. (2014). The actor’s view of automated planning and acting: A position paper. Artificial Intelligence, 208:1 – 17.
- [Hitzler et al., 2012] Hitzler, P., Krötzsch, M., Parsia, B., Patel-Schneider, P. F., and Rudolph, S. (2012). OWL 2 Web Ontology Language Primer (Second Edition). W3C Recommendation.
- [IEEE Standards Association, 2010] IEEE Standards Association (2010). 802.11 p-2010 - IEEE standard for information technology - Local and metropolitan area networks – Specific requirements – Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 6: Wireless Access in Vehicular Environments.
- [Gosling et al., 2015] Gosling, J., Joy, B., Steele, G., Bracha, G., and Buckley, A. (2015). The Java Language Specification: Java SE 8 Edition. Oracle.
- [Karagiannis et al., 2011] Karagiannis, G., Altintas, O., Ekici, E., Heijenk, G., Jarupan, B., Lin, K., and Weil, T. (2011). Vehicular networking: A survey and tutorial on requirements, architectures, challenges, standards and solutions. IEEE Communications Surveys Tutorials, 13(4):584–616.
- [KMARF authors, 2016] KMARF authors (2016). Prototype PDDL Generator Public Repository. https://github.com/SSCIPaperSubmitter/ssciPDDLPlanner.
- [Laird, 2012] Laird, J. (2012). The Soar cognitive architecture. MIT Press, Cambridge, Mass. ; London, England.
- [Langley et al., 2009] Langley, P., Laird, J. E., and Rogers, S. (2009). Cognitive architectures: Research issues and challenges. Cognitive Systems Research, 10(2):141–160.
- [Licato et al., 2014] Licato, J., Sun, R., and Bringsjord, S. (2014). Structural representation and reasoning in a hybrid cognitive architecture. In 2014 International Joint Conference on Neural Networks (IJCNN), pages 891–898.
- [Lin et al., 2010] Lin, J., Sedigh, S., and Miller, A. (2010). Modeling cyber-physical systems with semantic agents. In Computer Software and Applications Conference Workshops (COMPSACW), 2010 IEEE 34th Annual, pages 13–18.
- [Mcdermott et al., 1998] Mcdermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., and Wilkins, D. (1998). PDDL - The Planning Domain Definition Language. Technical report, CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control.
- [Niles and Pease, 2001] Niles, I. and Pease, A. (2001). Towards a Standard Upper Ontology. In Proceedings of the International Conference on Formal Ontology in Information Systems - Volume 2001, FOIS ’01, pages 2–9, New York, NY, USA. ACM.
- [Samsonovich, 2010] Samsonovich, A. V. (2010). Toward a unified catalog of implemented cognitive architectures. In Proceedings of the 2010 Conference on Biologically Inspired Cognitive Architectures 2010: Proceedings of the First Annual Meeting of the BICA Society, pages 195–244, Amsterdam, The Netherlands, The Netherlands. IOS Press.
- [Sette et al., 2008] Sette, F. M., Vaquero, T. S., Park, S. W., and Silva, J. R. (2008). Are Automated Planners up to Solve Real Problems? IFAC Proceedings Volumes, 41(2):15817–15824.
- [Vaquero et al., 2011] Vaquero, T. S., Silva, J. R., and Beck, J. C. (2011). A brief review of tools and methods for knowledge engineering for planning & scheduling. In Proceedings of the ICAPS Workshop on Knowledge Engineering for Planning and Scheduling (KEPS), pages 7–14, Freiburg, Germany.
- [Vaquero et al., 2012] Vaquero, T. S., Tonaco, R., Costa, G., Tonidandel, F., Silva, J. R., and Beck, J. C. (2012). itSIMPLE4.0: Enhancing the modeling experience of planning problems. In System Demonstration–Proceedings of the 22nd International Conference on Automated Planning & Scheduling (ICAPS-12), pages 11–14.
- [Vaquero et al., 2005] Vaquero, T. S., Tonidandel, F., and Silva, J. R. (2005). The itSIMPLE tool for modeling planning domains. In Proceedings of the First International Competition on Knowledge Engineering for AI Planning, Monterey, California, USA.
- [Wickler et al., 2015] Wickler, G., Chrpa, L., and McCluskey, T. L. (2015). Ontological Support for Modelling Planning Knowledge, pages 293–312. Springer International Publishing, Cham.
- [Wilson and Keil, 2001] Wilson, R. A. and Keil, F. C. (2001). The MIT encyclopedia of the cognitive sciences. MIT press.