1 Introduction
Research in knowledge representation (KR) faces two major problems. First, a large variety of different languages for representing knowledge, each of them useful for particular types of problems, has been produced. There are many situations where the integration of the knowledge represented in diverse formalisms is crucial, and principled ways of achieving this integration are needed. Secondly, most of the tools providing reasoning services for KR languages were developed for offline usage: given a knowledge base (KB), computation is one-shot, triggered by a user through a specific query or a request to compute, say, an answer set. This is the right thing for applications where a specific answer to a particular problem instance is needed at a particular point in time. However, there are other kinds of applications where a reasoning system is continuously online and receives information about a particular system it observes. Consider an assisted living scenario where people in need of support live in an apartment equipped with various sensors, e.g., smoke detectors, cameras, and body sensors measuring relevant body functions (e.g., pulse, blood pressure). A reasoning system continuously receives sensor information. The task is to detect emergencies (health problems, forgotten medication, an overheating stove, …) and trigger adequate reactions (e.g., turning off the electricity, calling the ambulance, ringing an alarm). The system is continuously online and has to process a continuous stream of information rather than a fixed KB.
This poses new challenges for KR formalisms. Most importantly, the available information continuously grows. This obviously cannot go on forever, as the KB needs to be kept to a manageable size. We thus need principled ways of forgetting or disregarding information. In the literature one often finds sliding-window techniques [13], where information is kept for a specific, predefined period of time and forgotten when it falls out of this time window. We believe this approach is far too inflexible. What is needed is a dynamic, situation-dependent way of determining whether information needs to be kept or can be given up. Ideally, we would like our online KR system to guarantee specific response times; although it may be very difficult to come up with such guarantees, it is certainly necessary to find means to identify and focus on relevant parts of the available information. Moreover, although the definition of the semantics of the underlying KR formalism remains essential, we also need to impose procedural aspects reflecting the necessary modifications of the KB. This leads to a new, additional focus on runs of the system, rather than single evaluations.
Nonmonotonic multi-context systems (MCSs) [5] were explicitly developed to handle the integration problem. In a nutshell, an MCS consists of reasoning units, called contexts for historical reasons [15], where each unit can be connected with other units via so-called bridge rules. The collection of bridge rules associated with a context specifies additional beliefs the context is willing to accept depending on what is believed by connected contexts. The semantics of the MCS is then defined in terms of equilibria. Intuitively, an equilibrium is a collection of belief sets, one for each context, which fit together in the sense that the beliefs of each context adequately take into account what the other contexts believe.
The original framework was aimed at modeling the flow of information among contexts; consequently, the addition of information to a context was the only possible operation on KBs. To capture more general forms of operations, MCSs were later generalized to so-called managed MCSs (mMCSs) [7]. The main goal of this paper is to demonstrate that this additional functionality makes managed MCSs particularly well-suited as a basis for handling the mentioned problems of online reasoning systems as well. The main reason is that the operations on the knowledge bases allow us to control things like KB size, the handling of inconsistent observations, the focus of attention, and even whether a particular context should be idle for some time.
However, to turn mMCSs into a reactive online formalism we first need to extend the framework to accommodate observations. We will do so by generalizing bridge rules so that they have access not only to belief sets of other contexts, but also to sensor data. This allows systems to become reactive, that is, to take information about a dynamically changing world into account and to modify themselves to keep system performance up.
The rest of the paper is organized as follows. We first give the necessary background on mMCSs. We then extend the framework to make it suitable for dynamic environments; in particular, we show how observations can be accommodated, and we define the notion of a run of an MCS based on a sequence of observations. The subsequent sections address the following issues: handling time and the frame problem; dynamic control of KB size; focus of attention; control of computation (idle contexts). We finally discuss the complexity of some important decision problems.¹

¹ The paper is based on preliminary ideas described in the extended abstract [4] and in [12]. However, the modeling techniques as well as the formalization presented here are new. A key difference in this respect is the handling of sensor data by means of bridge rules.
2 Background: Multi-Context Systems
We now give the necessary background on managed MCSs [7], which provide the basis for our paper. We present a slightly simplified variant of mMCSs here, as this allows us to better highlight the issues relevant for this paper. However, if needed it is rather straightforward (albeit technically somewhat involved) to extend all our results to the full version of mMCSs. More specifically, we make two restrictions: 1) we assume each context has a single logic rather than a logic suite as in [7]; 2) we assume that management functions are deterministic.
In addition we slightly rearrange the components of an mMCS, which makes them easier to use for our purposes. In particular, we keep bridge rules and knowledge bases separate from their associated contexts. The latter will change dynamically during updates, as we will see later, and it is thus convenient to keep them separate. Bridge rules are separated for technical reasons, namely a better presentation of the notion of a run introduced later.
An mMCS builds on an abstract notion of a logic L as a triple L = (KB_L, BS_L, ACC_L), where KB_L is the set of admissible knowledge bases (KBs) of L, which are sets of KB-elements (“formulas”); BS_L is the set of possible belief sets, whose elements are beliefs; and ACC_L : KB_L → 2^{BS_L} is a function describing the semantics of L by assigning to each KB a set of acceptable belief sets.
Definition 1
A context is of the form C = (L, OP, mng), where

L is a logic,

OP is a set of operations,

mng : 2^{OP} × KB_L → KB_L is a management function.

For an indexed context C_i we will write L_i, OP_i, and mng_i to denote its components.
Definition 2
Let C = ⟨C_1, …, C_n⟩ be a tuple of contexts. A bridge rule for C_i over C (1 ≤ i ≤ n) is of the form

(1)  op ← a_1, …, a_j, not a_{j+1}, …, not a_m

such that op ∈ OP_i and every a_k (1 ≤ k ≤ m) is an atom of form (c:b), where c ∈ {1, …, n}, and b is a belief for C_c, i.e., b ∈ S for some S ∈ BS_{L_c}.

For a bridge rule r, the operation hd(r) = op is the head of r, while bd(r) = {a_1, …, a_j, not a_{j+1}, …, not a_m} is the body of r.
Definition 3
A managed multi-context system (mMCS) M = ⟨C, BR, KB⟩ is a triple consisting of

a tuple of contexts C = ⟨C_1, …, C_n⟩,

a tuple BR = ⟨br_1, …, br_n⟩, where each br_i is a set of bridge rules for C_i over C,

a tuple of KBs KB = ⟨kb_1, …, kb_n⟩ such that kb_i ∈ KB_{L_i}.
A belief state B = ⟨B_1, …, B_n⟩ for M consists of belief sets B_i ∈ BS_{L_i}, 1 ≤ i ≤ n. Given a bridge rule r, an atom (c:b) is satisfied by B if b ∈ B_c, and a negated atom not (c:b) is satisfied by B if b ∉ B_c. A literal is an atom or a negated atom. We say that r is applicable wrt. B, denoted by B ⊨ bd(r), if every literal in bd(r) is satisfied by B. We use app_i(B) = {hd(r) | r ∈ br_i, B ⊨ bd(r)} to denote the heads of all applicable bridge rules of context C_i wrt. B.
The semantics of an mMCS M is then defined in terms of equilibria, where an equilibrium is a belief state B = ⟨B_1, …, B_n⟩ satisfying the following condition: the belief set chosen for each context must be acceptable for the KB obtained by applying the management function to the heads of applicable bridge rules and the KB associated with the context. More formally, B is an equilibrium if, for 1 ≤ i ≤ n,

B_i ∈ ACC_i(mng_i(app_i(B), kb_i)).
Management functions allow us to model all sorts of modifications of a context’s KB and thus make mMCSs a powerful tool for describing the influence contexts can have on each other.
3 Reactive MultiContext Systems
To make an mMCS suitable for reactive reasoning in dynamic environments, we have to accomplish two tasks:

- we must provide means for the MCS to obtain information provided by sensors, and

- we have to formally describe the behavior of the MCS over time.
Let us first show how sensors can be modeled abstractly. We assume that a sensor is a device which is able to provide new information in a given language specific to the sensor. From an abstract point of view, we can identify a sensor Π with its observation language L_Π and a current sensor reading π, that is, Π = (L_Π, π) where π ⊆ L_Π. Given a tuple of sensors Π = ⟨Π_1, …, Π_k⟩, an observation for Π (observation for short) Obs = ⟨π_1, …, π_k⟩ consists of a sensor reading for each sensor, that is, π_i ⊆ L_{Π_i} for 1 ≤ i ≤ k.
Each context must have access to its relevant sensors. Contexts already have means to obtain information from outside, namely the bridge rules. This suggests that the simplest way to integrate sensors is via an extension of the bridge rules: we will assume that bridge rule bodies can refer not only to contexts, but also to sensors.
Definition 4

A reactive multi-context system (rMCS) over sensors Π = ⟨Π_1, …, Π_k⟩ is a triple M = ⟨C, BR, KB⟩ as in Definition 3, except that the atoms of bridge rules of form (1) may additionally be sensor atoms of form (s:b), where s ∈ {1, …, k} and b ∈ L_{Π_s}.
The applicability of bridge rules now also depends on an observation:
Definition 5
Let Π be a tuple of sensors and Obs = ⟨π_1, …, π_k⟩ an observation. A sensor atom (s:b) is satisfied by Obs if b ∈ π_s; a literal not (s:b) is satisfied by Obs if b ∉ π_s.
Let M = ⟨C, BR, KB⟩ be an rMCS with sensors Π and B a belief state for M. A bridge rule r in br_i is applicable wrt. B and Obs, symbolically B ⊨_Obs bd(r), if every context literal in bd(r) is satisfied by B and every sensor literal in bd(r) is satisfied by Obs.
Instead of app_i(B) we use app_i(B, Obs) = {hd(r) | r ∈ br_i, B ⊨_Obs bd(r)} to define an equilibrium of an rMCS in a similar way as for an mMCS:
Definition 6
Let M = ⟨C, BR, KB⟩ be an rMCS with sensors Π and Obs an observation. A belief state B = ⟨B_1, …, B_n⟩ for M is an equilibrium of M under Obs if, for 1 ≤ i ≤ n,

B_i ∈ ACC_i(mng_i(app_i(B, Obs), kb_i)).
Definition 7
Let M be an rMCS with sensors Π, Obs an observation, and B = ⟨B_1, …, B_n⟩ an equilibrium of M under Obs. The tuple of KBs generated by B is defined as KB^B = ⟨mng_1(app_1(B, Obs), kb_1), …, mng_n(app_n(B, Obs), kb_n)⟩. The pair ⟨B, KB^B⟩ is called a full equilibrium of M under Obs.
We now introduce the notion of a run of an rMCS induced by a sequence of observations:
Definition 8
Let M = ⟨C, BR, KB⟩ be an rMCS with sensors Π and O = (Obs^0, Obs^1, …) a sequence of observations. A run of M induced by O is a sequence of pairs R = (⟨B^0, KB^0⟩, ⟨B^1, KB^1⟩, …) such that

- ⟨B^0, KB^0⟩ is a full equilibrium of M under Obs^0,

- for i > 0, ⟨B^i, KB^i⟩ is a full equilibrium of ⟨C, BR, KB^{i-1}⟩ under Obs^i.
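The iteration behind a run can be sketched as follows. This is a simplification, not the general construction: it assumes bridge-rule bodies query sensors only (not other contexts) and the single-belief-set logic from above, so each step has a unique full equilibrium whose generated KB simply becomes the context KB of the next step. All names are illustrative.

```python
# A sketch of a run (one context): fire sensor-guarded bridge rules on the
# current observation, apply the management function, and feed the generated
# KB into the next step. Rule encoding: (head, positive_atoms, negated_atoms),
# sensor atoms are (sensor_index, value).

def step(kb, bridge_rules, obs, mng):
    heads = {head for head, pos, neg in bridge_rules
             if all(b in obs[s] for s, b in pos)
             and all(b not in obs[s] for s, b in neg)}
    return mng(heads, kb)

def run(kb, bridge_rules, observations, mng):
    """Return the sequence of generated KBs (= belief sets) along the run."""
    states = []
    for obs in observations:
        kb = step(kb, bridge_rules, obs, mng)
        states.append(kb)
    return states

# Example: a context recording the latest stove status from sensor 0.
mng = lambda heads, kb: {f for _, f in heads} or kb   # keep old KB if no input
rules = [(("add", "stove_on"), [(0, "switch_on")], []),
         (("add", "stove_off"), [(0, "switch_off")], [])]
obs_seq = [{0: {"switch_on"}}, {0: set()}, {0: {"switch_off"}}]
print(run({"stove_off"}, rules, obs_seq, mng))
# [{'stove_on'}, {'stove_on'}, {'stove_off'}]
```

Note how the middle step, with an empty observation, keeps the previous state: the system evolves from its own generated KBs, not only from fresh input.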
To illustrate the notion of a run, let us discuss a simple example. We want to model a clock which allows other contexts to add time stamps to the sensor information they receive. We consider two options. We first show how a clock can be realized which generates time internally, by increasing a counter whenever a new equilibrium is computed. We later discuss a clock based on a sensor having access to “objective” time. In both cases we use integers as time stamps.
Example 1
Consider a context whose KBs (and belief sets) each consist of a single time stamp. Assume the single bridge rule of the context has an increment operation as head and an empty body, which intuitively says that time should be incremented whenever an equilibrium is computed. The management function is defined accordingly: applied to the increment operation and the current time stamp, it yields the KB containing the incremented time stamp. Since the computation of the (full) equilibrium is independent of any other contexts and observations, the context just increments its current time whenever a new equilibrium is computed. Each run of an rMCS containing this context will thus contain, for this context, an ever-increasing sequence of time stamps. The example illustrates that the system may evolve over time even if no observation is made at all.
It is illustrative to compare this with a context which is like the one we just discussed, except that the bridge rules are now the instances of a schema whose body checks for the current time stamp and whose head increments it; the management function correspondingly replaces the current time stamp with the incremented one. Note that in this case no equilibrium exists! The reason is that replacing the current time stamp with its successor removes the precondition of the very rule sanctioning this operation. Special care thus needs to be taken when defining the operations.
In the rest of the paper we often use an alternative approach where “objective” time is entered into the system by a particular time sensor. In this case each update of the system makes the current time available to each context via the sensor reading of this time sensor.
In Example 1 we already used a bridge rule schema, that is, a bridge rule in which some parts are described by parameters (denoted by uppercase letters). We admit such schemata to allow for more compact representations. A bridge rule schema is just a convenient abbreviation for the set of its ground instances. The ground instances are obtained by replacing parameters by adequate ground terms. We admit parameters for integers representing time, but also for formulas and even contexts. In most cases it will be clear from the discussion what the ground instances are; in other cases we will define them explicitly. We will also admit some basic arithmetic in the bridge rules and assume the operations to be handled by grounding, as is usual, say, in answer set programming. For instance, the bridge rule schema
which we will use to handle the frame problem in the next section, has one ground instance for each time point.
Although in principle parameters for integers lead to an infinite set of ground instances, in our applications only ground instances up to the current time (or current time plus a small constant, see Sect. 6) are needed, so the instantiations of time points remain finite.
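The finiteness argument above can be made concrete with a small sketch: a time-parameterized schema is instantiated only for time points up to the current time plus a small lookahead. The encoding of the schema (a frame-style rule copying a property one step forward) is illustrative, not the paper's notation.

```python
# Grounding a time-parameterized bridge-rule schema: instances are generated
# only for T up to the current time (plus an optional small lookahead), so
# the instantiation stays finite. Rule encoding as in the sketches above.

def ground_instances(prop, now, lookahead=0):
    """Instances of the schema: add(prop@T+1) <- (c : prop@T)."""
    return [(("add", (prop, t + 1)), [("c", (prop, t))], [])
            for t in range(now + lookahead + 1)]

insts = ground_instances("door_open", 2)
print(len(insts))   # 3 instances, for T = 0, 1, 2
print(insts[0])     # (('add', ('door_open', 1)), [('c', ('door_open', 0))], [])
```

Re-grounding after each step (with the new current time) keeps the rule set in step with the run without ever materializing an infinite set of instances.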
In the upcoming sections we describe different generic modeling techniques for rMCSs. For concrete applications, these techniques can be refined and tailored towards the specific needs of the problem domain at hand. To demonstrate this aspect, we provide a more specific example from an assisted living application.
Example 2
Although Bob suffers from dementia, he is able to live in his own apartment, as it is equipped with an assisted living system that we model by means of an rMCS. Assume Bob starts to prepare his meal. He leaves the kitchen to go to the bathroom. After that, he forgets that he has been cooking, goes to bed, and falls asleep. The rMCS should be able to recognize a potential emergency based on the data of different sensors in the flat that monitor, e.g., the state of the kitchen equipment and track Bob's position.
Our rMCS has three contexts and three sensors. The kitchen equipment context monitors Bob's stove. Its formulas and beliefs are atoms representing the stove's power status (on/off) and a qualitative value for its temperature (cold/hot). Its logic has a very simple semantics in which every knowledge base has exactly one accepted belief set, coinciding with the formulas of the knowledge base. The bridge rules of this context react to switching the stove on or off, as registered by a stove sensor, and read numerical temperature values from a temperature sensor, classifying them as cold or hot. The management function ensures that the stove is considered on when it is switched on, or when it is not being switched off and is already considered on in the old knowledge base; otherwise, the KB constructed by the management function states that the stove is off. A second context keeps track of Bob's position: nonempty readings of a position sensor signal when Bob has changed rooms, and a bridge rule schema passes the new position on to the context.
The management function of the position context writes Bob's new position into the KB whenever he changes rooms and keeps the previous position otherwise. The third context detects emergencies. It is implemented as an answer-set program, hence its acceptable belief sets are the answer sets of its KBs. Its bridge rules do not refer to sensor data but query the other two contexts:
The answer-set program of this context derives an alarm from the queried information. Its management function temporarily adds the information from the bridge rules as input facts to the context's KB.
Consider a sequence of observations reflecting the scenario above: the stove is switched on, the temperature rises, and Bob successively moves from the kitchen to the bathroom and the bedroom. The resulting run contains full equilibria in which the kitchen context believes the stove is on and hot, the position context tracks Bob's current room, and, once Bob has been away from the active stove for too long, the emergency context derives an alarm.
4 Handling sensor data
In this section we discuss how to model an rMCS where possibly inconsistent sensor data are integrated into a context. To this end, we add a time tag to the sensor information and base our treatment of time on the second option discussed in the last section, that is, we assume a specific time sensor whose reading is the actual time, an integer time stamp.
Let the context have, in addition to the time sensor, a number of sensors providing relevant information. The context then has, for each of these sensors, a bridge rule whose head is an add-operation carrying the sensor value, the current time, and the identity of the sensor; the operation is meant to add new, time-tagged information to the context.
We assume the readings of a single sensor at a particular time point to be consistent. However, it is a common problem that the readings of different sensors may be inconsistent with each other wrt. some context-dependent notion of inconsistency. To handle this we foresee a management function that operates based on a total preference ranking of the available sensors. The third argument of the add-operation provides information about the source of sensor information and thus a way of imposing preferences on the information to be added. Without loss of generality, assume the sensors are ordered by decreasing priority, that is, the first sensor has the highest priority.
Now consider the set of add-operations in the heads of bridge rules applicable in a given belief state. The management function processes these operations sensor by sensor, in order of decreasing priority: the time-tagged information of a sensor is added to the KB whenever it is consistent with the information of higher-priority sensors that has already been added; otherwise it is discarded.
This shows how the management function can solve conflicts among inconsistent sensor readings based on preferences among the sensors. Of course, many more strategies for integrating inconsistent sensor data can be thought of, which we are not going to discuss in this paper. Note also that the bridge rules do not necessarily have to pass sensor information on to the context as is. They may as well provide the context with some abstraction of the actual readings; for instance, numerical temperature information may be transformed into qualitative information, as in Example 2.
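The priority-based merge just described can be sketched as follows. The conflict test used here (two facts clash if they assign different values to the same name) is a stand-in for the context-dependent notion of inconsistency, and all names are illustrative.

```python
# A sketch of the preference-based management function: add-operations are
# processed in order of sensor priority (lower index = higher priority); a
# time-tagged fact is added only if consistent with facts already accepted.

def merge_by_priority(add_ops, kb):
    """add_ops: list of (fact, time, sensor); a fact is a (name, value) pair."""
    accepted = []
    for fact, t, s in sorted(add_ops, key=lambda op: op[2]):
        name, value = fact
        # reject a fact that contradicts a higher-priority fact already added
        if not any(n == name and v != value for (n, v), _, _ in accepted):
            accepted.append((fact, t, s))
    return set(kb) | {(f, t) for f, t, _ in accepted}

ops = [(("temp", "cold"), 5, 2),      # low-priority sensor 2
       (("temp", "hot"), 5, 1),       # high-priority sensor 1 wins on "temp"
       (("pos", "kitchen"), 5, 2)]    # no conflict, accepted
new_kb = merge_by_priority(ops, set())
print(sorted(new_kb))   # keeps ("temp","hot") and ("pos","kitchen"), drops "cold"
```

Swapping the sort key, or the conflict predicate, yields other integration strategies without touching the bridge rules.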
We next present a way to address the frame problem using bridge rules when sensors are not guaranteed to provide complete information about the state of the environment in each step. In this case we want to assume, at least for some of the observed atoms or literals, which we call persistent, that an observation valid at time t also holds at time t+1.
Assume some observable property is persistent. Its persistence is achieved by the following bridge rule schema:
Please note that, in order to avoid the nonexistence of equilibria discussed at the end of Sect. 3, the use of this rule schema for the frame problem presupposes that the information valid at time t remains available and is not deleted by any other bridge rule.
5 Selective forgetting and data retention
To illustrate our approach, we discuss in this section a context which can be used for emergency detection in dynamic environments. Assume there is a set of potential emergencies we want the context to handle. The role of the context is to check, based on the observations made, whether one or more of the emergencies are suspected or confirmed. Based on information about potential emergencies, the context adjusts the time span for which observations are kept. This is the basis for intelligent forgetting based on dynamic windows.
We do not make any assumption about how the context works internally, apart from the following:

- it may signal that an emergency is suspected or confirmed,

- it has information about default and actual window sizes for the different observations, and

- it knows for how many time steps observations are relevant for particular emergencies.
Given facts of the forms mentioned above, here is a possible collection of bridge rules for the task. A set-window operation sets the window size to a new value, deleting the old one. To signal an alarm, information is added to the context KB via an add-operation.
Finally, we have to make sure that deletions of observations are performed in accordance with the determined window sizes:

The management function just performs additions and deletions on the context KB. Since additions are always tagged with the current time, whereas deletions always refer to an earlier time, there can never be a conflict.
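The window-based deletion can be sketched in a few lines. The representation of observations and the window map are illustrative assumptions; the point is only that the window size is data, so a suspected emergency can widen it at run time.

```python
# A sketch of selective forgetting with dynamic windows: each observation
# kind has a window size; observations older than their window are deleted.

def forget(kb, now, window):
    """kb: set of (kind, value, time); window: kind -> size in time steps."""
    return {(k, v, t) for (k, v, t) in kb if now - t < window.get(k, 0)}

kb = {("temp", "hot", 1), ("temp", "hot", 7), ("pos", "kitchen", 7)}
default = {"temp": 3, "pos": 3}
focused = dict(default, temp=10)    # a suspected emergency widens "temp"

print(sorted(forget(kb, 8, default)))   # drops the old temperature reading
print(sorted(forget(kb, 8, focused)))   # keeps it while the emergency is suspected
```

Resetting the entry in the window map back to its default models the return to normal operation once the suspicion is dropped.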
We have so far described a form of focusing where a time window is extended based on a specific suspected event. The next example shows a different form of focusing, where specific information is generated and kept only while there is a potential danger in a particular room.
Example 3
Continuing Example 2, we show how the emergency detection context can focus on specific rooms if there is a potential emergency. For the kitchen, there is a threat if the stove is on, and it then becomes important to track whether someone is in the kitchen. Assume the context has a potential belief expressing since when the stove has been on. Focusing on the kitchen can be modeled by the following ASP rule in the context's KB:
In addition we need a bridge rule which keeps track of whether Bob is absent from a room, in case that room is in the current focus:
as well as a bridge rule to forget the absence information for a room when it is no longer necessary. There, the delAll operator removes all occurrences of absence information with respect to a given room from the KB of the context.
With these modifications it is possible to generate an alert if Bob stays away from the kitchen for too long while the stove is active.
6 Control of computation
In this section we show how it is possible, at least to some extent, to control the effort spent on the computation of particular contexts. We introduce a specific control context which decides whether a context it controls should be idle for some time. An idle context just buffers the sensor data it receives, but does not use the data for any other computations.
Let us illustrate this by continuing the discussion of Sect. 5. Assume there are several different contexts for detecting potential emergencies, as described earlier. The rMCS we are going to discuss is built on an architecture where each detector context is connected via bridge rules with the control context. The control context receives information about suspected emergencies and decides, based on this information, whether it is safe to let a context be idle for some time.
We now address the question of what it means for a detector context to be idle. A detector context receives relevant observations in order to reason about whether an emergency is suspected or confirmed. In case the context is idle, we cannot simply forget about new sensor information, as it may become relevant later on, but we can buffer it so that it does not have an effect on the computation of a belief set, apart from the fact that buffered information shows up as additional atoms in the belief set which do not appear anywhere in the context's background knowledge.
To achieve this, we modify the original bridge rules of a detector context by adding, to the body of each rule, a context literal expressing that the control context does not currently consider this context idle. The bridge rules then behave exactly as before whenever the control context does not decide to let the context be idle.
For the case where the context is idle, i.e., where the belief set of the control context says so, we just make sure that observations are buffered. This means that for each original bridge rule adding a time-tagged observation, we add a variant which, whenever the context is idle, buffers the observation instead. The buffering operation just adds an atom wrapping the observation to the context (we assume here that the language of the context contains constructs of this form). As mentioned above, this information is not used anywhere in the rest of the context's KB; it just sits there for later use.
The only missing piece is a bridge rule bringing back information from the buffer when the context is no longer idle; it triggers an operation for emptying the buffer. Whenever the management function has to execute this operation, it takes all information out of the buffer, checks whether it is still within the relevant time window, and if so adds it to the KB, handling potential inconsistencies in the way discussed in Sect. 4.
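The buffer-and-wake-up behaviour can be sketched as a small class. The class, method names, and the wrapped-atom representation are illustrative; the window check on wake-up mirrors the forgetting of Sect. 5.

```python
# A sketch of the idle/buffer mechanism: while a detector context is idle,
# sensor input is stored in a buffer instead of being processed; on wake-up
# the buffer is emptied into the KB, discarding facts outside the window.

class Detector:
    def __init__(self, window):
        self.kb = set()        # processed, time-tagged observations
        self.buffer = []       # raw input stored while idle
        self.window = window   # how many steps an observation stays relevant

    def observe(self, fact, t, idle):
        if idle:
            self.buffer.append((fact, t))   # buffer instead of process
        else:
            self.kb.add((fact, t))

    def wake_up(self, now):
        """Empty the buffer: re-add only facts still inside the time window."""
        for fact, t in self.buffer:
            if now - t < self.window:
                self.kb.add((fact, t))
        self.buffer.clear()

d = Detector(window=5)
d.observe("smoke", 1, idle=True)
d.observe("smoke", 8, idle=True)
d.wake_up(now=10)          # the reading from time 1 has expired
print(sorted(d.kb))        # [('smoke', 8)]
```

In the rMCS, `idle` would come from the control context's belief set and `wake_up` would be the management operation triggered by the buffer-emptying bridge rule.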
The control context uses formulas expressing that a context is idle until a specific time. We intend here to give a proof of concept, not a sophisticated control method. For this reason we simply assume the control context lets a detector context be idle for a specific constant time span whenever the detector does not suspect an emergency. This is achieved by the following bridge rule schemata:
Provided information of this form is kept until the actual time has passed it, the last two conditions in the second rule schema guarantee that after being idle for the given period the context must check at least once whether some emergency is suspected. To avoid a context staying idle forever, we assume the management function deletes idleness information whenever its time stamp is smaller than the current time minus 1. One more rule schema makes sure that information about idle contexts is available in the form used by the detector contexts:
7 Complexity
We want to analyze the complexity of queries on runs of rMCSs. For simplicity we do not consider parametrized bridge rules here, and assume that all knowledge bases in rMCSs are finite and all management functions can be evaluated in polynomial time.
Definition 9
The existential problem, respectively the universal problem, is deciding whether, for a given rMCS M with sensors, a context of M, a belief for that context, and a finite sequence of observations, the belief is contained in the belief set of that context at some step of some run, respectively of all runs, of M induced by the observations.
As the complexity of an rMCS depends on that of its individual contexts, we introduce the notion of context complexity along the lines of Eiter et al. [10]. To do so, we need to focus on relevant parts of belief sets by means of projection. Intuitively, among all beliefs we only need to consider the belief that we want to query and the beliefs that contribute to the application of bridge rules for deciding the two query problems. Given an rMCS, a queried context and belief, and a sequence of observations as in Definition 9, the set of relevant beliefs for a context consists of the queried belief together with all beliefs occurring in the bodies of bridge rules. A projected belief state restricts each belief set of a belief state to these relevant beliefs. The context complexity of a context wrt. a fixed observation is the complexity of deciding whether, for a given projected belief state, there is some belief state for the context that coincides with it on the relevant beliefs and satisfies the equilibrium condition. The system's context complexity is a (smallest) upper bound for the context complexity classes of its contexts. Our complexity results are summarized in Table 1.
Membership for the two query problems: a nondeterministic Turing machine can guess a projected belief state for all observations in the sequence in polynomial time. Then, iteratively for each of the consecutive observations, the context problem can be solved polynomially or using an oracle (the guess of the projected belief state and the oracle guess can be combined, which explains why we stay on the same complexity level for higher context complexity). If the answer is ‘yes’, the guess is a projected equilibrium. We can check whether the queried belief occurs in it, compute the updated knowledge bases, and continue the iteration until reaching the last observation. The argument is similar for the co-problem of the universal variant. Hardness holds by a reduction from deciding equilibrium existence for an MCS when the context complexity is polynomial, and by a reduction from the context complexity problem for the other results. Note that both problems are undecidable if we allow for infinite observation sequences. The reason is that rMCSs are expressive enough (even with very simple context logics) to simulate a Turing machine, such that deciding either problem for infinite runs solves the halting problem.
8 Discussion
In this paper we introduced reactive MCSs, an extension of managed MCSs for online reasoning, and showed how they allow us to handle typical problems arising in this area. Although we restricted our discussion to deterministic management functions, two sources of nondeterminism can be spotted by the attentive reader. On the one hand, we allow for semantics that return multiple belief sets for the same knowledge base, and, on the other hand, nondeterminism can be introduced through bridge rules.
The simplest example is guessing via positive support cycles, e.g., using a bridge rule that adds a formula whenever that formula is believed, which allows (under the standard interpretation of add) for belief sets with and without the formula. Multiple equilibria may lead to an exponential number of runs. In practice, nondeterminism will have to be restricted. A simple yet practical solution is to focus on a single run, disregarding alternative equilibria. Here, one might ask which is the best full equilibrium to proceed with. In this respect, it makes sense to differentiate between nondeterministic contexts and nondeterminism due to bridge rules. In the first case, it is reasonable to adopt the point of view of the answer-set programming (ASP) paradigm, i.e., the knowledge bases of a context can be seen as an encoding of a problem such that the resulting belief sets correspond to the problem solutions. Hence, as every belief set is a solution, it does not matter which one to choose. If the problem to be solved is an optimisation problem that has better and worse solutions, this can be handled by choosing a context formalism able to express preferences, so that the semantics only returns sufficiently good solutions. For preferences between equilibria that depend on the belief sets of multiple contexts, one cannot rely on intra-context preference resolution. Here, we refer the reader to preference functions as proposed by Ellmauthaler [12]. One might also adopt language constructs for expressing preferences in ASP, such as optimization statements [14] or weak constraints [9]. Essentially, these assign a quality measure to an equilibrium. With such additional quality measures at hand, the best equilibrium can be chosen for the run.
As to related work, there is by now quite some literature on MCSs; for an overview see [6]. Recently, an interesting approach to belief change in MCSs has been proposed [18]. Other related work concerns stream reasoning in ASP [13] and in databases: a continuous version of SPARQL [3] exists, and logical considerations about continuous query languages [19] have been investigated. Kowalski's logic-based framework for computing [17] is an approach which utilizes first-order logic and concepts of the situation and event calculus in response to observations. Updates on knowledge bases, based upon the outcome of a given semantics, were also facilitated for other formalisms, such as logic programming in general; there, the iterative approaches of EPI [11] and EVOLP [1] are the most prominent. Note that none of these related approaches combines a solution to both knowledge integration and online reasoning, as we do. The idea of updates to the knowledge base was also formalised for database systems [2].
For a related alternative approach using an operator for directly manipulating KBs without contributing to the current equilibrium, we refer to the work by Gonçalves, Knorr, and Leite [16].
References

[1] José Júlio Alferes, Antonio Brogi, João Alexandre Leite, and Luís Moniz Pereira, ‘Evolving logic programs’, in 8th European Conference on Logics in Artificial Intelligence (JELIA 2002), eds., Sergio Flesca, Sergio Greco, Nicola Leone, and Giovambattista Ianni, volume 2424 of Lecture Notes in Computer Science, pp. 50–61. Springer, (September 2002).
[2] Chitta Baral, Jorge Lobo, and Goce Trajcevski, ‘Formal characterizations of active databases: Part II’, in 5th International Conference on Deductive and Object-Oriented Databases (DOOD 1997), eds., François Bry, Raghu Ramakrishnan, and Kotagiri Ramamohanarao, volume 1341 of Lecture Notes in Computer Science, pp. 247–264. Springer, (1997).
[3] D. F. Barbieri, D. Braga, S. Ceri, E. D. Valle, and M. Grossniklaus, ‘C-SPARQL: a continuous query language for RDF data streams’, International Journal of Semantic Computing, 4(1), 3–25, (2010).
[4] G. Brewka, ‘Towards reactive multi-context systems’, in 12th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2013), pp. 1–10, (2013).
[5] G. Brewka and T. Eiter, ‘Equilibria in heterogeneous nonmonotonic multi-context systems’, in AAAI’07, pp. 385–390, (2007).
[6] G. Brewka, T. Eiter, and M. Fink, ‘Nonmonotonic multi-context systems: A flexible approach for integrating heterogeneous knowledge sources’, in Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning, 233–258, Springer, (2011).
[7] G. Brewka, T. Eiter, M. Fink, and A. Weinzierl, ‘Managed multi-context systems’, in IJCAI’11, pp. 786–791, (2011).
[8] Gerhard Brewka, Stefan Ellmauthaler, and Jörg Pührer, ‘Multi-context systems for reactive reasoning in dynamic environments’, in 21st European Conference on Artificial Intelligence (ECAI 2014), (2014). To appear.
[9] F. Buccafurri, N. Leone, and P. Rullo, ‘Strong and weak constraints in disjunctive datalog’, in 4th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 1997), pp. 2–17, (1997).
 [10] T. Eiter, M. Fink, P. Schüller, and A. Weinzierl, ‘Finding explanations of inconsistency in multicontext systems’, in Proc. KR’10, (2010).
 [11] Thomas Eiter, Michael Fink, Guiliana Sabbatini, and Hans Tompits, ‘A framework for declarative update specifications in logic programs’, in Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2001), ed., Bernhard Nebel, pp. 649–654. Morgan Kaufmann, (2001).
[12] S. Ellmauthaler, ‘Generalizing multi-context systems for reactive stream reasoning applications’, in Proceedings of the 2013 Imperial College Computing Student Workshop (ICCSW 2013), pp. 17–24, (2013).
 [13] M. Gebser, T. Grote, R. Kaminski, P. Obermeier, O. Sabuncu, and T. Schaub, ‘Stream reasoning with answer set programming: Preliminary report’, in Proceedings of the 13th International Conference on the Principles of Knowledge Representation and Reasoning (KR 2012), (2012).
 [14] M. Gebser, R. Kaminski, B. Kaufmann, M. Ostrowski, T. Schaub, and S. Thiele, A user’s guide to gringo, clasp, clingo, and iclingo, Potassco Team, 2010.
 [15] F. Giunchiglia and L. Serafini, ‘Multilanguage hierarchical logics or: How we can do without modal logics’, Artif. Intell., 65(1), 29–70, (1994).
[16] R. Gonçalves, M. Knorr, and J. Leite, ‘Evolving multi-context systems’, in 21st European Conference on Artificial Intelligence (ECAI 2014), (2014). To appear.
 [17] R. A. Kowalski and F. Sadri, ‘Towards a logicbased unifying framework for computing’, CoRR, abs/1301.6905, (2013).
[18] Y. Wang, Z. Zhuang, and K. Wang, ‘Belief change in nonmonotonic multi-context systems’, in 12th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2013), pp. 543–555, (2013).
 [19] C. Zaniolo, ‘Logical foundations of continuous query languages for data streams’, in 2nd International Workshop on Datalog in Academia and Industry (Datalog 2.0), pp. 177–189, (2012).