1 Introduction
Time is a particularly important aspect of cooperative environments. In many "real-life" computer applications, activities have a temporal duration (and can even be interrupted), and the coordination of such activities has to take this timeliness property into consideration. The interacting actors are mutually influenced by their actions, meaning that each actor reacts according to the timing and quantitative aspects of the other actors' behavior, and vice versa. In fact, these interactions can often be related to quantities to be measured or minimized/maximized, in order to take actions depending on these scores: consider, for example, some generic communicating agents that need to take decisions on a (monetary) cost or a (fuzzy) preference for a shared resource. They both need to coordinate through time-dependent and preference-based decisions.
A practical example of such agents corresponds, for instance, to software agents that need to negotiate a service-level agreement on a resource, or a service, with time-related side-conditions. A fitting example is given by auction schemes, where the seller/bidder agents need to agree on a preference for a given prize (e.g., a monetary cost). At the same time, the agents have to respect some timeout and alarm events, respectively representing the absence and the presence of bids for the prize (for instance). The language we present in this paper is well suited for this kind of interaction, as Section 5 shows with examples.
The Timed Concurrent Constraint Programming (tccp), a timed extension of the pure formalism of Concurrent Constraint Programming (ccp) [Saraswat (1989)], has been introduced in [de Boer et al. (2000)]. The language is based on the hypothesis of bounded asynchrony [Saraswat et al. (1996)]: computation takes a bounded period of time rather than being instantaneous as in the concurrent synchronous languages ESTEREL [Berry and Gonthier (1992)], LUSTRE [Halbwachs et al. (1991)], SIGNAL [le Guernic et al. (1991)] and Statecharts [Harel (1987)]. Time itself is measured by a discrete global clock, i.e., the internal clock of the tccp process. In [de Boer et al. (2000)] the authors also introduced timed reactive sequences, which describe the reaction of a tccp
process to the input of the external environment, at each moment in time. Formally, such a reaction is a pair of constraints ⟨c, d⟩, where c is the input and d is the constraint produced by the process in response to c.

Soft constraints [Bistarelli (2004), Bistarelli et al. (1997)] extend classical constraints to represent multiple consistency levels, and thus provide a way to express preferences, fuzziness, and uncertainty. The ccp framework has been extended to work with soft constraints [Bistarelli et al. (2006)], and the resulting framework is named Soft Concurrent Constraint Programming (sccp). With respect to ccp, in sccp the tell and ask agents are equipped with a preference (or consistency) threshold, which is used to determine their success, failure, or suspension, as well as to prune the search; preferences need to be satisfied only as well as possible, which makes it possible to deal with over-constrained problems. We adopt soft constraints instead of crisp ones, since classical constraints show evident limitations when trying to represent real-life scenarios, where the knowledge is neither completely available nor crisp.
In this paper, we introduce a timed and soft extension of ccp that we call Timed Soft Concurrent Constraint Programming (tsccp), inheriting from both tccp and sccp at the same time. In tsccp, we directly introduce a timed interpretation of the usual programming constructs of sccp, by identifying a time-unit with the time needed for the execution of a basic sccp action (ask and tell), and by interpreting action prefixing as the next-time operator. An explicit timing primitive is also introduced in order to allow for the specification of timeouts. The parallel operator of tsccp is first interpreted in terms of maximal parallelism, as in [de Boer et al. (2000)]. Secondly, we also consider a different paradigm, where the parallel operator is interpreted in terms of interleaving, while still assuming maximal parallelism for actions depending on time. In other words, time passes for all the parallel processes involved in a computation. This approach, analogous to the one adopted in [de Boer et al. (2004)], is different from that of [de Boer et al. (2000), Bistarelli et al. (2008)] (where maximal parallelism was assumed for any kind of action), and it is also different from the one considered in [Busi et al. (2000)], where time does not elapse for timeout constructs. This can be accomplished by allowing all the actions depending only on time to run concurrently with at most one action manipulating the store.
The paper extends the results in [Bistarelli et al. (2008)] by providing a new semantics that allows maximal parallelism for time elapsing and an interleaving model for basic computation steps (see Section 7). This new language is called tsccp with interleaving, i.e., tsccpi, to distinguish it from the version allowing maximal parallelism of all actions. According to the maximal parallelism policy (applied, for example, in the original works [Saraswat (1989)] and [Saraswat et al. (1994)]), at each moment every enabled agent of the system is activated, while in the interleaving paradigm only one of the enabled agents is executed instead. This second paradigm is more realistic if we consider limited resources, since it does not imply the existence of an unbounded number of processors. However, in [de Boer et al. (2000)] it is shown that the notion of maximal parallelism is more expressive than the notion of interleaving parallelism of other concurrent constraint languages. The presence of maximal parallelism can force the computation to discard some (non-enabled) branches which could become enabled later on (because of the information produced by parallel agents), while this is not possible when considering an interleaving model. Therefore, tsccp is sensitive to delays in adding constraints to the store, whereas this is not the case for ccp and tsccpi.
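The difference between the two scheduling policies can be illustrated with a toy scheduler. This is only an intuition-building sketch: agents are modelled as lists of pending store updates and the store as a set of tokens, an encoding of ours that abstracts away constraints entirely.

```python
# Toy illustration of the two scheduling policies: maximal parallelism
# fires every enabled agent in one clock cycle; interleaving fires one.
def maximal_parallelism_step(agents, store):
    """Every enabled agent fires in the same clock cycle."""
    for a in agents:
        if a:
            store |= {a.pop(0)}
    return store

def interleaving_step(agents, store):
    """Only one enabled agent fires per clock cycle."""
    for a in agents:
        if a:
            store |= {a.pop(0)}
            break
    return store

# Two agents, each wanting to add one token to the store.
agents = [['c1'], ['c2']]
s = maximal_parallelism_step([list(a) for a in agents], set())
assert s == {'c1', 'c2'}        # both fire in the same cycle
s = interleaving_step([list(a) for a in agents], set())
assert s == {'c1'}              # one fires; 'c2' must wait a cycle
```

Under interleaving, the second agent's token reaches the store one cycle later, which is exactly the delay-sensitivity discussed above.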
The rest of the paper is organized as follows: in Section 2 we summarize the most important background notions and frameworks from which tsccp derives, i.e. tccp and sccp. In Section 3 we present the tsccp language, and Section 4 describes the operational semantics of tscc agents. Section 5 explains in more detail the programming idioms, such as timeout and interrupt, exemplifies the use of timed paradigms in the tscc language and shows an application example on modeling an auction interaction among several bidders and a single auctioneer. Section 6 describes the denotational semantics for tsccp, and proves the correctness of the denotational model with the aid of connected reactive sequences. Section 7 explains the semantics for interleaving with maximal parallelism of time-elapsing actions (i.e. the tsccpi language), while Section 8 describes a timeline for the execution of three parallel agents in tsccpi. Section 9 describes the denotational semantics of tsccpi and proves the correctness of the denotational model. Section 10 reports the related work and, finally, Section 11 concludes by also indicating future research.
2 Background
2.1 Soft Constraints
A soft constraint [Bistarelli et al. (1997), Bistarelli (2004)] may be seen as a constraint where each instantiation of its variables has an associated value from a partially ordered set, which can be interpreted as a set of preference values. Combining constraints then has to take such additional values into account, and thus the formalism also has to provide suitable operations for combination (×) and comparison (+) of tuples of values and constraints. This is why this formalization is based on the concept of c-semiring [Bistarelli et al. (1997), Bistarelli (2004)], called just semiring in the rest of the paper.
Semirings.
A semiring is a tuple ⟨A, +, ×, 0, 1⟩ such that: i) A is a set and 0, 1 ∈ A; ii) + is commutative, associative and 0 is its unit element; iii) × is associative, distributes over +, 1 is its unit element and 0 is its absorbing element. A c-semiring is a semiring ⟨A, +, ×, 0, 1⟩ such that: + is idempotent, 1 is its absorbing element and × is commutative. Let us consider the relation ≤ over A such that a ≤ b iff a + b = b. Then it is possible to prove that (see [Bistarelli et al. (1997)]): i) ≤ is a partial order; ii) + and × are monotone on ≤; iii) 0 is its minimum and 1 its maximum; iv) ⟨A, ≤⟩ is a complete lattice (a complete lattice is a partially ordered set in which all subsets have both a supremum and an infimum) and, for all a, b ∈ A, a + b = lub(a, b) (where lub is the least upper bound).
Moreover, if × is idempotent, then: + distributes over ×; ⟨A, ≤⟩ is a complete distributive lattice and × is its glb (greatest lower bound). Informally, the relation ≤ gives us a way to compare semiring values and constraints. In fact, when we have a ≤ b, we will say that b is better than a. In the following, when the semiring is clear from the context, the order will often be indicated simply by ≤.
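For concreteness, two classical c-semiring instances from the literature, the weighted and the fuzzy semirings, can be sketched executably. The class and method names below are ours, not part of the formalism.

```python
import math

# A c-semiring kept as its two operations and two constants; the carrier
# set is left implicit. This is an illustrative sketch, not a definition
# from the paper.
class CSemiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

    def leq(self, a, b):
        """a <= b  iff  a + b == b  (i.e., b is better than a)."""
        return self.plus(a, b) == b

# Weighted semiring <N ∪ {+inf}, min, +, +inf, 0>: lower cost is better.
weighted = CSemiring(min, lambda a, b: a + b, math.inf, 0)

# Fuzzy semiring <[0,1], max, min, 0, 1>: higher preference is better.
fuzzy = CSemiring(max, min, 0.0, 1.0)

assert weighted.leq(7, 3)            # cost 3 is better than cost 7
assert weighted.leq(5, weighted.one) # cost 0 (the semiring 1) is the best
assert fuzzy.leq(0.4, 0.9)           # preference 0.9 beats 0.4
```

Note how the single definition a ≤ b iff a + b = b specializes to "lower is better" for weighted costs (+ is min) and "higher is better" for fuzzy preferences (+ is max).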
Constraint System.
Given a semiring S = ⟨A, +, ×, 0, 1⟩ and an ordered set of variables V over a finite domain D, a soft constraint is a function which, given an assignment η : V → D of the variables, returns a value of the semiring. Using this notation, C = {c | c : (V → D) → A} is the set of all possible constraints that can be built starting from S, D and V.
Any function in C involves all the variables in V, but we impose that it depends on the assignment of only a finite subset of them. So, for instance, a binary constraint c over variables x and y is a function c : (V → D) → A, but it depends only on the assignment of the variables {x, y} ⊆ V (the support of the constraint, or scope). Note that cη[v := d] means cη′, where η′ is η modified with the assignment v := d (that is, the modification operator has precedence over application). Note also that cη is the application of a constraint function c : (V → D) → A to a function η : V → D; what we obtain is a semiring value cη = a.
The partial order ≤ over A can be easily extended among constraints by defining c1 ⊑ c2 iff c1η ≤ c2η, for each possible η.
Combining and projecting soft constraints.
Given the set C, the combination function ⊗ : C × C → C is defined as (c1 ⊗ c2)η = c1η × c2η (see also [Bistarelli et al. (1997), Bistarelli (2004), Bistarelli et al. (2006)]). Informally, performing the ⊗ between two constraints means building a new constraint whose support involves all the variables of the original ones, and which associates with each tuple of domain values for such variables a semiring element which is obtained by multiplying the elements associated by the original constraints to the appropriate sub-tuples.
Given a constraint c ∈ C and a variable v ∈ V, the projection [Bistarelli et al. (1997), Bistarelli (2004), Bistarelli et al. (2006)] of c over V − {v}, written c ⇓_(V−{v}), is the constraint c′ s.t. c′η = Σ_{d ∈ D} cη[v := d]. Informally, projecting means eliminating some variables from the support. This is done by associating with each tuple over the remaining variables a semiring element which is the sum of the elements associated by the original constraint to all the extensions of this tuple over the eliminated variables.
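The ⊗ and ⇓ operations can be sketched executably for the weighted semiring, where × is the arithmetic + and the semiring sum is min. The dictionary encoding of constraints (tuples over the support mapped to costs) and all function names are ours.

```python
from itertools import product

# Soft constraints for the weighted semiring <N ∪ {inf}, min, +, inf, 0>,
# encoded as dicts from value tuples (over the support) to costs.
DOMAIN = ['a', 'b']

def combine(c1, supp1, c2, supp2):
    """c1 (x) c2: support is the union of the supports; the cost of a
    tuple is the semiring 'times' (arithmetic +) of its sub-tuples."""
    supp = sorted(set(supp1) | set(supp2))
    out = {}
    for tup in product(DOMAIN, repeat=len(supp)):
        eta = dict(zip(supp, tup))
        sub1 = tuple(eta[v] for v in supp1)
        sub2 = tuple(eta[v] for v in supp2)
        out[tup] = c1[sub1] + c2[sub2]
    return out, supp

def project(c, supp, keep):
    """c projected onto `keep`: semiring sum (min) over all extensions
    of each tuple to the eliminated variables."""
    out = {}
    for tup, cost in c.items():
        key = tuple(v for var, v in zip(supp, tup) if var in keep)
        out[key] = min(out.get(key, float('inf')), cost)
    return out

cx  = {('a',): 1, ('b',): 3}                      # unary constraint on x
cxy = {('a', 'a'): 0, ('a', 'b'): 2,
       ('b', 'a'): 2, ('b', 'b'): 0}              # binary on x, y
c, supp = combine(cx, ['x'], cxy, ['x', 'y'])     # supp == ['x', 'y']
assert c[('a', 'b')] == 1 + 2                     # costs multiply (add up)
best_y = project(c, supp, ['y'])                  # eliminate x
assert best_y[('a',)] == min(1 + 0, 3 + 2)        # best extension: cost 1
```

Projecting onto the empty set of variables would yield the best cost over all assignments, which is exactly the best level of consistency used below.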
We also define a function ā [Bistarelli (2004), Bistarelli et al. (2006)] as the function that returns the semiring value a for all assignments η, that is, āη = a. We will usually write ā simply as a. Examples of constants that will be useful later are 0̄ and 1̄, which represent respectively the constraints associating 0 and 1 to all the assignments of domain values.
Solutions.
A SCSP [Bistarelli (2004)] is defined as P = ⟨C, con⟩, where C is the set of constraints defined over the variables in con (each with domain D), and whose preference is determined by the semiring S. The best level of consistency notion is defined as blevel(P) = Sol(P) ⇓_∅, where Sol(P) = ⊗C [Bistarelli (2004)]. A problem P is α-consistent if blevel(P) = α [Bistarelli (2004)]. P is instead simply "consistent" iff there exists α > 0 such that P is α-consistent. P is inconsistent if it is not consistent.
Example 1
Figure 1 shows a weighted SCSP as a graph: the weighted semiring is used, where the multiplicative operation is the arithmetic plus. Variables and constraints are represented respectively by nodes and arcs (unary and binary). The solution of the SCSP in Figure 1 associates a semiring element with every assignment of domain values to the variables by combining all the constraints together via ⊗. For instance, for a given tuple of domain values, we compute the arithmetic sum of the values assigned by each constraint to the corresponding sub-tuples: the result is the semiring value for that tuple. A tuple whose value equals the blevel of the problem is a best solution.
2.2 Concurrent Constraint Programming over Soft Constraints
The basic idea underlying ccp [Saraswat (1989)] is that computation progresses via monotonic accumulation of information in a global constraint store. Information is produced by the concurrent and asynchronous activity of several agents which can add (tell) a constraint to the store. Dually, agents can also check (ask) whether a constraint is entailed by the store, thus allowing synchronization among different agents. The ccp languages are defined parametrically w.r.t. a given constraint system. The notion of constraint system has been formalized in [Saraswat and Rinard (1990)] following Scott's treatment of information systems. Soft constraints over a semiring S and an ordered set of variables V (over a domain D) have been shown to form a constraint system "à la Saraswat", thus leading to the definition of Soft Concurrent Constraint Programming (sccp) [Bistarelli et al. (1997), Bistarelli (2004), Bistarelli et al. (2006)].
Consider the set C and the partial order ⊑. Then an entailment relation ⊢ ⊆ ℘(C) × C is defined s.t. for each C ∈ ℘(C) and c ∈ C, we have C ⊢ c iff ⊗C ⊑ c (see also [Bistarelli (2004), Bistarelli et al. (2006)]). Note that in this setting the notion of token (constraint) and of set of tokens (set of constraints) closed under entailment are used indifferently. In fact, given a set of constraint functions C, its closure w.r.t. entailment is the set that contains all the constraints greater than ⊗C. This set is univocally representable by the constraint function ⊗C. The definition of the entailment operator ⊢ on top of C, and of the ⊑ relation, leads to the notion of soft constraint system. It is also important to notice that in [Saraswat (1989)] it is claimed that a constraint system is a complete algebraic lattice. In the sccp framework, algebraicity is instead not required [Bistarelli et al. (2006)], since the algebraic nature of the structure strictly depends on the properties of the semiring (notice that we do not aim at computing the closure of the entailment relation, but only at using the entailment relation to establish whether a constraint is entailed by the current store, and this can be established even if the lattice is not algebraic, that is, even if the times operator is not idempotent).
To treat the hiding operator of the language, a general notion of existential quantifier is introduced by using notions similar to those used in cylindric algebras. Consider a set of variables V with domain D and the corresponding soft constraint system. For each x ∈ V, the hiding function [Bistarelli (2004), Bistarelli et al. (2006)] is the function (∃x c)η = Σ_{d ∈ D} cη[x := d]. To make the hiding operator computationally tractable, it is required that the number of domain elements in D having semiring values different from 0 is finite [Bistarelli et al. (2006)]. In this way, to compute the sum needed for ∃x c, we can consider just a finite number of elements (those different from 0), since 0 is the unit element of the sum. Note that by using the hiding function we can represent the ⇓ operator defined in Section 2.1. In fact, for any constraint c and any variable v, c ⇓_(V−{v}) = ∃v c [Bistarelli et al. (2006)].
To model parameter passing, diagonal elements also have to be defined. Consider a set of variables V and the corresponding soft constraint system. Then, for each x, y ∈ V, a diagonal constraint d_xy is defined s.t. d_xy η = 1 if η(x) = η(y), and d_xy η = 0 if η(x) ≠ η(y) [Bistarelli (2004), Bistarelli et al. (2006)].
Definition (cylindric constraint system [Bistarelli et al. (2006)]). Consider a semiring S, a domain of the variables D, an ordered set of variables V, and the corresponding structure C. Then ⟨C, ⊗, 0̄, 1̄, ∃x, d_xy⟩ is a cylindric constraint system.
2.3 Timed Concurrent Constraint Programming
A timed extension of ccp, called tccp, has been introduced in [de Boer et al. (2000)]. Similarly to other existing timed extensions of ccp defined in [Saraswat et al. (1996)], tccp is a language for reactive programming designed around the hypothesis of bounded asynchrony (as introduced in [Saraswat et al. (1996)]: computation takes a bounded period of time rather than being instantaneous).
When querying the store for some information that is not present (yet), a ccp agent will simply suspend until the required information has arrived. In timed applications, however, one often cannot wait indefinitely for an event. Consider, for example, the case of a connection to a web service providing some online banking facility. In case the connection cannot be established, after a reasonable amount of time an appropriate timeout message has to be communicated to the user. A timed language should then allow us to specify that, in case a given time bound is exceeded (i.e. a timeout occurs), the wait is interrupted and an alternative action is taken. Moreover, in some cases it is also necessary to have a preemption mechanism which allows one to abort an active process and to start another process when a specific (abnormal) event occurs.
In order to be able to specify these timing constraints, tccp introduces a discrete global clock and assumes that ask and tell actions take one time-unit. Computation evolves in steps of one time-unit, so-called clock cycles. Action prefixing is the syntactic marker which distinguishes a time instant from the next one, and it is assumed that parallel processes are executed on different processors, which implies that, at each moment, every enabled agent of the system is activated. This assumption gives rise to what is called maximal parallelism. The time in between two successive moments of the global clock intuitively corresponds to the response time of the underlying constraint system. Thus all parallel agents are synchronized by the response time of the underlying constraint system. Since the store is monotonically increasing and one can have dynamic process creation, clearly the previous assumptions imply that the constraint solver takes a constant time (no matter how big the store is), and that there is an unbounded number of processors. However, one can impose suitable restrictions on programs, thus ensuring that the (significant part of the) store and the number of processes do not exceed a fixed bound; these restrictions would still allow significant forms of recursion with parameters.
Furthermore, a timing construct of the form now c then A else B is introduced in tccp, whose semantics is the following: if the constraint c is entailed by the store at the current time t, then the above agent behaves as A at time t, otherwise it behaves as B at time t. This basic construct allows one to derive such timing mechanisms as timeout and preemption [de Boer et al. (2000), Saraswat et al. (1996)]. The instantaneous reaction is obtained by evaluating now c in parallel with A and B, within the same time-unit. At the end of this time-unit, the store will be updated by using either the constraint produced by A, or the one produced by B, depending on the result of the evaluation of now c. Clearly, since A and B could contain nested now agents, a limit for the number of these nested agents should be fixed. Note that, for recursive programs, such a limit is ensured by the presence of the procedure call, since we assume that the evaluation of such calls takes one time-unit.
3 Timed Soft Concurrent Constraint Programming
In this section we present the tsccp language, which originates from both tccp and sccp. To this aim, we extend the syntax of the cc language with the timing construct now c then A else B (inherited from tccp), and also in order to directly handle the cut level as in sccp. This means that the syntax and semantics of the tell, ask and now agents have to be enriched with a threshold that is used to check when the agents may succeed, or suspend.
Definition 1 (tsccp Language)
Given a soft constraint system ⟨S, D, V⟩, the corresponding structure C, any semiring value a ∈ A, soft constraints φ, c ∈ C and any tuple of variables x, the syntax of the tsccp language is given by the following grammar:
where, as usual, P is the class of processes, F is the class of sequences of procedure declarations (or clauses), and A is the class of agents. In a tsccp process P = F.A, A is the initial agent, to be executed in the context of the set of declarations F. The agent success represents a successful termination, so it may not make any further transition.
In the following, given an agent A, we denote by fv(A) the set of the free variables of A (namely, the variables which do not appear in the scope of the ∃ quantifier). Besides the use of soft constraints (see Section 2.2) instead of crisp ones, there are two fundamental differences between tsccp and ccp. The first main difference w.r.t. the original cc syntax is the presence of a semiring element a and of a constraint φ to be checked whenever an ask or tell operation is performed. More precisely, the level a (respectively, φ) will be used as a cut level to prune computations that are not good enough. The second main difference with respect to ccp (but, this time, also with respect to sccp) is instead the presence of the now c then A else B construct introduced in Section 2.3. Also for this construct, the level a (or φ) is used as a cut level to prune computations.
Action prefixing is denoted by →, nondeterminism is introduced via the guarded choice construct Σ, parallel composition is denoted by ∥, and a notion of locality is introduced by the agent ∃x A, which behaves like A with x considered local to A, thus hiding the information on x provided by the external environment.
In the next subsection we formally describe the operational semantics of tsccp. In order to simplify the notation, in the following we will usually write a tsccp process simply as the corresponding agent .
4 An Operational Semantics for tsccp Agents
The operational model of tscc agents can be formally described by a transition system T = (Conf, →), where we assume that each transition step takes exactly one time-unit. Configurations in Conf are pairs consisting of a process and a constraint in C, representing the common store shared by all the agents. The transition relation → is the least relation satisfying the rules R1-R17 in Figure 2, and it characterizes the (temporal) evolution of the system. So, ⟨A, σ⟩ → ⟨B, δ⟩ means that, if at time t we have the process A and the store σ, then at time t + 1 we have the process B and the store δ.
Figure 2: The transition system for tsccp (rules R1 Vtell, R2 Tell, R3 Vask, R4 Ask, R5 Parall1, R6 Parall2, R7 Nondet, R8-R11 Vnow1-4, R12-R15 Now1-4, R16 Hide, R17 Pcall).
Let us now briefly discuss the rules in Figure 2.
Valued-tell.

The valued-tell rule checks the consistency of the Soft Constraint Satisfaction Problem [Bistarelli (2004)] (SCSP) defined by the store combined with the told constraint. A SCSP is α-consistent if blevel(P) = α, i.e., the best level of consistency of the problem is a semiring value representing the least upper bound among the values yielded by the solutions. Rule R1 can be applied only if the store σ ⊗ c is consistent enough w.r.t. the cut level a (notice that we use ⊀ instead of < because we possibly deal with partial orders; the same holds also for ⋠ instead of ≤). In this case the agent evolves to the new agent A over the store σ ⊗ c. Note that different choices of the cut level a could possibly lead to different computations. Finally, note that the updated store will be visible only starting from the next time instant, since each transition step takes exactly one time-unit.
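To make the cut-level check concrete, here is a minimal executable sketch of one valued-tell step for the weighted semiring, where "consistent enough" means the best cost does not exceed the cut level. The store representation (a dict from assignments to costs) and all names are ours, not part of the language definition.

```python
# One clock cycle of a valued tell(c) with cut level phi, sketched for
# the weighted semiring: combination is pointwise arithmetic +, and the
# best level of consistency is the minimum cost over all assignments.
def blevel(store):
    """Best level of consistency: cost of the best solution."""
    return min(store.values())

def step_tell(store, c, phi):
    """Return the updated store, or None if the agent cannot step
    (the combined store is not consistent enough w.r.t. phi)."""
    new = {t: store[t] + c[t] for t in store}   # sigma (x) c, pointwise
    if blevel(new) <= phi:                      # cut-level check
        return new                              # visible from next instant
    return None                                 # suspend

sigma = {('a',): 2, ('b',): 5}                  # current store over x
c     = {('a',): 1, ('b',): 0}
assert step_tell(sigma, c, phi=4) == {('a',): 3, ('b',): 5}
assert step_tell(sigma, c, phi=2) is None       # best cost 3 exceeds cut 2
```

Choosing a looser cut level (here a larger allowed cost) lets more computations proceed, which is exactly how different cut levels lead to different computations.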
Tell.

The tell action performs a finer check of the store. In this case (see rule R2), a pointwise comparison between the combined store σ ⊗ c and the constraint φ is performed. The idea is to perform an overall check of the store, and to continue the computation only if there is the possibility to compute a solution not worse than φ. Note that this notion of tell could also be applied to the classical cc framework: the tell operation would succeed when the set of tuples satisfying the combined store is not a superset of the set of tuples allowed by φ (notice that the ⊗ operator in the crisp case reduces to set intersection). As for the valued tell, the updated store will be visible only from the next time instant. In the following, we will use a shorthand notation for the tell and ask agents, omitting the threshold when it is clear from the context.
Valued-ask.

The semantics of the valued-ask is extended in a way similar to what we have done for the valued-tell action. This means that, to apply rule R3, we need to check whether the store σ entails the constraint c, and also whether σ is "consistent enough" w.r.t. the threshold a set by the programmer.
Ask.

In rule R4, we check whether the store σ entails the constraint c but, similarly to rule R2, we also compare a finer (pointwise) threshold φ to the store. As for the tell action, the corresponding shorthand notation will be used.
Parallelism.

Rules R5 and R6 model the parallel composition operator in terms of maximal parallelism: the agent A ∥ B executes in one time-unit all the initial enabled actions of A and B. Considering rule R5 (where maximal parallelism is accomplished in practice), notice that the ordering of the operands in the combination performed by the two component agents is not relevant, since ⊗ is commutative and associative. Therefore, by the same two properties, the resulting store is independent of the order in which the parallel agents add their constraints.
Nondeterminism.

According to rule R7, the guarded choice operator gives rise to global nondeterminism: the external environment can affect the choice, since a branch is enabled at time t (and started at time t + 1) if and only if the store entails the corresponding ask guard (and the store is compatible with the threshold too), and the store can be modified by other agents.
Valued-now and Now.

Rules R8-R11 show that the agent now c then A else B behaves as A or B depending on whether c is or is not entailed by the store, provided that the current store is compatible with the threshold. Differently from the case of the ask, here the evaluation of the guard is instantaneous: if the current store σ is compatible with the threshold, A (B) can make a transition at time t and c is (is not) entailed by σ, then the agent now c then A else B can make the same transition at time t. Moreover, observe that in any case the control is passed either to A (if c is entailed by the current store and the store is compatible with the threshold) or to B (in case the store does not entail c and is compatible with the threshold). Analogously for the not-valued version (see rules R12-R15). Finally, a shorthand notation will also be used for the now agent.
Hiding variables.

The agent ∃x A behaves like A, with x considered local to A, as shown by rule R16. This is obtained by substituting the variable x with a variable y, which we assume to be new and not used by any other process. Standard renaming techniques can be used to ensure this; in rule R16, A[y/x] denotes the process obtained from A by replacing the variable x with the variable y.
Procedure calls.

Rule R17 treats the case of a procedure call when the actual parameter equals the formal parameter. We do not need more rules since, for the sake of simplicity, here and in the following we assume that the set F of procedure declarations is closed w.r.t. parameter names: that is, for every procedure call appearing in a process F.A, we assume that F also contains the declaration obtained by renaming the formal parameter into the actual one (here the original formal parameter is identified as a local alias of the actual parameter; alternatively, we could have introduced a new rule treating this case explicitly, as it was in the original ccp papers). Moreover, we assume that if p(x) :: A is a declaration in F, then the free variables of A are contained in {x}.
Using the transition system described by (the rules in) Figure 2, we can now define our notion of observables, which considers the results of the successful terminating computations that the agent A can perform, for each tsccp process P = F.A.
Here and in the following, given a transition relation →, we denote by →* its reflexive and transitive closure.
Definition 2 (Observables)
Let P = F.A be a tsccp process. We define

O(P) = { δ | ⟨A, 1̄⟩ →* ⟨Success, δ⟩ },

where Success is any agent which contains only occurrences of the agent success and of the operator ∥.
5 Programming Idioms and Examples
We can use the primitives in Definition 1 to derive the soft version of the programming idioms in [de Boer et al. (2000)], which are typical of reactive programming.

The delay constructs are used to delay the execution of an agent A after the execution of an ask or a tell; n is the number of time-units of delay. Therefore, in addition to a constraint, in tsccp the transition arrow can also carry a number of delay slots. This idiom can be defined by induction: the base case performs no delay, and the inductive step adds one further time-unit of delay by performing a step on the trivially true constraint 1̄. The valued version can be defined in an analogous way.
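The inductive definition can be sketched as a function unfolding the delay into nested prefixes. The tuple encoding of agents and the spelling of the true constraint as '1' are ours.

```python
# Sketch of the inductive definition of the delay idiom: zero delay is
# the agent itself, and each extra time-unit prefixes one step on the
# trivially true constraint (written '1' here; the encoding is ours).
def delay(n, agent):
    """delay(0, A) = A;  delay(n, A) = one step on 1 -> delay(n-1, A)."""
    if n == 0:
        return agent
    return ('tell', '1', delay(n - 1, agent))

a = delay(2, ('tell', 'c'))
assert a == ('tell', '1', ('tell', '1', ('tell', 'c')))
```

Each wrapper consumes exactly one clock cycle without adding information to the store, so the inner agent starts n time-units later.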

The timed guarded choice agent waits at most m time-units for the satisfaction of one of the guards; notice that all the ask actions have a soft transition arrow, i.e. each is either of the valued or of the thresholded form, as in Figure 2. Before this timeout, the process behaves just like the guarded choice: as soon as there exist enabled guards, one of them (and the corresponding branch) is nondeterministically selected. After waiting for m time-units, if no guard is enabled, the timed choice agent behaves as B. Timeout constructs can be assembled through the composition of several now primitives (or their valued version), as explained in [de Boer et al. (2000)] for the (crisp) tccp language.
The timeout can be defined inductively on the number m of time-units. In the base case (m = 0), the timed choice agent is defined as a nested now agent which checks each guard: because of the operational semantics explained in rules R8-R11 (see Figure 2), if a guard is true, then the corresponding branch is evaluated in the same time slot; otherwise, if no guard is true, the agent B is evaluated in the next time slot. Then, by inductively reasoning on the number of time-units m, the timed choice with timeout m is defined in terms of the timed choice with timeout m − 1.
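The behavior of the timed guarded choice can be illustrated with a toy run, in which the store is simplified to a set of tokens and entailment to membership (both simplifications are ours, as is the store_at interface modelling the environment).

```python
# Toy run of the timed guarded choice: wait at most m clock cycles for a
# guard entailed by the store, else fall back to agent B.
def timed_choice(guards, m, fallback, store_at):
    """guards: list of (guard, branch); store_at(t) gives the store at
    time t (supplied by the environment). Returns (time, chosen)."""
    for t in range(m + 1):
        store = store_at(t)
        for g, branch in guards:
            if g in store:          # guard entailed: take its branch
                return t, branch
        if t == m:                  # timeout expired: fall back to B
            return m, fallback

# The guard 'c' only appears in the store at time 3.
trace = lambda t: {'c'} if t >= 3 else set()
assert timed_choice([('c', 'A1')], m=2, fallback='B', store_at=trace) == (2, 'B')
assert timed_choice([('c', 'A1')], m=5, fallback='B', store_at=trace) == (3, 'A1')
```

With a timeout of 2 the guard arrives too late and B is chosen; with a timeout of 5 the same environment lets the branch fire at time 3, matching the informal description above.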

Watchdogs are used to interrupt the activity of a process upon a signal from a specific event. The watchdog idiom behaves as A, as long as c is not entailed by the store and the current store is compatible with the threshold; when c is entailed and the current store is compatible with the threshold, the process A is immediately aborted.
The reaction is instantaneous, in the sense that A is aborted at the same time instant of the detection of the entailment of c. However, according to the computational model, if c is detected at time t, then c has to be produced at a time t′ with t′ < t. Thus, we have a form of weak preemption.
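Weak preemption can be illustrated with a toy model: the watched agent adds one token per clock cycle, and is aborted at the cycle in which the signal is detected. The encoding and all names are ours.

```python
# Weak preemption sketch: run A (one store token per clock cycle) while
# watching for a signal; A is aborted at the detection instant, so work
# from earlier cycles is already in the store and stays there.
def run_watchdog(steps, signal_at, horizon):
    """steps: tokens A would add at times 0, 1, ...; signal_at: time the
    watched constraint becomes entailed. Returns the final store."""
    store = set()
    for t in range(horizon):
        if t >= signal_at:
            break                   # signal detected: abort A now
        if t < len(steps):
            store |= {steps[t]}     # A adds one token this cycle
    return store

# A would tell 'a' then 'b'; the signal is detected at time 1, so only
# the token told at time 0 reaches the store.
assert run_watchdog(['a', 'b'], signal_at=1, horizon=5) == {'a'}
```

This is "weak" in exactly the sense above: the abort cannot retract the token produced before the detection instant, it can only prevent later ones.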
As well as timeouts, also watchdog agents can be defined in terms of the other basic constructs of the language (see Figure 3).
In the following we assume that there exists an (injective) renaming function which, given a procedure name p, returns a new name p′ that is not used elsewhere in the program. Moreover, we use a shorthand for the now agent without the else branch, and we assume that, for any procedure declared as p(x) :: A, a declaration for the renamed procedure p′(x) is added, whose body is obtained from A by replacing in it each occurrence of any procedure q by its renaming q′. The assumption in the case of the hiding agent is needed for correctness. In practical cases, it can be satisfied by suitably renaming the variables associated with signals. In the following, the ask arrow is either of the valued or of the thresholded form. Analogously for the tell.
Figure 3: Examples of watchdog constructs.

The translation in Figure 3 can be easily extended to the case of the watchdog agent which also activates a process B when A is aborted (i.e., when c is entailed and the current state is compatible with the threshold). In the following we will then use also this form of watchdog.
The assumption on the instantaneous evaluation of the now construct is essential in order to obtain a preemption mechanism which can be expressed in terms of this primitive. In fact, if the evaluation of now took one time-unit, then this unit delay would change the compositional behavior of the agent controlled by the watchdog. Consider, for example, an agent A which takes two time-units to complete its computation. The agent resulting from the translation of the watchdog on A compositionally behaves as A, unless a signal c is detected and the current state is compatible with the threshold, in which case the evaluation of A is interrupted. On the other hand, if the evaluation of now took one time-unit, then the translated agent would take four time-units and would not behave anymore as A when c is not present: in this case, the translated agent, put in parallel with a suitable context, would produce results that A in the same context would not.
The valued version of watchdogs can be defined in an analogous way.
With this small set of idioms, we now have enough expressiveness to describe complex interactions. For the following examples on the new programming idioms, we consider the weighted semiring [Bistarelli (2004), Bistarelli et al. (1997)] and the (weighted) soft constraints in Figure 4. We first provide simple program examples in order to explain in as much detail as possible how a computation of tsccp agents proceeds. In Section 5.1 we show a more complex example describing the classical actions during a negotiation process; the aim of that example is instead to show the expressiveness of the tsccp language, without analyzing its execution in detail.
Example 2 (Delay)
As a first very simple example, suppose we have two agents whose concurrent evaluation starts in the empty store.
The timeline for this parallel execution is described in Figure 5. For the evaluation of the two tell actions we consider the valued rules in Figure 2, since both transitions are valued. However, both these actions are delayed: three time-units for the first agent (including the first step), and two time-units for the second (including the first step). As explained before, this can be obtained by adding the true constraint to the store with a tell action respectively three, and two times. Therefore, the parallel agent corresponds to:
This agent is interpreted by using the parallel composition rules in Figure 2 in terms of maximal parallelism, i.e., all the enabled actions are executed in parallel. The first two tell actions of the two agents can be simultaneously executed: the precondition of the rule is then satisfied, and the store does not change, since telling the true constraint adds no information. At this point, the next action of the first agent is not yet enabled, because its precondition is not satisfied. Therefore, the processor can only be allocated to the second agent and, since its precondition is satisfied, the computation reaches the state:
Now the remaining tell can be executed, since its precondition holds: therefore, the store becomes:
At this point (see Figure 5) we can successfully terminate the program: in the resulting store the ask is finally enabled, according to the two preconditions of the valued-ask rule: therefore we have
Example 3 (Timeout)
In this second example we evaluate a timeout construct. Suppose we have two agents of the form:
and
The description of the first agent is a shortcut for the following agent, as previously explained in the definition of the timeout:
where . Their concurrent evaluation in the empty store is:
The timeline for this parallel execution is given in Figure 6. At the beginning the store is empty, thus both constraints asked by the nondeterministic choice agent are not entailed. The tell that would entail both of them is delayed by three time-units: in the first three time-units, the delay is executed as shown in Example 2. Then the timeout is triggered, since, according to the rules in Figure 2, the time elapsing in the timeout construct can be executed together with the delay actions. After the timeout triggering, the choice agent is however blocked, since its guard is not entailed by the current empty store, and the precondition of the ask rule is not satisfied. The other agent can execute the last delay, and then perform the tell operation, which updates the store. This finally unblocks the first agent, since the precondition of the ask rule is now satisfied. Finally, we have