Towards Deriving Verification Properties

Michael Winikoff et al. ∙ 03/11/2019

Formal software verification uses mathematical techniques to establish that software has certain properties, for example that the behaviour of a software system satisfies certain logically-specified properties. Formal methods have a long history, but a recurring assumption is that the properties to be verified are known, or provided as part of the requirements elicitation process. This working note considers the question: where do the verification properties come from? It proposes a process for systematically identifying verification properties.


1 Introduction

“There are two key problems that occur in all such approaches: (1) what properties do we verify; and (2) where do the probabilities come from. The first of these remains a problem with all formal analysis techniques and, clearly, significant work must be done in capturing the requirements of the system in a formal and logical way.” [5] (emphasis added)

Formal software verification uses mathematical techniques to establish that software has certain properties, for example that the behaviour of a software system satisfies certain logically-specified properties. A common approach is model checking [2], which takes a model of a system (S) and a property (φ, typically specified in some temporal logic), and can return either a guarantee that the behaviour of S satisfies φ, or a counter-example: a possible behaviour of S that violates φ.

Formal methods have a long history [4, 3, 1] but, as suggested by the opening quote, a recurring assumption is that the property φ is known, or provided as part of the requirements elicitation process.

This working note considers the question: where does φ come from?

Our answer to this question is a pragmatic one: we define a process that can be used to systematically identify a set of verification properties.

The high-level idea is that we start with a well-justified (and universal?) collection of generic high-level properties (“tenets”) such as “do no harm”. We then systematically (but informally) derive contextualised more specific properties, using domain knowledge and elements of the system’s design. These more specific properties capture the ways in which the specific system can violate the desired high-level properties.

For example, imagine a robot that assists an elderly person living in their home. One high-level property (“tenet”) might be that the person should be kept safe (i.e. safe from harm, and healthy). This high-level property clearly cannot be specified formally. However, what we can do is carefully (and systematically) consider all the ways in which the person could come to harm, given the context of the system and its functionality, along with domain knowledge. This might allow us to (informally) derive specific properties, such as that the person is accompanied if they leave the house, and that they are reminded to take medication.

We now define the high-level process in a little more detail. We assume (see Figure 1) that a well-defined design process (A) is followed, which results in some design models (B), and that these are then refined and implemented (C), yielding code in some appropriate programming language, such as a BDI (Belief-Desire-Intention) agent-oriented programming language (D).

The process that we follow then takes as input high-level generic properties (“tenets”) (E), as well as domain knowledge (F), and information about the design (B) and possibly even the implementation (D), and uses the domain and design information to contextualise and refine (G) the high-level properties into more specific properties (H). These specific properties and the software can be model checked. Note that this process proceeds from informal to formal.

Figure 1: High-level process

For the remainder of this document we make the following assumptions about the specific forms of these different artefacts.

For the design model (B) we remain as agnostic as possible, and only assume some form of goal model where goals are related to their sub-goals. This sort of model is common to many AOSE methodologies [11].

For the implementation (D) we actually do not need to make any assumptions, as long as (B) has a goal model. If it does not, but the implementation uses a notion of goals, then we can extract a goal-tree from the implementation (shown as a dashed line in Figure 1).

For the tenets (E), we assume simply English text. We do not believe that it is possible to effectively formalise high-level notions such as “do no harm”; in fact, the whole point of the process we propose is to allow such properties to be captured formally by making them more specific in the context of the system at hand.

Domain knowledge (F) could be represented in a number of ways. We assume (following [10]) that it is represented as a collection of implications. However, we could also explore using a graphical model instead of logical formulae. Note that domain knowledge is also refined and extended as part of the process (G), indicated by the bi-directional arrow between (G) and (F).

Properties to be verified (H) are logical formulae. A wide range of logics could be used, but for the present paper we simply assume Linear Temporal Logic (LTL). Other richer logics could be used, but there is the standard trade-off that the richer the logic, the harder verification becomes.
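To give a flavour of the target form, a property of the kind we aim to end up with in the home-care setting (purely illustrative at this point, not one of the derived properties) might be □(emergency → ◇alerted): whenever an emergency arises, an alert is eventually raised.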

The remainder of this document provides a sketch for a possible process, illustrated using a running example. This process is an early sketch, and is intended as a starting point for discussion and refinement.

Before proceeding further, it is worth briefly mentioning a key paper that informs this work.

The process of deriving ways in which a tenet could be compromised has similarities with a 2000 paper by van Lamsweerde and Letier (henceforth “vLL”) [10]. Very briefly, they begin with leaf goals (i.e. requirements, which are formalised in logic), and derive obstacles to these requirements, using domain knowledge.

Although the starting point is different (a formally specified system goal in their approach vs. an informal generic tenet in ours), the general idea of using domain knowledge to derive more detailed obstacles from the starting point is similar. Note that sometimes the derivation of obstacles in their method uses not domain knowledge but a general pattern: a goal can go wrong by not being done at all, or by being done wrongly, where “done wrongly” can refer to any aspect of the goal. So, for example, if the goal is to send a message, then it could go wrong by the message not being sent, by it being sent to the wrong recipient, or by it being sent with incorrect contents.

One difference is that whereas they start with a specific goal, we start with a high-level tenet. This is because the tenets are what we care about and want to ensure, whereas not all goals are important. For example, in a home-care robot scenario, checking that the fridge is not left open is not that important, whereas ensuring that the person being cared for is reminded to take their medication is very important.

2 Artefacts

This section briefly describes the artefacts that play a role in the process. As depicted in Figure 1, the process (G) involves tenets (E), a design model, specifically a goal tree (B), and domain knowledge (F). It results in formally specified properties to be verified (H).

We have already noted that the tenets (E) are simply high-level and generic statements in English, for example “do not harm humans”, or “ensure the system is able to continue to function”.

We have also already noted that the formulae to be verified (H) are specified in Linear Temporal Logic (LTL), but that other logics are also possible.

We now turn our attention to the goal tree (B) and domain knowledge (F). We also define an intermediate data structure: the refinement tree (see Section 2.3). Figure 2 shows the process, along with the key artefacts.

Figure 2: Zooming in on step G

2.1 Goal Tree

The design model that we use is a goal tree (see example in Figure 3). This is a tree (more precisely, a directed acyclic graph, since a sub-goal can be reused, i.e. it can be the child of multiple parents) where nodes are (sub-)goals, and arrows link goals to their sub-goals. The relationship of a goal to its children is indicated as being either “AND” or “OR” (in the example in Figure 3 all the relationships are “AND”). Such trees can be constructed and refined by asking “Why?” to identify parents of goals, and asking “How?” to identify children of goals [9, 6]. Such trees are commonly used in methodologies for engineering autonomous systems [11].
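To make the assumed artefact concrete, the following is a minimal sketch (in Python; the class and field names are ours and purely illustrative, not taken from the paper) of a goal tree with AND/OR decomposition, populated with a fragment of the example goal tree described below:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    """A (sub-)goal in the goal tree; `decomposition` says how the children
    relate to the parent: 'AND' (all needed) or 'OR' (alternatives)."""
    name: str
    decomposition: str = "AND"
    children: List["Goal"] = field(default_factory=list)

# Fragment of the example goal tree: "keep safe" is AND-refined into
# "monitor" and "accompany excursion", and "monitor" is refined further.
keep_safe = Goal("keep safe", "AND", [
    Goal("monitor", "AND", [
        Goal("monitor behaviour"),
        Goal("monitor critical incident"),
    ]),
    Goal("accompany excursion"),
])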

The tree in Figure 3 relates to a Care-O-bot scenario [7]. This scenario involves a robot in the home of an elderly person. The robot reminds the person to eat, drink, and take their medication, is able to monitor them and alert medical authorities if required, and also performs a range of other support tasks (e.g. checking if the fridge door is left open, watching TV together, following orders to fetch items, answering the door).

Figure 3: Example Goal Tree

2.2 Domain Knowledge

The third artefact is domain knowledge which we model as implications in a suitable logic (for example, LTL, following vLL).

One example domain knowledge rule captures that issuing reminders (e.g. to take medication) tends to lead to the task in question being done. This is actually difficult to formalise precisely, since the reminders do not actually guarantee compliance. For simplicity we formalise the overly-strong property that reminding will ensure compliance:

□(remind(task) → do(task))    (1)

Of course, this is too strong, and a more nuanced formalisation might involve a logic with probabilities, allowing one to capture that reminders reduce the probability of forgetting, while not eliminating it entirely.

Conversely, we may also know that reminders have value because the person being cared for is forgetful, and is more likely to remember to perform a task if they are reminded. Again, a correct formalisation would require probabilities, but for now we formalise using an overly strong property that reminders are necessary (i.e. without reminders, the person will forget):

□(¬remind(task) → ¬do(task))    (2)

(of course, we could simply have written the earlier property using ↔)

Note: For convenience, we adopt the convention (used by vLL) that P ⇒ Q is notational shorthand for □(P → Q), which allows us to write the first domain knowledge above as simply remind(task) ⇒ do(task).

Other examples of domain knowledge include defining high-level concepts, e.g. that “not getting enough food” means having fewer than three meals a day (and obviously there need to be conditions on what those meals are), and that “not drinking enough” means drinking less than around 1.2 litres per day. We use “≜” to define such equivalences, which logically is treated as a bidirectional implication.

not enough food ≜ ¬(3 meals a day) (3)
not enough drink ≜ ¬(1.2 L/day) (4)
accompany excursion ≜ follow ∨ delegate-by-informing (5)
keep healthy ≜ enough food ∧ enough drink ∧ correct medication (6)
1.2 L/day ⇒ do(drinkregularly) (7)
correct medication ≜ (issued = prescribed) (8)
3 meals a day ⇒ do(breakfast) ∧ do(lunch) ∧ do(dinner) (9)
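For instance, under the convention introduced above, definition (6), read as a bidirectional implication, unfolds (on this reading) to □(keep healthy ↔ (enough food ∧ enough drink ∧ correct medication)).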

Additionally, note that the goal tree can be seen as specifying implied domain knowledge: for instance, if a goal G has a child goal G′ then this indicates that achieving G′ is part of achieving G. However, in the detailed process presented below, we keep the goal-tree distinct, rather than mapping it to domain knowledge rules.

2.3 Refinement Tree

The artefacts discussed so far are the inputs to the process. The process uses, and potentially modifies, these to progressively generate a refinement tree. This is an intermediate artefact (hence shown dashed in Figure 2) that captures the process of refining the tenet. In practice, this tree is valuable for traceability. However, it is not required: we could equally well work with just a set of nodes (the frontier of the refinement tree).

The root of a refinement tree is a tenet, and, once the process is completed, its leaves are each an LTL property. Each node in the refinement tree is a description (formal or informal) of a set of behaviours. That is, the node specifies a subset of the possible behaviours of the system. The subset of behaviours is described in terms of a condition which, since it can be instantaneous or hold over a time period, is (eventually) formalised in LTL.

The relationship between a node and its children is implication: if node N′ is a child of node N, then any behaviour that satisfies N′ will also satisfy N. So, for example, given a node “not enough food”, with a child node “¬(3 meals a day)”, this implication holds: since having fewer than 3 meals a day is (according to domain knowledge Equation 3) the definition of not having enough food, any behaviour that meets the condition of not having 3 meals a day by definition also meets the condition of not having enough food. Similarly, consider a node “harm” with two child nodes: “not kept healthy” and “put at risk”. A behaviour that satisfies one of these, by either not keeping the person healthy or by putting them at risk, is also considered to meet the definition of causing them harm. (Note that in this example we are treating putting a person at risk of coming to harm, in a situation where the system cannot prevent the harm, as being equivalent to actually causing harm, even though the person might be lucky and not come to harm despite being put at risk. For instance, a young child crossing a busy road may not come to harm, but we consider preventing this situation part of keeping them safe.)
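Stated in the notation used for domain knowledge, the first example is simply the implication ¬(3 meals a day) ⇒ not enough food, which is one direction of definition (3).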

3 Process

The basic idea is that we start with a negated tenet. For example, if we want to “not harm a person”, then we begin with “harming a person” and then consider how this could occur in the current context (i.e. with respect to the domain knowledge and system at hand). This is done by asking questions such as:

  • “What goals contribute to/against this tenet?”

  • “How (in this context) could this tenet be violated?”

  • “What does this mean (in this context)?”

This process is iterated until all leaf nodes are formalised. The results (the leaf nodes) are negated, yielding a collection of properties to be checked. It is also possible to use obstacle derivation à la vLL to explore the assumptions underlying the achievement of these properties.

In essence, the process takes a negated tenet, and derives a collection of properties, such that each of the properties, if it holds, implies the negation of the tenet. Therefore, in order for the desired tenet to hold, the negation of each of the properties must hold. For example, given the tenet that we want to avoid harm to the human, we might derive a collection of (formally specified) properties that include that we remind the elderly person to eat their lunch, and that we ensure that the medication they take matches what is prescribed.

At a high level, the detailed process is as follows (where the numbers on the left mark different cases, discussed below):

Initialise the root node with the negation of a tenet
Repeat until all leaves of the refinement tree are formalised:
    Select a leaf node N
    0. If N can be formalised, then formalise it
    1. Else if N can be refined using domain knowledge, then do so (see Section 3.2)
    2. Else if N can be refined using the goal tree, then do so (see Section 3.3)
    3. Else:
        3a. If the goal tree can be refined by adding relevant goals, then do so (see Section 3.4)
        3b. Else elicit additional relevant domain knowledge (see Section 3.5)
EndRepeat
Return the negations of the leaf nodes

We now consider the different cases. The “base case” is when the node is specific enough that it can be directly formalised (case 0), in which case we simply formalise the node. The other cases are: refining using domain knowledge (1), using the goal tree (2), and expanding either the goal tree (3a) or the domain knowledge (3b). Note that where a node can be refined by applying either domain knowledge or the knowledge implied by the goal-tree, the process prioritises the domain knowledge, since it is more general (associated with the problem and the domain), whereas the goal-tree is more specific (associated with a particular solution).
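The loop above could be skeletonised as follows. This is a minimal sketch, not an implementation from the paper: the callables (formalise, refine_with_domain, and so on) stand in for the human judgement exercised in cases 0-3b, and all names are illustrative.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class RNode:
    """A node of the refinement tree: an informal condition, or (once
    formalised) an LTL formula describing a set of 'bad' behaviours."""
    text: str
    formal: bool = False
    children: List["RNode"] = field(default_factory=list)

def derive_properties(tenet: str,
                      formalise: Callable[[RNode], Optional[str]],
                      refine_with_domain: Callable[[RNode], List[RNode]],
                      refine_with_goals: Callable[[RNode], List[RNode]],
                      extend_goal_tree: Callable[[RNode], bool],
                      elicit_domain_knowledge: Callable[[RNode], None]) -> List[str]:
    """Drive the refinement loop; each callable signals 'not applicable'
    by returning None, an empty list, or False."""
    root = RNode(f"¬({tenet})")            # start from the negated tenet
    frontier = [root]
    while frontier:
        node = frontier.pop()
        formula = formalise(node)                      # case 0
        if formula is not None:
            node.text, node.formal = formula, True
            continue
        children = refine_with_domain(node)            # case 1
        if not children:
            children = refine_with_goals(node)         # case 2
        if not children and extend_goal_tree(node):    # case 3a
            children = refine_with_goals(node)
        if not children:
            elicit_domain_knowledge(node)              # case 3b
            children = refine_with_domain(node)
        # we assume the elicitation steps eventually enable some refinement
        node.children = children
        frontier.extend(children)
    return [f"¬({leaf.text})" for leaf in leaves(root)]

def leaves(node: RNode) -> List[RNode]:
    """The (formalised) leaves of the refinement tree."""
    if not node.children:
        return [node]
    return [l for child in node.children for l in leaves(child)]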

3.1 Formalisation (case 0)

If the node’s (informal) description is sufficiently specific that it can be directly formalised, then we refine the node, replacing it with a node containing an LTL formula.

Writing a formula may require some attention to the implementation: what information can be monitored, and in what (logical) form it is provided. For example, in issuing medicine, the implementation (i.e. the agent’s beliefs) may provide a predicate such as issued(m), indicating that medication m was issued, and this predicate would be used in formalising a node specifying that correct medication was issued. On the other hand, if the belief used was, say, issued(p, m, d, t), indicating that medication m was issued to patient p on day d at time t, then this predicate would need to be used instead.
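For instance (the predicate names here are hypothetical, chosen to match the discussion above rather than any particular implementation), the two situations might lead to formalisations along the lines of □(issued(m) → prescribed(m)) versus □(issued(p, m, d, t) → prescribed(p, m)).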

We assume that formalisation is possible in the following cases in our running example. Some of the formalisations below could be improved to be made more precise. For example, instead of simply requiring that an emergency eventually leads to an alert, we might require that, should an emergency occur, an alert is sent within a certain time window.

We also assume the obvious compositional property, that a compound formula (e.g. a conjunction, or negation) can be formalised exactly when its sub-formulae can be formalised.

The informal conditions assumed formalisable are:

  • remind(X) (where X ∈ {breakfast, lunch, dinner})
  • remind(drinkregularly)
  • issued = prescribed
  • follow ∨ delegate-by-informing
  • monitor critical incident
  • monitor behaviour
  • out of charge
  • obey

In these formalisations, we define do(X) (where X is a meal) as being true if, at the time of that meal, either the person is eating or the system is about to issue a reminder.

We also define do(drinkregularly) as being true if the system regularly reminds the person to drink. Specifically, do(drinkregularly) holds if, whenever it has been more than 2 hours since the last drink, either the system has issued a reminder within the last 15 minutes, or the system is about to issue a reminder. In other words, if the person has not had a drink within the past two hours, then reminders are issued every 15 minutes (until they drink).
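A sketch of these two definitions, assuming atomic propositions mealtime(X), eating, nodrink2h (“no drink in the past two hours”) and reminded15m (“a reminder was issued within the last 15 minutes”) that abstract away the timing details, might be:

do(X) ≜ □(mealtime(X) → (eating ∨ ◯remind(X)))
do(drinkregularly) ≜ □(nodrink2h → (reminded15m ∨ ◯remind(drinkregularly)))

where ◯ is the LTL next-step operator.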

3.2 Refining with respect to Domain Knowledge (case 1)

When refining with respect to domain knowledge, we follow vLL’s process: given a domain knowledge rule of the form A ⇒ C, and a node N in which C occurs in a positive context, we refine N to N[C\A], i.e. we replace C with A (the notation N[C\A] denotes the result obtained by taking N and replacing the sub-expression C with A). This gives us the desired relationship between a node and its child: if N[C\A] holds then N holds.

Similarly, if C occurs in a negative context (i.e. within the scope of an odd number of negations), then a rule of the form C ⇒ A can be applied to replace C with A, which gives the desired implication relationship: since we have C ⇒ A we also have ¬A ⇒ ¬C, and hence N[C\A] ⇒ N as desired.
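As an illustration of this refinement step (a minimal sketch, not the paper’s implementation: formulas are represented as nested tuples over string atoms, and only ¬, ∧ and ∨ are handled), polarity-aware replacement could look as follows:

# Formulas are nested tuples over atoms (strings):
#   ('not', f), ('and', f1, ..., fn), ('or', f1, ..., fn)

def substitute(formula, target, replacement, required, polarity=+1):
    """Replace occurrences of `target` in `formula` with `replacement`, but
    only at occurrences whose polarity (+1 positive, -1 negative; it flips
    under 'not') matches `required`."""
    if formula == target:
        return replacement if polarity == required else formula
    if isinstance(formula, tuple) and formula[0] == "not":
        return ("not", substitute(formula[1], target, replacement, required, -polarity))
    if isinstance(formula, tuple):
        return (formula[0],) + tuple(
            substitute(part, target, replacement, required, polarity)
            for part in formula[1:])
    return formula

def refine_with_rule(node, antecedent, consequent):
    """One refinement step with a rule `antecedent => consequent`: if the
    consequent occurs positively in the node it is replaced by the antecedent;
    if the antecedent occurs negatively it is replaced by the consequent.
    Either way the refined node implies the original node."""
    refined = substitute(node, consequent, antecedent, required=+1)
    if refined != node:
        return refined
    return substitute(node, antecedent, consequent, required=-1)

For example, with rule (6) read left-to-right, refine_with_rule(('not', 'keep healthy'), 'keep healthy', ('and', 'enough food', 'enough drink', 'correct medication')) returns ('not', ('and', 'enough food', 'enough drink', 'correct medication')), matching the example in the next paragraph.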

Note that in either case, if the new node is a disjunction N1 ∨ … ∨ Nk then we break it into multiple nodes N1, …, Nk. For example, we can use the domain knowledge that keeping someone healthy means (Equation 6) that they have enough food, enough drink, and correct medication. When we refine the node ¬keep healthy we replace it with the node ¬enough food ∨ ¬enough drink ∨ ¬correct medication, which is then broken into three nodes.

Note that if the domain knowledge is in the form of a definition C ≜ D, then, given a node containing C, we can replace C with D in any context. This rule could be applied repeatedly, giving a loop, by then replacing D with C. We avoid this by assuming that definitions are directional, defining something in terms of something else more specific, and that the human following the process only goes in the direction of increasing specificity. For example, it would make sense to refine “not enough food” to “fewer than 3 meals a day”, but not the reverse.

However, what we want to identify as a result of this whole process is not just some ways in which the underlying tenet (e.g. “no harm”) can be violated, but all ways in which it can be violated. Therefore, when refining, we consider not just one domain rule of the form A ⇒ C, but all such rules.

After refining we ask the question: “is this complete?”. Specifically, given a node N and its refinements N1, …, Nk, the refinements are complete if (following vLL) N ⇒ N1 ∨ … ∨ Nk. Of course, since nodes being refined have not yet been formalised, this cannot be checked formally, but it can be checked informally, using the question: “if all of these refinements fail to hold, does the original node also necessarily fail to hold?” (Note that if the domain knowledge rule used is a definition of the form C ≜ D then the refinement is by definition complete.)

For example, when refining “not kept healthy”, we refine it into the three sub-nodes: “not enough food”, “not enough drink”, and “no/wrong medication”. We then consider the question: “is enough food, enough drink, and correct medication sufficient to guarantee good health?”. In this case we might consider that the answer is “no”, because health also requires exercise, and attention to psychological well-being (e.g. companionship, social activities, and a sense of meaning). These additional nodes could be added (not shown in Figure 5) and further elaborated.

3.3 Refining with respect to Goals (case 2)

When refining with respect to goals we derive the process by considering the goal-tree as specifying “implied” domain knowledge. In essence, a goal-subgoal relationship in which G′ is a sub-goal of G is read as implying the domain knowledge that bringing about G′ implies G, i.e. G′ ⇒ G. In the case where G has multiple children, then when the decomposition is OR each child implies the parent, and when the decomposition is AND the conjunction of all the children implies the parent.

If the (parent) goal G appears in the node N in a positive context (i.e. it is not in the scope of a negation or, more precisely, it is in the scope of an even number of negations), then we simply use this implied domain knowledge of the form G′ ⇒ G to create a refined node N[G\G′]. There are two sub-cases: if G is OR-refined, then the implied domain knowledge consists of rules Gi ⇒ G for each child Gi of G in the goal tree. In this case, we collect the rule applications, giving us multiple children of N, each of the form N[G\Gi]. In the second case, where G is AND-refined in the goal-tree, we create a single child of N: N[G\(G1 ∧ … ∧ Gn)]. Note that if G has only a single child G1, then both sub-cases are equivalent: we add a refined node N[G\G1].

For example, given a node containing keep safe (in a positive context), we can refine it using the goal-tree. Recall (Figure 3) that the goal “keep safe” has two (AND-refined) sub-goals (“monitor” and “accompany”). Therefore, we have implied domain knowledge that monitor ∧ accompany ⇒ keep safe. This can be applied, in a similar way to other domain knowledge, to refine keep safe to monitor ∧ accompany.

However, what can we do if the goal in the node is actually negated, e.g. ¬keep safe? In this case we need to consider strengthening the goal-tree.

The domain knowledge that we need in order to refine the node into more specific sub-nodes is now of the form G ⇒ children(G) (where children(G) denotes an appropriate logical formula combining G’s children). However, this is not what the goal-tree implies: it implies children(G) ⇒ G, and hence ¬G ⇒ ¬children(G). In order to be able to use the goal-tree in this situation, we need to consider cases where the goal decomposition in the goal tree is essential, that is, where the decomposition of G into G1, …, Gn is such that children(G) not only implies G but is actually necessary, not just sufficient, so that we also have G ⇒ children(G). Then, if G appears in a negative context in the node, we use the implied knowledge of the form G ⇒ children(G).

There are two cases. If G is OR-refined, then we have G ⇒ G1 ∨ … ∨ Gn, hence ¬(G1 ∨ … ∨ Gn) ⇒ ¬G, and we replace G (in a negative context in N) with G1 ∨ … ∨ Gn. The second case is when G is AND-refined, in which case we have G ⇒ G1 ∧ … ∧ Gn, hence ¬(G1 ∧ … ∧ Gn) ⇒ ¬G, and we replace G (in a negative context in N) with G1 ∧ … ∧ Gn, which (given the negation) can be split into multiple sub-nodes ¬G1, …, ¬Gn.

For the example goal-plan tree we assume that this strengthened relationship holds for each of the following goals and their children: monitor, keep safe, and harm (also shown as bold labels in shaded nodes in Figure 4). So, for the example, we would be able to refine ¬keep safe into two nodes: ¬accompany and ¬monitor.

Finally, for the “G appears in a negative context in N” case, there is one special case to consider: instead of ¬G appearing in N, the node N is a condition that is implied by ¬G (i.e. ¬G ⇒ N), so G does not actually appear in N at all. In this case we apply the same logic as the other negative cases, but, because G does not actually appear in N (and N is not necessarily a negation), we cannot just replace G with the combination of its children. Instead, we replace N with ¬children(G), where children(G) is the appropriate logical combination of the children of G. For example, if G has a single child, G1, then the implied domain knowledge (assuming a strengthened relationship) is G ⇒ G1, and hence ¬G1 ⇒ ¬G. We then have that N is implied by (and hence can be replaced by) ¬G1, because ¬G1 ⇒ ¬G and ¬G ⇒ N.

These cases are summarised in the following table. Note that when refining, we want to make things more specific, so we always refine a goal in terms of its children, rather than its parent.

Goal Tree | Implied Domain Knowledge
G OR-refined into G1, …, Gn | Gi ⇒ G (where 1 ≤ i ≤ n); and, if the decomposition is essential, G ⇒ G1 ∨ … ∨ Gn
G AND-refined into G1, …, Gn | G1 ∧ … ∧ Gn ⇒ G; and, if the decomposition is essential, G ⇒ G1 ∧ … ∧ Gn
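Continuing the illustrative Python sketch (Goal from Section 2.1, refine_with_rule from Section 3.2; none of this is from the paper itself), the implied rules in the table could be generated as follows, with an essential flag marking the strengthened decompositions:

def implied_rules(goal, essential=False):
    """Generate the implied domain-knowledge rules (antecedent, consequent)
    for one goal and its children, following the table above; rules in this
    form can be passed to refine_with_rule (Section 3.2)."""
    kids = [child.name for child in goal.children]
    if not kids:
        return []
    op = "or" if goal.decomposition == "OR" else "and"
    combined = (op,) + tuple(kids) if len(kids) > 1 else kids[0]
    rules = []
    if goal.decomposition == "OR":
        rules += [(k, goal.name) for k in kids]    # each child implies the parent
    else:
        rules.append((combined, goal.name))        # children together imply the parent
    if essential:
        rules.append((goal.name, combined))        # strengthened: parent implies children
    return rules

# e.g. implied_rules(keep_safe, essential=True) yields
#   (('and', 'monitor', 'accompany excursion'), 'keep safe')  and
#   ('keep safe', ('and', 'monitor', 'accompany excursion'))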

3.4 Expanding the Goal Tree (case 3a)

There is one slight complication when refining with respect to goals. This is a situation where the goal-tree is “missing” a goal. That is, where instead of having ( has as a parent), has a different, more general, parent goal. For instance, in the goal-tree of Figure 3 the two goals “Keep Healthy” and “Keep Safe” have as their parent the (more general) goal “Support”, rather than a more specific goal “Keep from harm”. This means that if the refinement tree has a node relating to harming someone, then we cannot use the goal tree to refine the node.

In this situation, where we might want to use the child goals to refine nodes, we may need to conceptually consider additional “phantom” nodes in the goal-tree. This is done by adding intermediate nodes to the tree, as illustrated below.

The resulting goal-tree is shown in Figure 4 (on page 4). It adds an intermediate node “harm”.

support
    harm [strengthened]
        keep healthy
            remind to eat
            remind to drink
            remind medication
        keep safe [strengthened]
            monitor [strengthened]
                monitor behaviour
                monitor critical incident
            accompany excursion
    assist
        follow orders
        remind fridge
    recharge

Figure 4: Revised Example Goal Tree (bold shaded nodes indicate ones with strengthened relationship to their children)

3.5 Expanding the Domain Knowledge (case 3b)

Finally, if neither case applies, then we can proceed by expanding domain knowledge. This is done by considering the node and asking “how can this occur?”, or “what does this mean?” (specifically: “what counts as this node?”). The new domain knowledge can then be used to continue the process.

4 Results

For the running example, starting with the high-level tenet “do not harm”, the given goal tree, and the domain knowledge, following the process might yield the refinement tree shown in Figure 5 (since the process is a design process, followed by humans, it is not deterministic). Annotations in the tree of the form “dn” indicate that the node was refined using domain knowledge rule (n). An annotation of “g” indicates that the goal tree was used, and “f” indicates that the node was able to be formalised.

harm [g]
    keep healthy [d6]
        enough food [d3]
            3 meals a day [d9]
                do(breakfast) ∧ do(lunch) ∧ do(dinner) [d2]
                    do(breakfast) [d2]
                        remind(breakfast) [f]
                    do(lunch) [d2]
                        remind(lunch) [f]
                    do(dinner) [d2]
                        remind(dinner) [f]
        enough drink [d4]
            1.2 L/day [d7]
                do(drinkregularly) [d2]
                    remind(drinkregularly) [f]
        correct medication [d8]
            issued = prescribed [f]
                ◇(issued(m) ∧ prescribed(mp) ∧ neq(m, mp))
    keep safe [g]
        monitor [g]
            monitor behaviour [f]
                ◇(deteriorated ∧ ¬alerted)
            monitor critical incident [f]
                ◇(emergency ∧ ¬alerted)
        accompany excursion [d5]
            follow or delegate-by-informing [f]
                ◇(leave ∧ ¬follow ∧ ¬inform)

Figure 5: Refinement Tree for Tenet “do not harm”

This figure shows a process that began with the root node “harm” (the negated tenet), and was then refined by considering the goal-tree, where the goals “keep healthy” and “keep safe” both interfere with the negated tenet (and where the goal tree had been refined to create a “do not harm” parent to these goals).

Each of these goals is then refined by eliciting additional domain knowledge and applying that. Specifically, that being healthy involves having enough to eat, enough to drink, and taking medication; and that being safe involves ensuring adequate monitoring when leaving the house, and adequate monitoring to detect any health emergencies or accidents.

We then further refine these nodes with (existing) domain knowledge. For instance, that “not enough food” means fewer than three meals a day. We also use domain knowledge that ensuring (e.g.) three meals a day can be achieved by issuing appropriate reminders. Then, since this is sufficiently specific, we formalise this. Similarly, we can refine not having enough drink to drinking regularly, and formalise it in terms of reminders.

Then we can take the negated leaf nodes of the refinement tree, which gives us the formulae for verification: that the reminders for breakfast, lunch, dinner and regular drinking are issued as specified by the definitions of do and remind in Section 3.1, that any medication issued is the medication that was prescribed, that deterioration or an emergency always leads to an alert being raised, and that when the person leaves the house they are either followed or the carer is informed.

5 Next Steps

This working note presented a high-level process that can be used by a human designer to systematically identify a set of verification properties.

However, there is still much work to be done. In particular, we need to find answers to the following questions:

  • Can this process be used in practice by designers (other than the author of this paper)?

  • Can this process be used for a wider range of examples, including larger and more complicated ones?

  • How effective is this process at identifying important properties to be verified?

  • How can this process be supported by tools?

Other areas for future work include: richer representations (e.g. a logic with probabilities, other design models, a graphical model for domain knowledge); priorities between tenets, perhaps linking to ideas on human values (e.g. [8]); and dealing with conflicts between goals, for instance in the running example we want the system to obey the user, but what if the user asks the system to stop issuing reminders to drink?

References

  • [1] R.S. Boyer and J.S. Moore, editors. The Correctness Problem in Computer Science. Academic Press, London, 1981.
  • [2] E.M. Clarke, O. Grumberg, and D.A. Peled, editors. Model Checking. The MIT Press, 2000.
  • [3] R.A. DeMillo, R.J. Lipton, and A.J. Perlis. Social processes and proofs of theorems and programs. Communications of the ACM, 22(5):271–280, 1979.
  • [4] J.H. Fetzer. Program verification: The very idea. Communications of the ACM, 31(9):1048–1063, 1988.
  • [5] Savas Konur, Michael Fisher, Simon Dobson, and Stephen Knox. Formal verification of a pervasive messaging system. Formal Aspects of Computing, 26(4):677–694, 2014. doi:10.1007/s00165-013-0277-4.
  • [6] Lin Padgham and Michael Winikoff. Developing intelligent agent systems: A practical guide. John Wiley & Sons, Chichester, 2004. ISBN 0-470-86120-7.
  • [7] Ulrich Reiser, Christian Pascal Connette, Jan Fischer, Jens Kubacki, Alexander Bubeck, Florian Weisshardt, Theo Jacobs, Christopher Parlitz, Martin Hägele, and Alexander Verl. Care-o-bot 3 - creating a product vision for service robot applications by integrating design and technology. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1992–1998. IEEE, 2009. doi:10.1109/IROS.2009.5354526.
  • [8] SH Schwartz. An Overview of the Schwartz Theory of Basic Values. Online Readings in Psychology and Culture, 2(1), 2012. doi:10.9707/2307-0919.1116.
  • [9] A. van Lamsweerde. Goal-Oriented Requirements Engineering: A Guided Tour. In Proceedings of the 5th IEEE International Symposium on Requirements Engineering (RE’01), pages 249–263, Toronto, August 2001.
  • [10] Axel van Lamsweerde and Emmanuel Letier. Handling Obstacles in Goal-Oriented Requirements Engineering. IEEE Trans. Software Eng., 26(10):978–1005, 2000. doi:10.1109/32.879820.
  • [11] Michael Winikoff and Lin Padgham. Agent Oriented Software Engineering. In Gerhard Weiß, editor, Multiagent Systems, chapter 15, pages 695–757. MIT Press, 2 edition, 2013.