I Introduction
Goal-oriented requirements engineering (GORE) [34] is an essential phase of the software development life cycle, whose important task is to attain correct software requirements specifications. Many studies have demonstrated that formal and goal-oriented approaches help generate correct specifications [1, 7, 11]. In such approaches, domain properties and goals are represented in linear-time temporal logic (LTL), because LTL has proved convenient for abstracting specifications of a large class of requirements, assumptions, and domain properties [34].
The identify-assess-control cycle in GORE aims at identifying, assessing, and resolving inconsistencies, in which the goals of the requirements cannot be satisfied as a whole. A divergence is a weak inconsistency, i.e., particular circumstances under which the satisfaction of some goals inhibits the satisfaction of others. A divergence is captured by boundary conditions (BCs), which explain why the divergence happens. Various approaches [31, 10, 9] have been proposed to automatically identify BCs in the context of GORE.
As the number of BCs identified in the identification stage increases (for example, the London Ambulance Service case in [9] yields a large number of BCs), the assessment and resolution stages become very expensive, and even impractical. In order to provide engineers with an acceptable number of BCs to analyze, the generality metric (Definition 2) [9] has been proposed to automatically filter out the less general BCs. The generality metric qualitatively distinguishes the importance of BCs using the implication relationship between them. Intuitively, a more general (also known as weaker) BC is more important because it potentially covers more circumstances representing a divergence. Therefore, the less general BCs can be filtered out in favor of the more general ones.
Unfortunately, we observe that a set of general BCs still retains a large number of redundant BCs. The reason is that the generality metric is a coarse-grained metric: a general BC potentially captures redundant circumstances that do not lead to a divergence.
Furthermore, the accuracy of the likelihood-based assessment step is sensitive to redundant circumstances, so a set of general BCs can lead to mistakes in the assessment step (an example is shown in Section III). The assessment stage is concerned with evaluating how likely the identified conflicts are, and how likely and severe their consequences are. Degiovanni et al. [8] proposed an automatic assessment method based on model counting, which can be used to prioritize the BCs to be resolved. However, a set of general BCs misleads the prioritization because a general BC potentially captures redundant circumstances that do not lead to a divergence.
In this paper, we present a new metric to assess the differences among the divergences captured by BCs. Our approach is novel in the following respects: (1) it is a fine-grained metric, because it can filter out not only the less general BCs but also the BCs that capture the same divergence; and (2) it measures the differences between BCs from the different divergences captured by them. We first introduce the concept of contrasty of BCs, motivated by avoiding boundary conditions [31] in resolving divergences. More precisely, given two BCs φ1 and φ2, we consider whether φ1 ∧ ¬φ2 and φ2 ∧ ¬φ1 are BCs. φ1 ∧ ¬φ2 (resp. φ2 ∧ ¬φ1) represents the circumstances left after removing the circumstances captured by φ2 (resp. φ1) from those captured by φ1 (resp. φ2). If both φ1 ∧ ¬φ2 and φ2 ∧ ¬φ1 are still BCs, then φ1 and φ2 are contrastive. Intuitively, if two BCs are contrastive, they capture different divergences. We argue that a set of contrastive BCs should be recommended to engineers, rather than a set of general BCs, since the latter potentially only indicate the same divergence.
Based on the contrasty metric, we design a post-processing framework (PPFc) to produce a set of contrastive BCs after identifying BCs. Experimental results show that the contrasty metric can filter out all the BCs that capture the same divergence, which dramatically reduces the number of BCs recommended to engineers. Furthermore, experiments show that the BCs identified by the state-of-the-art method are not contrastive in most cases. In other words, these BCs capture the same divergence, so engineers need only consider one BC to resolve a divergence, while the others are redundant.
In order to improve efficiency, we propose a joint framework (JFc) to interleave assessment based on the contrasty metric with identifying BCs. Specifically, when a BC is identified during the search, we add its negation as an additional constraint to the domain properties. The additional constraint makes the domain properties change dynamically, so that the same circumstances cannot be identified as a BC again. The insight behind this is that it biases the search towards BCs that capture different divergences. Besides, we propose a sufficient condition for the case where no BC exists. It guarantees that if we resolve the divergences captured by the BCs in the set of contrastive BCs, no divergence remains under the original domain properties and goals. Experiments confirm the improvements of JFc in identifying contrastive BCs.
Our main contributions are summarized as follows.

We present the novel contrasty metric to evaluate the differences between BCs, which can filter out more redundant BCs that capture the same divergence.

We design a post-processing framework (PPFc) to produce a set of contrastive BCs. In order to improve efficiency, we also design a joint framework (JFc) to capture different divergences during the search.

Experiments show that the contrasty metric is better than the generality metric for filtering out redundant BCs.
II Background
In this section, we introduce the background of goal-conflict analysis and linear-time temporal logic, briefly recalling the basic notions used in the rest of the paper.
II-A Goal-Conflict Analysis
In GORE [34], goals are prescriptive statements that the system must achieve, and domain properties are descriptive statements that capture the domain of the problem world. In practice, it is unrealistic to require requirements specifications to be complete or all goals to be satisfiable, because inconsistencies may occur. Goal-conflict analysis [32, 34] deals with these inconsistencies via the following identify-assess-control cycle:

the identification stage identifies conditions whose occurrence causes inconsistencies;

the assessment stage is to assess and prioritize the identified inconsistencies according to their likelihood and severity;

the resolution stage is to resolve the identified inconsistencies by providing appropriate countermeasures.
Goal-Conflict Identification. In this paper, we focus on a weak inconsistency, i.e., divergence. A divergence is essentially represented by a boundary condition (BC) whose occurrence results in the loss of satisfaction of the goals, making the goals diverge [31].
Definition 1.
Let G = {G1, …, Gn} be a set of goals and Dom a set of domain properties. A divergence occurs within G iff there exists a boundary condition BC under Dom and G such that the following conditions hold:

(logical inconsistency)  Dom ∧ ⋀_{1≤i≤n} Gi ∧ BC ⊨ ⊥

(minimality)  Dom ∧ ⋀_{j≠i} Gj ∧ BC ⊭ ⊥, for each 1 ≤ i ≤ n

(non-triviality)  BC ≢ ¬(G1 ∧ … ∧ Gn)

where ⊥ denotes false and ≢ denotes logical non-equivalence.
Intuitively, a BC captures a particular combination of circumstances in which the goals cannot be satisfied as a whole. The logical inconsistency property means that the conjunction of the goals becomes inconsistent when BC holds. The minimality property states that disregarding any one of the goals no longer results in inconsistency. The non-triviality property forbids a BC from being the trivial condition ¬(G1 ∧ … ∧ Gn). Note that a BC cannot be false, due to the minimality property.
Specifying software requirements in the LTL formulation allows us to employ automated LTL satisfiability solvers to check for the feasibility of the corresponding requirements. With an efficient LTL satisfiability solver, we can automatically check if the generated candidate formulae are valid BCs or not by checking if they satisfy the properties.
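As a concrete reading of Definition 1, the three conditions can be checked mechanically once a satisfiability oracle is available. The sketch below is a toy propositional analogue of our own (temporal operators dropped, formulas encoded as Python predicates over truth assignments, satisfiability by brute-force enumeration); a real implementation would instead discharge each check with an LTL solver such as nuXmv or Aalta. The propositional "goals" mirror the MinePump example of Section III and are illustrative assumptions, not the paper's actual temporal formulae.

```python
from itertools import product

VARS = ['h', 'm', 'p']  # propositions of the running example (Section III)

def assignments():
    """Enumerate all truth assignments over VARS."""
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def satisfiable(f):
    return any(f(a) for a in assignments())

def equivalent(f, g):
    return all(f(a) == g(a) for a in assignments())

def conj(formulas):
    return lambda a: all(f(a) for f in formulas)

def is_bc(bc, dom, goals):
    """Check the three conditions of Definition 1 (propositional toy)."""
    # (1) logical inconsistency: Dom ∧ G1 ∧ ... ∧ Gn ∧ BC is unsatisfiable
    if satisfiable(conj([dom, bc] + goals)):
        return False
    # (2) minimality: dropping any single goal restores satisfiability
    for i in range(len(goals)):
        if not satisfiable(conj([dom, bc] + goals[:i] + goals[i + 1:])):
            return False
    # (3) non-triviality: BC is not simply the negation of the goal conjunction
    return not equivalent(bc, lambda a: not all(g(a) for g in goals))

# Propositional projections of the MinePump goals (illustrative only)
dom = lambda a: True
g1 = lambda a: (not a['h']) or a['p']        # h -> p
g2 = lambda a: (not a['m']) or (not a['p'])  # m -> not p
bc = lambda a: a['h'] and a['m']             # divergence: h and m together
```

Here `is_bc(bc, dom, [g1, g2])` holds, while the trivial condition ¬(G1 ∧ G2) is rejected by condition (3).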
In the identification stage, the generality metric [9] has been proposed to reduce redundant BCs. It is defined as follows.
Definition 2.
Let T be a set of BCs. A BC φ1 ∈ T is more general than another BC φ2 ∈ T if φ2 implies φ1.
Intuitively, a more general BC φ1 captures all the particular combinations of circumstances captured by any BC less general than φ1. Therefore, it is important to provide engineers with the more general BCs. As far as we know, the generality metric is the only existing metric to filter out BCs.
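Filtering by generality (Definition 2) then reduces to pairwise implication checks. In the sketch below, the implication test is a brute-force propositional stand-in of ours; with temporal formulas, φ2 ⇒ φ1 would instead be discharged by checking the unsatisfiability of φ2 ∧ ¬φ1 with an LTL solver. The two example BCs are hypothetical.

```python
from itertools import product

VARS = ['h', 'm', 'p']

def assignments():
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def implies(f, g):
    """f => g: every model of f is a model of g (propositional toy)."""
    return all((not f(a)) or g(a) for a in assignments())

def filter_by_generality(bcs):
    """Drop every BC that strictly implies (is less general than) another."""
    kept = []
    for i, bc in enumerate(bcs):
        dominated = any(i != j and implies(bc, other) and not implies(other, bc)
                        for j, other in enumerate(bcs))
        if not dominated:
            kept.append(bc)
    return kept

bc_weak = lambda a: a['h'] and a['m']                    # more general
bc_strong = lambda a: a['h'] and a['m'] and not a['p']   # implies bc_weak
```

Running `filter_by_generality([bc_weak, bc_strong])` keeps only the weaker (more general) BC.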
Goal-Conflict Assessment.
In the assessment stage, in order to give engineers more guidance on which BCs deserve attention, the probability [3] of their occurrence is considered an important indicator. For systems without extra probabilistic information, there is an approach [8] based on model counting to analyze the likelihood of BCs. It is defined as follows.

Definition 3.

Let φ be a BC, Dom domain properties, and k a positive integer. The likelihood of φ is L_k(φ) = #(Dom ∧ φ, k) / #(Dom, k), where #(S, k) denotes the total number of models of length k satisfying the constraints in S.
Intuitively, a larger likelihood of a BC indicates that the divergence captured by the BC is more likely to happen.
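To make Definition 3 concrete, the sketch below counts models explicitly: it enumerates all state sequences of length k and evaluates formulas under a finite-trace semantics. This is a toy approximation of our own; the method of [8] counts models of LTL formulas symbolically rather than by enumeration.

```python
from itertools import product
from fractions import Fraction

# Formulas as nested tuples: ('var','h'), ('not',f), ('and',f,g),
# ('or',f,g), ('X',f), ('U',f,g), ('F',f), ('G',f), ('true',)

def holds(f, trace, i=0):
    """Finite-trace semantics; X past the end of the trace is false."""
    op = f[0]
    if op == 'true':
        return True
    if op == 'var':
        return f[1] in trace[i]
    if op == 'not':
        return not holds(f[1], trace, i)
    if op == 'and':
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == 'or':
        return holds(f[1], trace, i) or holds(f[2], trace, i)
    if op == 'X':
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == 'F':
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'U':
        return any(holds(f[2], trace, j) and
                   all(holds(f[1], trace, m) for m in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

def count_models(f, props, k):
    """#(f, k): number of length-k state sequences satisfying f."""
    states = [frozenset(p for p, bit in zip(props, bits) if bit)
              for bits in product([0, 1], repeat=len(props))]
    return sum(1 for trace in product(states, repeat=k) if holds(f, trace))

def likelihood(bc, dom, props, k):
    return Fraction(count_models(('and', dom, bc), props, k),
                    count_models(dom, props, k))
```

For instance, over the propositions {h, m} with k = 2 and no domain constraint, the BC ◇(h ∧ m) holds on 7 of the 16 traces, so its likelihood is 7/16.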
Goal-Conflict Resolution. In the resolution stage, since the system malfunctions when it reaches the circumstances captured by a BC, the engineers need strategies to resolve the divergences captured by the BCs.
Definition 4.
Let Dom be domain properties, G = {G1, …, Gn} goals, and BC a BC under Dom and G. Resolving divergences aims to modify Dom and G into Dom′ and G′ = {G′1, …, G′m}, so that BC under Dom′ and G′ no longer fulfills at least one of the following constraints:

Dom′ ∧ ⋀_{1≤i≤m} G′i ∧ BC ⊨ ⊥;

Dom′ ∧ ⋀_{j≠i} G′j ∧ BC ⊭ ⊥, for each 1 ≤ i ≤ m;

BC ≢ ¬(G′1 ∧ … ∧ G′m).
Intuitively, after resolving divergences, the circumstances captured by the BC no longer occur under the new system expressed by the updated domain properties and goals. Van Lamsweerde et al. [31] noted that generating reasonably updated domain properties and goals is an open problem because it requires a lot of experience. We will illustrate an example of resolving divergences in Section VIII. Therefore, a large number of identified BCs makes the resolution stage very expensive.
A straightforward strategy can be adopted to avoid the circumstances captured by a BC. The avoid pattern [31] is therefore introduced: □¬φ, where φ denotes a BC to be inhibited.
II-B Linear-Time Temporal Logic
Linear-Time Temporal Logic (LTL) [29] is widely used to describe infinite behaviors of discrete systems, which makes it suitable for specifying software requirements [31]. Throughout this paper, we use lower-case letters (e.g., p, q) to denote propositions. The syntax of LTL over a finite set of propositions includes the standard logical connectives (¬, ∧, ∨) and the temporal operators next (○) and until (U).
The operators release (R), eventually (◇), always (□), and weak-until (W) are commonly used, and can be defined as φ R ψ ≡ ¬(¬φ U ¬ψ), ◇φ ≡ true U φ, □φ ≡ ¬◇¬φ, and φ W ψ ≡ (φ U ψ) ∨ □φ, respectively. We use |φ| to denote the size of the formula φ, i.e., the number of temporal operators, logical connectives, and literals in φ.
LTL formulae are interpreted over a linear-time structure. A linear-time structure is a pair π = (S, L), where S = s0, s1, … is a state sequence and L is a function mapping each state to the set of propositions holding in it. Let π be a linear-time structure, i ≥ 0 a position, and φ, ψ two LTL formulae. The satisfaction relation ⊨ is defined as follows:

π, i ⊨ p iff p ∈ L(si);

π, i ⊨ ¬φ iff π, i ⊭ φ;

π, i ⊨ φ ∧ ψ iff π, i ⊨ φ and π, i ⊨ ψ;

π, i ⊨ ○φ iff π, i+1 ⊨ φ;

π, i ⊨ φ U ψ iff there exists j ≥ i such that π, j ⊨ ψ and π, k ⊨ φ for all i ≤ k < j.
An LTL formula φ is called satisfiable if and only if there is a linear-time structure (model) satisfying φ. An LTL formula φ implies an LTL formula ψ, noted φ ⇒ ψ, if the models of φ are also models of ψ. The LTL satisfiability problem is to check whether an LTL formula is satisfiable, which is PSPACE-complete [30]. Recently, LTL satisfiability checkers based on different techniques have been developed; among them, nuXmv [6] and Aalta [22] have achieved the best performance.
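The semantics above can be executed directly on ultimately periodic structures, which is the shape of models produced by LTL satisfiability checkers. The following sketch (encodings are our own) evaluates a formula on a lasso: a finite prefix followed by a loop repeated forever. Because the truth of a formula at a position depends only on the suffix, one sweep over the prefix-plus-loop positions suffices for ◇, □, and U.

```python
# A linear-time structure is encoded as (prefix, loop): the state sequence
# s0 .. s_{P-1} (l0 .. l_{L-1})^omega, each state a set of propositions.
# Formulas are nested tuples: ('var','p'), ('not',f), ('and',f,g),
# ('or',f,g), ('X',f), ('U',f,g), ('F',f), ('G',f).

def _succ(c, P, L):
    """Successor position, folding the loop back onto itself."""
    return c + 1 if c < P + L - 1 else P

def _state(c, prefix, loop):
    return prefix[c] if c < len(prefix) else loop[c - len(prefix)]

def sat(f, prefix, loop, c=0):
    P, L = len(prefix), len(loop)
    op = f[0]
    if op == 'var':
        return f[1] in _state(c, prefix, loop)
    if op == 'not':
        return not sat(f[1], prefix, loop, c)
    if op == 'and':
        return sat(f[1], prefix, loop, c) and sat(f[2], prefix, loop, c)
    if op == 'or':
        return sat(f[1], prefix, loop, c) or sat(f[2], prefix, loop, c)
    if op == 'X':
        return sat(f[1], prefix, loop, _succ(c, P, L))
    if op in ('F', 'G'):
        cur, results = c, []
        for _ in range(P + L):       # all reachable positions are visited
            results.append(sat(f[1], prefix, loop, cur))
            cur = _succ(cur, P, L)
        return any(results) if op == 'F' else all(results)
    if op == 'U':
        cur = c
        for _ in range(P + L):       # psi must occur within one sweep
            if sat(f[2], prefix, loop, cur):
                return True
            if not sat(f[1], prefix, loop, cur):
                return False
            cur = _succ(cur, P, L)
        return False
    raise ValueError(op)
```

For instance, on the structure {h}({p})^ω, the goal □(h → ○p), encoded as `('G', ('or', ('not', ('var','h')), ('X', ('var','p'))))`, evaluates to true, while ◇(h ∧ m) evaluates to false.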
III Motivating Example
In this section, we illustrate the drawbacks of the generality metric through an example, MinePump [20], and discuss the insights behind the contrasty metric.
Example 1.
Consider a system to control a pump inside a mine. The main goal of the system is to avoid flooding in the mine. The system has two sensors: one detects a high water level (h), and the other detects methane in the environment (m). When the water level is high, the system should turn on the pump (p). When there is methane in the environment, the pump should be turned off. The domain property (Dom) and the goals (G1, G2) are represented via the following LTL formulae.
Domain Property:

Name: PumpEffect (Dom)
Description: When the pump is turned on for two time steps, in the following one the water level is not high.
Formula: □((p ∧ ○p) → ○○¬h)

Goals:

Name: NoFlooding (G1)
Description: When the water level is high, the system should turn on the pump.
Formula: □(h → ○p)

Name: NoExplosion (G2)
Description: When there is methane in the environment, the pump should be turned off.
Formula: □(m → ○¬p)
Although the specification is consistent, i.e., all domain properties and goals can simultaneously be satisfied, this specification exhibits some goal divergences. One of the BCs is φ1 = ◇(h ∧ m), which captures the circumstances where the high water level and the methane occur at the same time. Under this situation, the two goals cannot be satisfied simultaneously within the domain property.
We also consider two other BCs, φ2 and φ3. φ2 = h ∧ m captures the circumstances where the water level is high and the methane occurs at the beginning. Through equivalent transformation of φ3, we can see that φ3 captures five circumstances, where, in the future, the system migrates from the state in which the high water level occurs, the methane does not occur, and the pump is turned on (h ∧ ¬m ∧ p) to one of the states described as follows.

the high water level does not occur, the methane occurs, and the pump is not turned on (¬h ∧ m ∧ ¬p);

the high water level and the methane occur and the pump is turned on (h ∧ m ∧ p);

the high water level and the methane do not occur and the pump is not turned on (¬h ∧ ¬m ∧ ¬p);

the high water level and the methane occur and the pump is not turned on (h ∧ m ∧ ¬p);

the high water level occurs, the methane does not occur, and the pump is not turned on (h ∧ ¬m ∧ ¬p).
Existing methods can search for a large number of BCs, which makes the assessment and resolution stages very expensive, and even impractical. In order to provide engineers with an acceptable number of BCs to analyze, it is necessary to propose a metric to filter out the redundant BCs.
If we apply the generality metric, we filter out φ2 because φ1 is more general than φ2. However, the generality metric cannot evaluate φ1 against φ3, since no generality relationship holds between them. In the assessment stage, if we compute the likelihood based on the method of [8], we classify φ3 as being more likely than φ1 in the long term, and engineers should prioritize φ3 in the search for mechanisms that would allow us to reduce the chances of reaching it. Unfortunately, the assessment method [8] lacks accuracy when computing the likelihood by model counting, because some of the counted models are meaningless, i.e., there do not exist circumstances leading to the divergence in reality. Considering the circumstances captured by φ3, we observe that circumstances (1), (3), (4), and (5) violate G1. Therefore, they cannot satisfy the minimality condition of a BC, which means that they cannot capture a divergence in reality. These circumstances are redundant, so only circumstance (2) genuinely stands for the circumstances captured by φ3. Discounting the redundant circumstances, φ1 is more likely than φ3 under the assessment method [8], so φ1 should actually be prioritized. Situations like this show that the accuracy of the likelihood-based assessment method is sensitive to redundant circumstances.
In addition, we find that φ1, φ2, and φ3 capture the same divergence, in which the high water level and the methane occur at the same time, i.e., the circumstances captured by φ1. It is very useful to identify a BC like φ1 when resolving divergences: engineers only resolve φ1 instead of resolving φ2 first and then φ3. This avoids wasting computing resources on assessing and resolving redundant BCs.
In this paper, motivated by avoiding boundary conditions [31] in resolving divergences, we introduce the concepts of witness (Definition 5) and contrasty (Definition 6) of BCs. Intuitively, the witness of a BC indicates the cause of a divergence. If neither of two BCs is a witness of the other, the two BCs are contrastive, i.e., they capture different divergences. In this case, φ1 and φ3 are not contrastive because φ1 is a witness of φ3, but not vice versa, which means that the divergences captured by φ1 are wider than those captured by φ3. Therefore, we recommend φ1 to engineers and filter out φ3.
IV Identifying Boundary Conditions with Contrasty Metric
In this section, we first introduce the concept of contrasty of BCs. Then, we design a post-processing framework to identify a set of contrastive BCs.
IV-A Contrasty
We first introduce the concepts of witness and contrasty.
Definition 5.
Let φ be an LTL formula and ψ a BC. φ is a witness of ψ iff ψ ∧ ¬φ is not a BC.
In the definition, motivated by avoiding boundary conditions [31] in resolving divergences, we use the negated formula ¬φ to avoid some circumstances, i.e., to resolve the divergence. Therefore, a witness φ of a BC ψ indicates why ψ is a BC: if ψ ∧ ¬φ is not a BC, the divergence captured by ψ is also captured by φ.
Definition 6.
Let φ1 and φ2 be BCs. φ1 and φ2 are contrastive iff φ1 is not a witness of φ2 and φ2 is not a witness of φ1.
Definition 7.
Let T be a set of BCs. T is contrastive iff, for any two distinct BCs φ1, φ2 ∈ T, φ1 and φ2 are contrastive.
Intuitively, the contrastive BCs capture different divergences. We use an example to illustrate the definition of witness and contrasty.
Example 2 (Example 1 cont.).
Let φ1 = ◇(h ∧ m) and φ2 = h ∧ m, and let φ3 be the BC discussed in Section III. Because φ1 ∧ ¬φ3 is also a BC, e.g., it captures the circumstances where the high water level and the methane occur at the beginning, φ3 is not a witness of φ1. φ1 is a witness of φ3, since φ3 ∧ ¬φ1 does not satisfy the minimality constraint of BC, i.e., Dom ∧ G1 ∧ φ3 ∧ ¬φ1 is unsatisfiable. Therefore, φ1 and φ3 are not contrastive. φ1 is a witness of φ2 and φ2 is not a witness of φ1, so φ1 and φ2 are not contrastive. Intuitively, φ1 is more important than φ2, since the divergence captured by φ1 is wider than that captured by φ2 (φ2 is a special case of φ1). φ2 and φ3 are contrastive, since they express the occurrence of the high water level and the methane in different situations.
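Under the same propositional toy semantics used earlier for Definition 1, the witness and contrasty checks of Definitions 5 and 6 reduce to BC checks on conjunctions of the form ψ ∧ ¬φ. The three BCs below are hypothetical examples of ours, chosen so that all of them pass Definition 1; they are not the temporal BCs φ1, φ2, φ3 of the running example.

```python
from itertools import product

VARS = ['h', 'm', 'p']

def assignments():
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def satisfiable(f):
    return any(f(a) for a in assignments())

def equivalent(f, g):
    return all(f(a) == g(a) for a in assignments())

def conj(fs):
    return lambda a: all(f(a) for f in fs)

def is_bc(bc, dom, goals):
    """Definition 1, propositional toy version."""
    if satisfiable(conj([dom, bc] + goals)):
        return False                      # logical inconsistency violated
    for i in range(len(goals)):
        if not satisfiable(conj([dom, bc] + goals[:i] + goals[i + 1:])):
            return False                  # minimality violated
    return not equivalent(bc, lambda a: not all(g(a) for g in goals))

def witness(phi, psi, dom, goals):
    """Definition 5: phi is a witness of psi iff psi and not(phi) is not a BC."""
    return not is_bc(lambda a: psi(a) and not phi(a), dom, goals)

def contrastive(f, g, dom, goals):
    """Definition 6: neither f nor g is a witness of the other."""
    return not witness(f, g, dom, goals) and not witness(g, f, dom, goals)

dom = lambda a: True
g1 = lambda a: (not a['h']) or a['p']        # h -> p
g2 = lambda a: (not a['m']) or (not a['p'])  # m -> not p
goals = [g1, g2]

# Three hypothetical BCs (all pass Definition 1 in this toy setting):
b1 = lambda a: a['h'] and a['m']
b2 = lambda a: (a['h'] and not a['m'] and not a['p']) or \
               (not a['h'] and a['m'] and a['p'])
b3 = lambda a: (a['h'] and a['m']) or (a['h'] and not a['m'] and not a['p'])
```

Here b1 and b2 are contrastive, while b1 and b3 are mutual witnesses and thus capture the same divergence; PPFc would keep only the smaller of the two.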
Based on the definitions, we have the following theorems, which highlight the advantages of the contrasty metric.
Theorem 1.
Let φ1 and φ2 be BCs. If φ1 ⇒ φ2, then φ2 is a witness of φ1.
It is straightforward to prove Theorem 1, because φ1 ∧ ¬φ2 is unsatisfiable. Because of Theorem 1, we have Theorem 2.
Theorem 2.
Let T be a set of contrastive BCs. For any two distinct BCs φ1, φ2 ∈ T, φ1 ⇏ φ2.
Theorem 2 shows that no generality relation holds between any two BCs in a contrastive BC set, while a witness relation can hold between two BCs in a general BC set. According to Theorem 2, the contrasty metric can be regarded as a finer-grained metric than the generality metric, because it can filter out more redundant BCs. Let us recall Example 1: {φ1, φ3} is general, but not contrastive; if the contrasty metric is considered, then {φ1} is contrastive.
Property 1.
Let φ1 and φ2 be BCs. If φ1 is a witness of φ2 and φ2 is not a witness of φ1, then resolving the divergence captured by φ1 also resolves the divergence captured by φ2.
Property 1 shows that it is reasonable for engineers to prioritize resolving φ1, since the circumstances captured by φ1 include those captured by φ2.
Theorem 3.
Let φ1 and φ2 be two BCs. If φ1 and φ2 are contrastive, then φ1 and φ2 capture different divergences.
Sketch of proof.
φ1 and φ2 are contrastive, so φ2 (resp. φ1) is not a witness of φ1 (resp. φ2), which means that φ1 ∧ ¬φ2 (resp. φ2 ∧ ¬φ1) is still a BC. The primary intuition behind φ1 ∧ ¬φ2 (resp. φ2 ∧ ¬φ1) is that, after resolving the divergences captured by φ2 (resp. φ1), there are still divergences captured by φ1 (resp. φ2). Therefore, φ1 and φ2 capture different divergences. ∎
Theorem 3 shows that contrastive BCs capture different divergences. Therefore, it is meaningful to recommend a set of contrastive BCs to engineers.
According to the above analysis, we argue that a set of contrastive BCs should be recommended to engineers, rather than a set of general BCs since they potentially only indicate the same divergences. In Section V, we will discuss the different divergences captured by contrastive BCs and report the advantage of the contrasty metric.
IV-B Post-Processing Framework
We design a post-processing framework (PPFc) for filtering BCs based on the contrasty metric. It takes as input a set of BCs T identified by a BC solver. Its output is a set of contrastive BCs T_c.
The pseudo code is outlined in Algorithm 1. At each iteration, we choose a BC φ1 from T and examine its relationship with each other BC φ2 via the subroutine externalContrastyFilter (Algorithm 2). If φ1 and φ2 are witnesses of each other, which means that they capture the same divergences, we select the one with smaller size to stay in T_c (the BC with smaller size is more compact and easier to interpret). If φ2 is a witness of φ1 and φ1 is not a witness of φ2, which means that the divergences captured by φ2 are wider than those captured by φ1, we retain φ2 and remove φ1; in the symmetric case, we remove φ2. If φ1 and φ2 are not witnesses of each other, we delete neither, because they are contrastive.
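The filtering loop can be rendered as the simplified sketch below (our own, not a literal transcription of Algorithm 1): the witness test is passed in as an oracle, so the same skeleton works with a propositional toy or a full LTL solver, and processing BCs in increasing size order realizes the smaller-size tie-break for mutual witnesses.

```python
def ppfc(bcs, is_witness, size):
    """Post-processing filter: keep a contrastive subset of bcs.
    is_witness(a, b) asks whether a is a witness of b."""
    kept = []
    for bc in sorted(bcs, key=size):
        redundant = False
        for other in list(kept):
            if is_witness(other, bc):
                # 'other' covers bc's divergence (mutually, or one-way with
                # 'other' wider): bc is redundant
                redundant = True
                break
            if is_witness(bc, other):
                # bc covers 'other' one-way: 'other' is redundant
                kept.remove(other)
        if not redundant:
            kept.append(bc)
    return kept
```

As a toy exercise, BCs can be modeled as sets of "circumstances", with a a witness of b exactly when a covers all of b's circumstances; the filter then keeps maximal sets plus any incomparable ones.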
Theorem 4.
When Algorithm 1 terminates, T_c is contrastive.
It is straightforward to prove Theorem 4. Theorem 4 guarantees that Algorithm 1 returns a set of contrastive BCs. We illustrate our method through a running example as follows.
Example 3 (Example 1 cont.).
Assume that the BC solver returns the set of BCs T = {φ1, φ2, φ3}, where φ1 = ◇(h ∧ m), φ2 = h ∧ m, and φ3 is the BC discussed in Section III. T_c is initialized to T. At the first iteration, assume that PPFc chooses φ2. φ2 is compared with φ1 and φ3. Because φ2 is not a witness of φ1 and φ1 is a witness of φ2, externalContrastyFilter returns False and an empty set, and T_c is updated to {φ1, φ3}. At the second iteration, assume that PPFc chooses φ1. φ1 is compared with φ3. Because φ1 is a witness of φ3 and φ3 is not a witness of φ1, externalContrastyFilter returns True and {φ3}, and T_c is updated to {φ1}. Then PPFc returns {φ1} and terminates.
IV-C Discussion about Completeness and Performance
In this paper, we are not concerned with the completeness of identifying contrastive BCs, i.e., the divergences captured by contrastive BCs cover all the divergences captured by BCs that have been found. The reason is as follows.
First, we focus on filtering out redundant BCs to better resolve divergences, which is the fundamental purpose of GORE. In general, the better the identification result, the easier the resolution stage. Therefore, we argue that the identified BCs should be as conducive to resolving divergences as possible, rather than complete.
Furthermore, the completeness of the BC set does not help to resolve divergences. Resolving divergences is a dynamic process: after resolving a BC, some BCs in the original set are no longer BCs under the updated domain properties and goals. Thus, for resolving divergences, it is meaningless to compute the complete BC set in the identification stage.
For example, a set of general BCs fulfills completeness, but it still retains a large number of redundant BCs that capture the same divergences, so engineers would do a lot of meaningless work when resolving divergences. In other words, although the generality metric satisfies completeness, it also increases the burden of resolving divergences. By comparison, the contrasty metric puts the optimization of resolving divergences first.
PPFc only begins to filter out redundant BCs after the BC solver returns a set of BCs. A natural idea is to identify contrastive BCs directly while searching for BCs. In this way, BCs capturing the same divergence can be pruned during the search, thereby speeding it up. Based on this idea, we discuss a joint framework for identifying contrastive BCs in Section VI.
V Evaluation of Contrasty
In this section, we report the advantages of the contrasty metric. Here, we present the first research question.
RQ 1.
Compared with the generality metric, what are the advantages of the contrasty metric?
Given a set of BCs identified by a BC solver, we applied different metrics to filter out redundant BCs. As the competitor, we combined the generality metric and the likelihood to filter and sort the BCs: we first filtered out the less general BCs to produce a set of general BCs T_g, and then sorted them by likelihood from high to low. Based on PPFc, we computed a set of contrastive BCs T_c and sorted it by likelihood as well. We analyzed the shortcomings of the generality metric and report the advantages of the contrasty metric by comparing T_g and T_c.
Case  #Dom  #Goal  #Var  Size  

RetractionPattern1 (RP1)  0  2  2  9  
RetractionPattern2 (RP2)  0  2  4  10  
Elevator (Ele)  1  1  3  10  
TCP  0  2  3  14  
AchieveAvoidPattern (AAP)  1  2  4  15  
MinePump (MP)  1  2  3  21  
ATM  1  2  3  22  
Rail Road Crossing System (RRCS)  2  2  5  22  
Telephone (Tel)  3  2  4  31  
London Ambulance Service (LAS)  0  5  7  32  
Prioritized Arbiter (PA)  6  1  6  57  
Round Robin Arbiter (RRA)  6  3  4  77  
Simple Arbiter (SA)  4  3  6  84  
Load Balancer (LB)  3  7  5  85  
LiftController (LC)  7  8  6  124  

AMBA  6  21  16  415
TABLE II: For each case, the ranks (by likelihood, from high to low) of the BCs in T_g (column 'GL') and in T_c (column 'CL'); for each BC in T_c, the column 'Witness' lists the GL ranks of the BCs in T_g that it is a witness of.

Case  GL ranks  CL rank: Witness (GL ranks)
RP1  1–4  1: 1,2,3,4
RP2  1–3  1: 1,2,3
Ele  1–5  1: 1,3;  2: 2,4;  3: 2,3,5
TCP  1–3  1: 1,2,3
AAP  1–5  1: 1,2,3,5;  2: 4,5
MP  1–6  1: 1,2,3,4,5,6
ATM  1–8  1: 1,2,3,4,5,6,7,8
RRCS  1–4  1: 1,2,3,4
Tel  1–4  1: 1,2,3,4
RRA  1–5  1: 1,2,3,4,5
V-A Benchmarks
We evaluated contrasty on the cases introduced by [9]. The details of each case are shown in Table I, including the number of domain properties (column '#Dom'), goals ('#Goal'), and variables ('#Var'), and the total size of all formulae ('Size') in the specification of each case. The cases are sorted by total formula size in ascending order.
V-B Experimental Setups
We used the following experimental setups.

We employed the state-of-the-art BC solver (http://dc.exa.unrc.edu.ar/staff/rdegiovanni/ASE2018.html) [9], denoted by GA, which is based on a genetic algorithm to search for BCs.

We followed the configuration of GA described in [9], including the size of the initial population generated from the specification and the limit of generations, i.e., the number of evolutions of the genetic-algorithm population.

For each case, we ran the algorithm 10 times and report the mean.

All the experiments were run on an Intel machine under GNU/Linux (Ubuntu).
V-C Experimental Results
Table III summarizes the number of BCs in T ('|T|'), T_g ('|T_g|'), and T_c ('|T_c|'), where the column '#suc.' gives the number of successful runs (out of 10). If GA fails in all 10 runs, the results are marked 'N/A'. Overall, our method can handle all the cases in which GA succeeds in identifying BCs; clearly, if the solver cannot identify any BC, our method cannot perform the post-processing.
Case  |T|  |T_g|  |T_c|  #suc.

RP1  37.1  3.2  1.2  10 
RP2  35.1  2.6  1.2  10 
Ele  28  3.2  2.6  10 
TCP  53.9  2.1  1.5  10 
AAP  50.3  3.7  1.8  10 
MP  40.7  4.5  1.4  10 
ATM  64.4  3.4  1.2  10 
RRCS  27.9  3  1  10 
Tel  36.5  3  1  2 
LAS  N/A  N/A  N/A  N/A 
PA  N/A  N/A  N/A  N/A 
RRA  40.571  3.14  1  7 
SA  N/A  N/A  N/A  N/A 
LB  N/A  N/A  N/A  N/A 
LC  N/A  N/A  N/A  N/A 
AMBA  N/A  N/A  N/A  N/A 
For most cases, GA returns a large number of BCs thanks to the development of search-based methods. Note that such a large set of BCs can cause a huge burden in the assessment and resolution stages. Comparing the columns '|T_g|' and '|T_c|', we observe that the size of T_c is much smaller than that of T_g for all cases. This means that, compared with the generality metric, the contrasty metric considerably reduces the number of BCs to be analyzed by engineers.
Table II summarizes the results of the different metrics for the BCs identified in each of the case studies. For each case, we display the run (out of 10) in which GA identified the most BCs. The column 'GL' (resp. 'CL') lists the BCs in T_g (resp. T_c) together with their rank ('Rank') based on the likelihood metric; we also use the rank as the identifier of a BC. For every BC in T_c, the column 'Witness' reports the ranks of the BCs in T_g that it is a witness of.
For all cases, T_c is much smaller than T_g and is a subset of T_g, which confirms that the contrasty metric is finer-grained than the generality metric. The results also show that a set of general BCs still retains BCs that represent the same divergence. In particular, for MP, ATM, and RRA, there are too many redundant BCs to assess and resolve divergences efficiently.
From the column 'Witness', every BC in T_g has a witness in T_c. This observation means that the BCs in T_c capture all the divergences captured by the BCs in T_g. Therefore, engineers only need to consider the BCs in T_c when resolving divergences. In addition, we observe that the contrastive BCs rank lower in T_g in Ele, TCP, AAP, MP, RRCS, Tel, and RRA. The reason, as mentioned above, is that circumstances that do not describe a divergence distort the likelihood. Such mistakes are serious: they prevent engineers from quickly grasping the main cause of a divergence, and lead to costly, repeated assessment and resolution of the same divergence.
In summary, the generality metric cannot capture the difference between BCs. Surprisingly, many of the BCs identified by the state-of-the-art BC solver are redundant in most cases, which puts a heavy burden on assessing and resolving divergences. The method we propose handles this comparison well and gives recommendations that are more conducive to saving the costs of assessing and resolving divergences.
VI Joint Framework
In this section, we design a joint framework (JFc) to interleave filtering based on the contrasty metric with identifying BCs. We first introduce the termination condition for identifying BCs and then propose JFc.
Motivated by the blocking-clause approach to the AllSAT problem [25], we exclude the circumstances captured by already identified BCs from the search process, to generate a search bias towards BCs that capture different divergences. Specifically, in the process of searching for BCs, once a BC φ is identified, we add ¬φ as an additional constraint to the domain properties. The additional constraint makes the domain properties change dynamically, preventing the same circumstances from being identified as a BC again (Theorem 7). Moreover, we prove that the BCs identified under the additional constraints are also BCs under the original domain properties and goals (Theorem 6).
Before introducing JFc, we first propose a sufficient condition for the case where no BC exists (called the BC termination condition).
Theorem 5.
Let Dom be domain properties and G = {G1, …, Gn} goals. If there exists 1 ≤ i ≤ n such that Dom ∧ ⋀_{j≠i} Gj ⊨ Gi, then there does not exist a BC under Dom and G.
Sketch of proof.
We prove that if there exists a BC, then for every 1 ≤ i ≤ n, Dom ∧ ⋀_{j≠i} Gj ⊭ Gi. If there is a BC under Dom and G, then Dom ∧ ⋀_{1≤i≤n} Gi ∧ BC ⊨ ⊥ (logical inconsistency) and Dom ∧ ⋀_{j≠i} Gj ∧ BC ⊭ ⊥ (minimality). Because of the logical inconsistency, we have Dom ∧ ⋀_{j≠i} Gj ∧ BC ⊨ ¬Gi, so every model of Dom ∧ ⋀_{j≠i} Gj ∧ BC falsifies Gi. Considering the minimality, such a model exists, i.e., Dom ∧ ⋀_{j≠i} Gj ⊭ Gi. ∎
Based on Theorem 5, we can check whether there still exists a BC under the dynamically changing domain properties and goals.
JFc takes the domain properties Dom and goals G as inputs. Its output is a set of contrastive BCs T_c. The pseudo code is outlined in Algorithm 3. In order to identify BCs, we invoke an existing BC solver, e.g., GA [9] or Tab [10]. Note that the solver works under the dynamically changing domain properties Dom ∧ ⋀_{ψ∈T_c} ¬ψ. If the BC solver terminates without finding a BC, we return T_c. Otherwise, unlike PPFc, we update T_c whenever a new BC φ is identified: we only remove from T_c the BCs that φ is a witness of (Algorithm 4), because none of the BCs in T_c is a witness of φ (Theorem 7). Afterward, if a BC may still exist under the updated Dom and G according to the termination condition, we continue to invoke the BC solver; otherwise, we return T_c.
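The joint loop itself can be sketched independently of the underlying solver. Everything below is schematic and our own: `find_bc` stands for one invocation of a BC solver under the current (strengthened) domain and returns None once no BC exists; `add_constraint` conjoins the negation of the new BC; the toy instantiation models formulas as sets of circumstances purely so the loop can be exercised.

```python
def jfc(find_bc, add_constraint, is_witness, dom):
    """Joint framework sketch: interleave BC search with contrasty filtering."""
    kept, cur = [], dom
    while True:
        bc = find_bc(cur)
        if bc is None:          # termination condition: no further BC exists
            return kept
        # By Theorem 7, no kept BC is a witness of bc; bc may still be a
        # witness of previously kept BCs, which then become redundant.
        kept = [old for old in kept if not is_witness(bc, old)]
        kept.append(bc)
        cur = add_constraint(cur, bc)   # conjoin the negation of bc

# Toy instantiation: formulas are sets of circumstances.
DIVERGENT = {1, 2, 3}                   # circumstances that cause divergence

def find_bc(dom):
    left = sorted(DIVERGENT & dom)
    return frozenset(left[:1]) if left else None

def add_constraint(dom, bc):
    return dom - bc                     # exclude bc's circumstances

toy_witness = lambda a, b: b <= a       # a covers all of b's circumstances
```

In the toy run, each identified "BC" removes its circumstances from the domain, so the loop terminates with three mutually non-witness BCs, mirroring Theorem 8.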
Theorem 6.
Let Dom be the domain properties, G the goals, and T a set of BCs that has been identified. An LTL formula φ is a BC under Dom and G if φ is a BC under Dom ∧ ⋀_{ψ∈T} ¬ψ and G.
Sketch of proof.
Because φ is a BC under Dom ∧ ⋀_{ψ∈T} ¬ψ and G, we have that Dom ∧ ⋀_{ψ∈T} ¬ψ ∧ φ ∧ G_1 ∧ ⋯ ∧ G_n is unsatisfiable. Therefore, the logical inconsistency holds. Because Dom ∧ ⋀_{ψ∈T} ¬ψ ∧ φ ∧ ⋀_{j≠i} G_j is satisfiable for every i, and this formula strengthens Dom ∧ φ ∧ ⋀_{j≠i} G_j, the minimality holds. The non-triviality obviously holds. ∎
Theorem 6 shows that although the additional constraints are considered, the results are still BCs under the original domain properties and goals.
Theorem 7.
In Algorithm 3, whenever a new BC φ is identified, there does not exist ψ ∈ T such that ψ is a witness of φ.
Sketch of proof.
We prove Theorem 7 by induction as follows.

At the first iteration, where T is the empty set, assume we get a BC φ_1; Theorem 7 trivially holds.

We suppose that at the k-th iteration, where we get a BC φ_k, Theorem 7 holds.

At the (k+1)-th iteration, where T ⊆ {φ_1, …, φ_k}, assume we get a BC φ_{k+1}. Because φ_{k+1} is a BC under Dom ∧ ¬φ_1 ∧ ⋯ ∧ ¬φ_k and G, the circumstances captured by φ_1, …, φ_k are excluded from those captured by φ_{k+1}. Therefore, for every ψ ∈ T, ψ is not a witness of φ_{k+1}. Because of Theorem 6, φ_{k+1} is a BC under Dom and G. ∎
Intuitively, based on Theorem 7, JFc can produce a search bias towards the BCs that capture different divergences.
Theorem 8.
In Algorithm 3, the BCs in the final set are not witnesses of each other.
Theorem 8 follows straightforwardly from Theorem 7 and Algorithm 4. It guarantees that Algorithm 3 returns a set of contrastive BCs.
TABLE IV: Overall performance of PPFc and JFc.

      |                  PPFc                          |              JFc
Case  | GA t. (s)  #cBC  filter t. (s)  t. (s)  #suc.  | #cBC  #BC  #T  t. (s)  #suc.
RP1   | 37.1       1.2   157.4          224.53  10     | 1     1    10  29.5    10
RP2   | 35.1       1.2   130.2          206     10     | 1.1   1.1  10  78.9    10
Ele   | 28         2.6   45.8           88.01   10     | 2.1   2.1  10  43.4    10
TCP   | 53.9       1.5   225.1          308.26  10     | 1.4   1.4  0   801.6   10
AAP   | 50.3       1.8   65.3           208.64  10     | 1     1    10  41.3    10
MP    | 40.7       1.4   59.3           146.02  10     | 1     1    10  60.8    10
ATM   | 64.4       1.2   102.2          259.19  10     | 1     1    10  25.2    10
RRCS  | 27.9       1     68.3           91.87   10     | 1     1    10  15      10
Tel   | 36.5       1     35.3           46.53   2      | 1     1    10  27      10
LAS   | N/A        N/A   N/A            N/A     0      | N/A   N/A  0   N/A     0
PA    | N/A        N/A   N/A            N/A     0      | N/A   N/A  0   N/A     0
RRA   | 40.571     1     696.43         878.7   7      | 1     1    10  255.1   10
SA    | N/A        N/A   N/A            N/A     0      | N/A   N/A  0   N/A     0
LB    | N/A        N/A   N/A            N/A     0      | N/A   N/A  0   N/A     0
LC    | N/A        N/A   N/A            N/A     0      | N/A   N/A  0   N/A     0
AMBA  | N/A        N/A   N/A            N/A     0      | N/A   N/A  0   N/A     0
VII Experiments
In this section, we conduct extensive experiments on a broad range of benchmarks, shown in Table I, to evaluate the performance of JFc by comparing it with PPFc. We first present the research question.
RQ 2.
What is the performance of the joint framework (JFc) for producing the contrastive BC set compared with the postprocessing framework (PPFc)?
VII-A Experimental Setups
VII-B Experimental Results
Table IV shows the overall performance of PPFc and JFc, including the running time of GA (‘GA t.’), the running time of the whole framework (‘t.’), and the number of runs meeting the BC termination condition (‘#T’). In JFc, B records all BCs identified during the search.
In terms of the number of contrastive BCs, JFc obtains slightly fewer than PPFc. This is because JFc not only considers the contrasty between BCs but also the BC termination condition: JFc searches for a set of contrastive BCs that is large enough that no BC remains in the domain properties and goals once these contrastive BCs are excluded. We also observe that the size of B for JFc is much smaller than that for PPFc. Moreover, for JFc, the size of B is close to the number of contrastive BCs. These observations show that JFc produces a strong search bias towards BCs that are contrastive with the already identified ones.
In PPFc, the running time of GA is approximately of the same order as that of producing a set of contrastive BCs, and the latter increases as the number of BCs identified by GA increases. In particular, in AAP, MP, and ATM, the running time of producing a set of contrastive BCs is noticeably larger than that of GA. This indicates the drawback of PPFc: the cost of producing a set of contrastive BCs is proportional to the number of BCs identified by a BC solver. It is foreseeable that redundant BCs among the identified ones will greatly reduce the efficiency of producing a set of contrastive BCs.
JFc overcomes this drawback of PPFc because it uses the identified contrastive BCs for pruning during the search process, thereby avoiding the search for redundant BCs. The shorter running time when the BC termination condition is met confirms this conclusion: if JFc meets the BC termination condition, it produces a set of contrastive BCs efficiently.
In particular, in RP1 and ATM, JFc is several times faster than PPFc. We also observe that when JFc does not meet the BC termination condition (only in TCP), JFc is slower than PPFc. This is reasonable, because JFc additionally checks the BC termination condition after finding each new BC.
In conclusion, JFc produces a search bias towards contrastive BCs. In addition, the efficiency of JFc is not limited by the number of BCs identified by a BC solver.
VIII Related Work
Inconsistency management, i.e., how to deal with inconsistencies in requirements, has also been the focus of several studies, in particular on the formal side. Besides inconsistency-management approaches based on informal or semi-formal methods, such as [15, 16, 19, 18], a series of formal approaches [11, 12, 14, 27] have recently been proposed, which focus only on logical inconsistency or ontology mismatch. Another related approach is that of Nuseibeh and Russo [28], which generates a conjunction of ground literals as an explanation for an unsatisfiable specification based on abductive reasoning. As for consistency-checking methods, we should mention the approach of Harel et al. [14], which identifies inconsistencies between two requirements represented as conditional scenarios. Moreover, the works [17, 23, 24] studied reasoning about conflicts in requirements. In this paper, we focus on the situations that lead to goal divergences, which are weak inconsistencies.
Goal-conflict analysis has been widely used as an abstraction for risk analysis in GORE. It is typically driven by the identify-assess-control cycle, aimed at identifying, assessing, and resolving inconsistencies that may obstruct the satisfaction of the expected goals.
In identifying inconsistencies, we should mention the work on obstacle analysis. An obstacle, first proposed in [33], is a particular goal conflict capturing the situation in which a single goal is inconsistent with the domain properties. Alrajeh et al. [2] exploited model checking to generate traces that violate or satisfy the goals, and then computed obstacles from these traces using machine learning. Other approaches for obstacle analysis include [3, 4, 5, 33]. However, as obstacles only capture inconsistencies of single goals, these approaches fail to deal with situations where multiple goals conflict. In this work, we focus on the other kind of inconsistency: boundary conditions. Returning to the problem of identifying BCs, existing approaches mainly fall into construct-based and search-based approaches. Among construct-based approaches, van Lamsweerde et al. [31] proposed a pattern-based approach which only returns BCs in a predefined, limited form. Degiovanni et al. [10] exploited a tableaux-based approach that generates general BCs but only works on small specifications, because the tableaux are difficult to construct.
For search-based approaches, Degiovanni et al. [9] presented a genetic algorithm that searches for BCs and handles specifications beyond the scope of previous approaches. Moreover, Degiovanni et al. [9] first proposed the concept of generality to assess BCs; their work filters out the less general BCs to reduce the set of BCs. However, generality is a coarse-grained assessment metric.
As the number of identified inconsistencies increases, the assessment stage and the resolution stage become very expensive, even impractical. Recently, the assessment stage in GORE has been widely discussed, in order to prioritize the inconsistencies to be resolved and to suggest which goals deserve attention for refinement. However, some of this work [33, 2, 3, 4, 5] assumes that certain probabilistic information on the domain is provided, and applies only to simpler kinds of inconsistencies (obstacles).
In order to automatically assess BCs, Degiovanni et al. [8] recently proposed an automated approach to assess how likely a conflict is, under the assumption that all events are equally likely. They estimate the likelihood of a BC by counting how many models satisfy the circumstances it captures. However, the number of models cannot accurately indicate the likelihood of divergence, because not all the circumstances captured by a BC result in divergence. In this paper, we identified these drawbacks and proposed a new metric that avoids such mistakes in evaluating the likelihood.
For the resolution of conflicts, Murukannaiah et al. [26] resolved conflicts among stakeholder goals of the system-to-be based on the Analysis of Competing Hypotheses technique and argumentation patterns. Related work on conflict resolution also includes [13], which calculates personalized repairs for conflicting requirements following the principle of model-based diagnosis.
However, these approaches presuppose that the conflicts have already been identified, so our approach to boundary-condition discovery provides a foundation for them. Let us recall Example 1. Letier et al. [21] resolved the BC by refining the first goal so that the pump is switched on when the water level is high and there is no methane.
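Under the standard LTL formalization of the mine-pump example used in this line of work, the refinement can be written as follows; the proposition names are assumptions on our part, since Example 1 is not reproduced here:

```latex
% Original goal: when the water level is high, the pump is switched on next.
% G_2:\quad \Box\big(\mathit{HighWater} \rightarrow \bigcirc \mathit{PumpOn}\big)
% Refined goal: pumping is only required in the absence of methane.
G_2':\quad \Box\big((\mathit{HighWater} \land \lnot \mathit{Methane}) \rightarrow \bigcirc \mathit{PumpOn}\big)
```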
IX Conclusion and Future Work
Providing a reasonable set of BCs for assessing and resolving divergences is of great significance, both from an economic perspective and for its impact on software quality. In this paper, we have proposed a new metric, contrasty, to deal with the drawbacks of the generality metric. Because BCs are ultimately used for resolving divergences, we argue that the identified BCs should help to assess and resolve divergences; the contrasty metric distinguishes BCs precisely from the perspective of resolving divergences. Experimental results have shown the advantage of the contrasty metric: it filters out the BCs capturing the same divergence, which helps to avoid costly rework, i.e., assessing and resolving the same divergence captured by redundant BCs. In addition, we have designed a joint framework that improves on the performance of the post-processing framework.
Future work will extend our contrasty metric to the assessment stage and the resolution stage.
Acknowledgment
We thank Fangzhen Lin, Yongmei Liu, Jianwen Li, and Ximing Wen for discussion on the paper and anonymous referees for helpful comments.
References
 [1] (2009) Learning operational requirements from goal models. In ICSE, pp. 265–275.
 [2] (2012) Generating obstacle conditions for requirements completeness. In ICSE, pp. 705–715.
 [3] (2012) A probabilistic framework for goal-oriented risk analysis. In RE, pp. 201–210.
 [4] (2014) Integrating exception handling in goal models. In RE, pp. 43–52.
 [5] (2015) Handling knowledge uncertainty in risk-based requirements engineering. In RE, pp. 106–115.
 [6] (2014) The nuXmv symbolic model checker. In CAV, pp. 334–342.
 [7] (2014) Automated goal operationalisation based on interpolation and SAT solving. In ICSE, pp. 129–139.
 [8] (2018) Goal-conflict likelihood assessment based on model counting. In ICSE, pp. 1125–1135.
 [9] (2018) A genetic algorithm for goal-conflict identification. In ASE, pp. 520–531.
 [10] (2016) Goal-conflict detection based on temporal satisfiability checking. In ASE, pp. 507–518.
 [11] (2014) Detecting consistencies and inconsistencies of pattern-based functional requirements. In FMICS, pp. 155–169.
 [12] (2012) Agile requirements evolution via paraconsistent reasoning. In CAiSE, pp. 382–397.
 [13] (2009) Plausible repairs for inconsistent requirements. In IJCAI, pp. 791–796.
 [14] (2005) Synthesis revisited: generating statechart models from scenario-based requirements. In Formal Methods in Software and Systems Modeling, pp. 309–324.
 [15] (2002) Detection of conflicting functional requirements in a use case-driven approach. In ICSE, pp. 105–115.
 [16] (2014) A conceptual basis for inconsistency management in model-based systems engineering. Procedia CIRP 21, pp. 52–57.
 [17] (2010) Techne: towards a new generation of requirements modeling languages with goals, preferences, and inconsistency handling. In RE, pp. 115–124.
 [18] (2011) Improving requirements quality using essential use case interaction patterns. In ICSE, pp. 531–540.
 [19] (2009) Automated software tool support for checking the inconsistency of requirements. In ASE, pp. 693–697.
 [20] (1983) CONIC: an integrated approach to distributed computer control systems. IET Computers & Digital Techniques 130 (1), pp. 1–10.
 [21] (2001) Reasoning about agents in goal-oriented requirements engineering. PhD thesis, Université catholique de Louvain.
 [22] (2015) SAT-based explicit LTL reasoning. In HVC, pp. 209–224.
 [23] (2010) Ontology-based conflict analysis method in non-functional requirements. In ACIS-ICIS, pp. 491–496.
 [24] (2010) Constructing a catalogue of conflicts among non-functional requirements. In ENASE, pp. 31–44.
 [25] (2002) Applying SAT methods in unbounded symbolic model checking. In CAV, pp. 250–264.
 [26] (2015) Resolving goal conflicts via argumentation-based analysis of competing hypotheses. In RE, pp. 156–165.
 [27] (2014) KBRE: a framework for knowledge-based requirements engineering. Software Quality Journal 22 (1), pp. 87–119.
 [28] (1999) Using abduction to evolve inconsistent requirements specification. Australasian Journal of Information Systems 7 (1), pp. 118–130.
 [29] (1977) The temporal logic of programs. In Annual Symposium on Foundations of Computer Science, pp. 46–57.
 [30] (1985) The complexity of propositional linear temporal logics. J. ACM 32 (3), pp. 733–749.
 [31] (1998) Managing conflicts in goal-driven requirements engineering. IEEE Trans. Software Eng. 24 (11), pp. 908–926.
 [32] (1998) Integrating obstacles in goal-driven requirements engineering. In ICSE, pp. 53–62.
 [33] (2000) Handling obstacles in goal-oriented requirements engineering. IEEE Trans. Software Eng. 26 (10), pp. 978–1005.
 [34] (2009) Requirements engineering: from system goals to UML models to software. Vol. 10, Chichester, UK: John Wiley & Sons.