DirectDebug: Automated Testing and Debugging of Feature Models

by Viet-Man Le, et al.
Siemens AG / TU Graz

Variability models (e.g., feature models) are a common way to represent the variabilities and commonalities of software artifacts. Such models can be translated into a logical representation, which enables different operations for quality assurance and other types of model property analysis. However, complex and often large-scale feature models can become faulty, i.e., they no longer represent the expected variability properties of the underlying software artifact. In this paper, we introduce DirectDebug, a direct-diagnosis approach to the automated testing and debugging of variability models. The algorithm supports software engineers by automatically identifying the faulty constraints responsible for an unintended behavior of a variability model. This approach can significantly decrease development and maintenance efforts for such models.




I Introduction

Feature models support the representation of variability and commonality properties of software artifacts [4, 10]. Applications thereof support users in deciding which features should be included in a specific software instance. These models can be differentiated with regard to the used knowledge representation. So-called basic feature models [10] support the representation of hierarchies including cross-tree constraints such as excludes and requires relationships. Cardinality-based feature models [5] extend basic ones with cardinalities of feature relationships. Finally, extended feature models [2] support the description of features with attributes.

The creation and evolution of feature models can be error-prone, where cognitive overload or missing domain knowledge are major reasons for models that do not reflect the intended variability properties [3, 13, 17]. Consequently, feature model development has to be pro-actively supported by intelligent debugging mechanisms that support the automated detection of faulty constraints responsible for the unexpected behavior of a feature model knowledge base.

The remainder of this paper is organized as follows. After a discussion of related work (Section II), we introduce an example of a feature model (Section III). In this context, we provide a formalization of feature models as constraint satisfaction problems (CSPs). This formalization is used as a basis for a discussion of concepts supporting the automated testing (Section IV) and debugging (Section V) of feature models. In Section VI, we report initial results of a performance analysis of our approach. The paper is concluded with Section VII.

II Related Work

The state-of-the-art in feature model analysis and related tasks can be summarized as follows.

Feature Model Analysis Operations. Analysis operations help to assure well-formedness properties in feature models. For example, it should be possible that each feature of a model can be included in at least one configuration, i.e., there should not exist a feature which is inactive in every possible configuration. For a detailed discussion of analysis operations for feature models we refer to Benavides et al. [4].

Conflict Detection. A conflict set (conflict) can be regarded as a subset of a constraint set that induces an inconsistency. For example, Zeller et al. [16] propose the Delta Debugging algorithm, which supports the determination of relevant subsets in test cases responsible for the faulty behavior of a software component. Following a similar objective, Junker [9] introduces the QuickXPlain algorithm for the identification of subsets of constraints in a knowledge base responsible for an inconsistency (no solution can be found). Conflict sets are the basis for follow-up diagnosis operations that help to resolve these conflicts. More precisely, a diagnosis (hitting set [7, 11]) entails a set of elements of a knowledge base that have to be adapted or deleted to be able to resolve all conflicts, i.e., to restore consistency in the knowledge base. In contrast to conflict detection [9, 16], the approach presented in this paper focuses on diagnosis, i.e., conflict resolution.

Diagnosis of Inconsistent Models. A diagnosis can be regarded as a deletion subset that helps to restore consistency. An approach to the identification of diagnoses for inconsistent constraint sets is presented in Bakker et al. [1]. In this line of research, Trinidad et al. [13] show how to determine such diagnoses in the context of inconsistent feature models. In contrast to the work of Bakker et al. [1] and Trinidad et al. [13], the approach presented in this paper focuses on scenarios where test cases are used to induce an inconsistency in a knowledge base. Our approach is based on the idea of Felfernig et al. [6] who introduce a model-based diagnosis approach [11] to resolve conflicts in knowledge bases induced by test cases. Compared to Felfernig et al. [6], our approach is based on direct diagnosis (no conflict detection needed) which allows for an efficient determination of diagnoses [7].

Diagnosis for Reconfiguration. A reconfiguration can be regarded as a set of adaptations of feature settings in a changed configuration that are needed to restore consistency between the configuration and the corresponding feature model [8]. Using a constraint-based representation [14] of a feature model, White et al. [15] show how to apply the concepts of model-based diagnosis [11] to determine minimal sets of feature settings in existing configurations that need to be adapted in order to restore consistency with the feature model. In this context, feature models are assumed to be consistent. The work presented in this paper generalizes the concepts of Trinidad et al. [13] and White et al. [15] by allowing to take into account a set of test cases at the same time, i.e., a diagnosis represents an adaptation proposal that makes all of the given test cases consistent with the knowledge base.

Direct Diagnosis. The idea of direct diagnosis [7] is to significantly improve the performance of hitting set based diagnosis approaches [11] by supporting the calculation of diagnoses without the need of predetermining conflict sets. In this paper, we show how to support the automated testing and debugging of feature models using direct diagnosis [7].

The contributions of this paper are threefold. First, we show how to extend direct diagnosis [7] to support the automated testing and debugging of feature models. We integrate testing and diagnosis in a unified manner where test cases are considered as a central element of a diagnosis process. Second, we show how different types of test cases can be integrated into automated debugging processes. Third, we report initial results of a performance analysis of our diagnosis approach.

III Feature Model Semantics

Feature models are used to represent software variability properties by specifying features and their relationships in a hierarchical fashion [10]. Features are arranged in a hierarchical fashion with one specific root feature which has to be included in every configuration [4]. In a feature model, features are represented as nodes and relationships between features as corresponding edges. For an overview of approaches to the representation of feature models, we refer to Batory [2].

Feature Model Semantics. For the discussions in this paper, we follow the feature model representation of Benavides et al. [4], which includes four types of relationships (the hierarchical constraints mandatory, optional, alternative, and or) and two types of cross-tree constraints (requires and excludes). Feature models are variability models which can be formalized as a Constraint Satisfaction Problem (CSP) [14]. Each feature is associated with the binary domain {true, false}. The mentioned relationships and cross-tree constraints are represented as constraints on the CSP level.

Example Feature Model. Figure 1 depicts an example of a presumably faulty feature model (for details, see Section IV) from the domain of software services supporting the creation and management of surveys. For example, the feature ABtesting indicates whether a user wants to use AB testing functionalities when analyzing the results of a user study completed on the basis of questionnaires. Furthermore, the feature payment indicates the preferred payment mode where license represents a yearly payment and nolicense indicates a free license with an associated limited set of enabled features.

Fig. 1: An example of a (presumably faulty) survey software feature model.

Constraint Types. The following semantics of feature model constraints is based on Benavides et al. [4].

Mandatory: a feature f₁ is denoted as mandatory if it is in a mandatory relationship with another feature f₂. On the logical level, a mandatory relationship is defined in terms of an equivalence: f₁ ↔ f₂. In Figure 1, the feature Q&A is mandatory, i.e., it has to be part of every survey configuration. The same holds for payment and ABtesting, where the latter should be considered faulty since, for example, it makes nolicense a dead feature and statistics a false optional [4].

Optional: if a feature f₁ is denoted as optional, it may or may not be included in the case that feature f₂ is included. On the logical level, this property is formulated as an implication: f₁ → f₂. In Figure 1, statistics is an optional feature connected to survey.

Alternative: exactly one feature out of {f₁, …, fₙ} has to be selected if the parent feature f_p has been selected. On the logical level, alternative relationships can be formalized as follows: fᵢ ↔ (¬f₁ ∧ … ∧ ¬fᵢ₋₁ ∧ ¬fᵢ₊₁ ∧ … ∧ ¬fₙ ∧ f_p) for each i ∈ {1..n}. An example thereof is payment with the subfeatures license and nolicense.

Or: at least one feature out of a feature set {f₁, …, fₙ} has to be selected if the parent feature f_p has been selected. Relationships of type or can be formalized as follows: f_p ↔ (f₁ ∨ … ∨ fₙ). An example of such a parent feature is Q&A; the subfeatures are multiplechoice and singlechoice.

Requires: a feature f₂ must be included in a configuration if feature f₁ is included. On the logical level, requires relationships can be defined as f₁ → f₂. An example of a requires relationship is ABtesting → statistics.

Excludes: f₁ and f₂ must not be combined (f₁ excludes f₂ and vice versa). On the logical level, excludes relationships can be defined as ¬(f₁ ∧ f₂). An example of an excludes relationship is: ¬(ABtesting ∧ nolicense).
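The constraint types above can be encoded directly as Boolean predicates. The following sketch is our illustration (the helper names are not part of the formalism): each relationship type becomes a predicate over a configuration represented as a dict from feature names to truth values.

```python
def mandatory(child, parent):        # child <-> parent
    return lambda c: c[child] == c[parent]

def optional(child, parent):         # child -> parent
    return lambda c: (not c[child]) or c[parent]

def alternative(parent, children):   # exactly one child iff parent is selected
    def pred(c):
        n = sum(c[f] for f in children)
        return n == 1 if c[parent] else n == 0
    return pred

def or_group(parent, children):      # parent <-> at least one child
    return lambda c: c[parent] == any(c[f] for f in children)

def requires(f1, f2):                # f1 -> f2
    return lambda c: (not c[f1]) or c[f2]

def excludes(f1, f2):                # not (f1 and f2)
    return lambda c: not (c[f1] and c[f2])

# example: the requires relationship ABtesting -> statistics
conf = {"ABtesting": True, "statistics": True}
print(requires("ABtesting", "statistics")(conf))   # → True
```

A complete feature model then corresponds to a conjunction of such predicates, which is exactly the CSP view taken in the remainder of the section.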

Feature Models and Configuration Tasks. The task of finding a solution for a constraint satisfaction problem representing a feature model can be interpreted as a configuration task (see Definition 1).

Definition 1 (Configuration Task). A configuration task is defined by a feature set F = {f₁, …, fₙ} and a set of feature domains D = {dom(f₁), …, dom(fₙ)} with dom(fᵢ) = {true, false}. Furthermore, C = C_R ∪ C_F represents constraints restricting the possible solutions for a configuration task, where C_R is a set of user requirements and C_F a set of feature model constraints.

In Definition 1, C_R is an additional set of constraints which specifies which features should be included in a configuration.

Based on Definition 1, we introduce the concept of a configuration (solution) for a configuration task (Definition 2).

Definition 2 (Configuration). A feature model configuration A for a given feature model configuration task (F, D, C) is an assignment of all feature variables: A = {f₁ = v₁, …, fₙ = vₙ} with vᵢ ∈ dom(fᵢ). A is consistent if it does not violate any constraint in C.

CSP Representation of a Feature Model. A CSP-based representation of a feature model configuration task that can be generated from the model shown in Figure 1 is the following. In this context, the root constraint survey = true is used to avoid the derivation of (irrelevant) empty configurations.

  • F = {survey, QA, payment, ABtesting, statistics, license, nolicense, multiplechoice, singlechoice}

  • D = {dom(survey) = {true, false}, dom(QA) = {true, false}, …, dom(singlechoice) = {true, false}}

  • C = C_R ∪ C_F with C_R = {c₀: survey = true} and C_F = {c₁: QA ↔ survey, c₂: payment ↔ survey, c₃: ABtesting ↔ survey, c₄: statistics → survey, c₅: license ↔ (¬nolicense ∧ payment), c₆: nolicense ↔ (¬license ∧ payment), c₇: QA ↔ (multiplechoice ∨ singlechoice), c₈: ABtesting → statistics, c₉: ¬(ABtesting ∧ nolicense)}

A consistent configuration that can be generated from our example feature model configuration task is, for instance, the following:

  • {survey = true, QA = true, payment = true, ABtesting = true, statistics = true, license = true, nolicense = false, multiplechoice = true, singlechoice = false}

Due to faulty constraints in a feature model, in some situations the constraint solver (configurator) does not determine the intended solution(s). We now show how to identify a minimal set of constraints in a feature model that need to be adapted (or deleted) to restore consistency with regard to a predefined set of test cases which specify the intended behavior of a feature model. In other words, we are interested in constraints responsible for the faulty behavior of a knowledge base generated from a feature model. In the following, we will show how to automatically determine such constraint sets on the basis of the concepts of direct diagnosis [7].

IV Testing Feature Models

In Figure 1, ABtesting is mandatory; however, this triggers a situation where each configuration has to include statistics, and it is not possible to generate a configuration with the nolicense payment option. Reasons for faulty feature models range from misinterpretations in domain knowledge communication and modeling errors to outdated parts of a knowledge base.
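This effect can be reproduced by brute-force enumeration. The following sketch is our Boolean encoding of the Figure 1 constraints (including the presumably faulty mandatory ABtesting relationship) and confirms that nolicense is dead and statistics is a false optional:

```python
from itertools import product

FEATURES = ["survey", "QA", "payment", "ABtesting", "statistics",
            "license", "nolicense", "multiplechoice", "singlechoice"]

def consistent(c):
    # alternative group: exactly one of license/nolicense iff payment
    alt_ok = ((c["license"] != c["nolicense"]) if c["payment"]
              else not (c["license"] or c["nolicense"]))
    return (c["survey"]                                     # root constraint
            and c["QA"] == c["survey"]                      # mandatory
            and c["payment"] == c["survey"]                 # mandatory
            and c["ABtesting"] == c["survey"]               # mandatory (faulty)
            and ((not c["statistics"]) or c["survey"])      # optional
            and alt_ok
            and c["QA"] == (c["multiplechoice"] or c["singlechoice"])  # or group
            and ((not c["ABtesting"]) or c["statistics"])   # requires
            and not (c["ABtesting"] and c["nolicense"]))    # excludes

valid = [c for c in (dict(zip(FEATURES, bits))
                     for bits in product([True, False], repeat=len(FEATURES)))
         if consistent(c)]

print(any(c["nolicense"] for c in valid))    # → False (nolicense is dead)
print(all(c["statistics"] for c in valid))   # → True (statistics is forced)
```

The model itself remains consistent (configurations do exist), which is precisely why the fault cannot be found by a plain satisfiability check and test cases are needed.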

Positive Test Cases. The set of positive test cases specifies the intended behavior of a knowledge base (feature model). Positive test cases are assumed to be existentially quantified, i.e., for each positive test case t there should exist at least one configuration consistent with t. Such test cases can be derived from already existing consistent complete or partial configurations (e.g., represented by a set of included and excluded features), from a set of analysis operations (e.g., the detection of dead features), or they can be specified by domain experts interested in the correctness of the feature model. Without loss of generality, we restrict our working example to positive test cases specifying the intended behavior of a knowledge base (see Table I).

ID   Test Case (Constraint)
t₁   nolicense = true
t₂   (a partial configuration; see text)
t₃   payment = false
t₄   singlechoice = false
TABLE I: Example positive test cases.

Avoiding that nolicense becomes a dead feature can be achieved by a positive test case t₁: nolicense = true (nolicense should be included in at least one configuration). Similarly, a test case t₂ can be derived from a partial survey configuration, i.e., a conjunction of feature assignments. Another test case could require the support of configurations with payment being deactivated: t₃: payment = false. Finally, we introduce a test case that assures the existence of a configuration where singlechoice is not included: t₄: singlechoice = false.

Note that test cases are not restricted to basic feature assignments but can also be formulated as general constraints; for example, an implication between two features could be specified as a test case. In the following, we will integrate these test cases into our discussion of a diagnosis algorithm that supports the automated testing and debugging of feature models.
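Under the encoding assumptions of our working example, checking which of the positive test cases induce an inconsistency can be sketched as follows (the constraint encoding mirrors Figure 1; the partial-configuration test case t₂ is omitted since its concrete form is not fixed here):

```python
from itertools import product

FEATURES = ["survey", "QA", "payment", "ABtesting", "statistics",
            "license", "nolicense", "multiplechoice", "singlechoice"]

def consistent(c):
    # encoding of the (presumably faulty) Figure 1 constraints
    alt_ok = ((c["license"] != c["nolicense"]) if c["payment"]
              else not (c["license"] or c["nolicense"]))
    return (c["survey"]
            and c["QA"] == c["survey"]
            and c["payment"] == c["survey"]
            and c["ABtesting"] == c["survey"]        # faulty mandatory relationship
            and ((not c["statistics"]) or c["survey"])
            and alt_ok
            and c["QA"] == (c["multiplechoice"] or c["singlechoice"])
            and ((not c["ABtesting"]) or c["statistics"])
            and not (c["ABtesting"] and c["nolicense"]))

configs = [dict(zip(FEATURES, bits))
           for bits in product([True, False], repeat=len(FEATURES))]

tests = {
    "t1 (nolicense = true)":     lambda c: c["nolicense"],
    "t3 (payment = false)":      lambda c: not c["payment"],
    "t4 (singlechoice = false)": lambda c: not c["singlechoice"],
}

# a positive test case fails if no consistent configuration satisfies it
failing = [name for name, t in tests.items()
           if not any(consistent(c) and t(c) for c in configs)]
print(failing)   # → ['t1 (nolicense = true)', 't3 (payment = false)']
```

Only the failing test cases have to be forwarded to the debugging step; t₄ is already satisfiable and needs no further analysis.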

Negative Test Cases. Negative test cases can be regarded as all-quantified constraints which specify an unintended behavior of a knowledge base. If a negative test case is unexpectedly consistent with the knowledge base, it is integrated in negated form into the background knowledge (see Section V).

Generating Test Cases. "Where do the test cases come from?" is an important question to be answered for applying the presented concepts in industrial settings. First, positive test cases can be derived from already completed and consistent feature model configurations. Second, positive as well as negative test cases can be specified by domain experts. Third, negative test cases can be derived from inconsistent configurations, for example, configurations that have been identified as faulty by domain experts. Finally, test cases could also be directly generated from a feature model knowledge base, for example, by using well-formedness criteria from feature model analysis operations [4] (e.g., to avoid dead features, a test case f = true can be generated for each feature f).

V Automated Debugging with DirectDebug

A diagnosis Δ includes exactly those constraints responsible for the faulty behavior of a feature model. Intuitively, the constraints in Δ have to be deleted or adapted to make the feature model consistent with the given test cases (see Definition 3). Thus, a diagnosis helps a feature model engineer to focus diagnosis search. To enable such a functionality, test cases are needed.

Definition 3 (Diagnosis and Maximal Satisfiable Subset). Given a feature model with a set of feature model constraints C_F and a set of positive test cases T, a diagnosis is a set of feature model constraints Δ ⊆ C_F such that C_F − Δ is consistent with each test case in T. Δ is minimal iff there exists no Δ′ ⊂ Δ such that Δ′ is a diagnosis. The complement of a minimal diagnosis Δ (i.e., C_F − Δ) is denoted as a Maximal Satisfiable Subset (MSS).
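Definition 3 can be illustrated on a deliberately tiny constraint set (unrelated to the survey model): enumerating all constraint subsets and exploiting that supersets of diagnoses are again diagnoses yields the minimal diagnoses directly. The constraint names c1 to c3 below are purely illustrative.

```python
from itertools import combinations

# constraints over two boolean variables x and y
constraints = {
    "c1": lambda a: a["x"],           # x must be true
    "c2": lambda a: not a["x"],       # x must be false (conflicts with c1)
    "c3": lambda a: a["y"],           # y must be true
}
# one positive test case: a configuration with x and y must exist
tests = [lambda a: a["x"] and a["y"]]

def satisfiable(cs, t):
    return any(all(c(a) for c in cs) and t(a)
               for a in (dict(x=x, y=y)
                         for x in (True, False) for y in (True, False)))

def is_diagnosis(delta):
    kept = [c for name, c in constraints.items() if name not in delta]
    return all(satisfiable(kept, t) for t in tests)

# diagnoses are upward closed, so minimality = no single element removable
minimal = [set(d) for r in range(len(constraints) + 1)
           for d in combinations(constraints, r)
           if is_diagnosis(set(d))
           and not any(is_diagnosis(set(d) - {n}) for n in d)]
print(minimal)   # → [{'c2'}]
```

Deleting c2 restores the existence of a configuration satisfying the test case; the corresponding MSS is {c1, c3}.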

Fig. 2: DirectDebug execution trace for the working example. Test cases already consistent with the feature model need not be analyzed further. DirectDebug determines a maximal satisfiable subset (MSS); the corresponding diagnosis is the complement of the MSS.

Example Diagnoses. The minimal diagnoses that can be derived in our working example are depicted in Table II. There are two options for restoring the consistency between the feature model (Figure 1) and the test cases: either delete/adapt the constraints in the diagnosis Δ₁ or those in Δ₂. For example, Δ₁ suggests to take a look at the mandatory relationship between payment and survey and the mandatory relationship between ABtesting and survey.

TABLE II: Example diagnoses Δ₁ and Δ₂.

Diagnosis Approach. DirectDebug determines minimal diagnoses directly [7], i.e., without predetermining conflicts, and extends direct diagnosis with test cases. DirectDebug is activated with the diagnosis candidates C (feature model constraints considered as potentially faulty) and the background knowledge B (constraints assumed to be consistent and correct). The algorithm (Algorithm 1) determines an MSS Γ; the corresponding minimal diagnosis is Δ = C − Γ.

  if IsConsistent(B ∪ C, T, T′) then
    return C
  end if
  if |C| = 1 then
    return ∅
  end if
  ⟨C₁, C₂⟩ ← Split(C); Γ₁ ← DirectDebug(C₁, B, T′); Γ₂ ← DirectDebug(C₂, B ∪ Γ₁, T′); return Γ₁ ∪ Γ₂
Algorithm 1 DirectDebug(C, B, T): Γ (MSS)
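A minimal executable sketch of this recursive scheme, under our reading of Algorithm 1: the brute-force sat() merely stands in for a constraint solver call, and the demo constraints are illustrative.

```python
from itertools import product

VARS = ["x", "y"]

def sat(cs):
    # brute-force stand-in for a constraint solver consistency check
    return any(all(c(a) for c in cs)
               for vals in product([True, False], repeat=len(VARS))
               for a in [dict(zip(VARS, vals))])

def inconsistent_tests(cs, tests):
    # IsConsistent: test cases are checked individually;
    # the inconsistent ones are returned (T')
    return [t for t in tests if not sat(cs + [t])]

def direct_debug(c, b, tests):
    t_rest = inconsistent_tests(b + c, tests)
    if not t_rest:        # B u C consistent with all test cases: keep C
        return c
    if len(c) == 1:       # single constraint still faulty: exclude it from the MSS
        return []
    half = len(c) // 2
    g1 = direct_debug(c[:half], b, t_rest)
    g2 = direct_debug(c[half:], b + g1, t_rest)
    return g1 + g2

# demo: c1 (x = true) vs. c2 (x = false); the test case demands x and y
c1 = lambda a: a["x"]
c2 = lambda a: not a["x"]
t = lambda a: a["x"] and a["y"]
mss = direct_debug([c1, c2], [], [t])
diagnosis = [c for c in [c1, c2] if c not in mss]
print(mss == [c1], diagnosis == [c2])   # → True True
```

The second recursive call uses B ∪ Γ₁ as background, so the union Γ₁ ∪ Γ₂ remains satisfiable with the test cases; the diagnosis is obtained as the complement of the returned set.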

The root constraint survey = true should not be diagnosable, since empty feature models should not be allowed (this would be the case if the root constraint were part of a diagnosis). We assume it to be part of the background knowledge B, which consists of constraints assumed to be correct, i.e., B entails those constraints which should not be regarded as diagnosis candidates. Furthermore, B includes, in negated form, those negative test cases which are (unexpectedly) consistent with the feature model constraints. In our example, we assume for simplicity that no negative test cases have been specified. Before starting DirectDebug, all test cases have to be checked for consistency with the feature model constraints.

If at least one positive test case induces an inconsistency in the feature model, DirectDebug is activated. Please note that only those positive test cases which are inconsistent with the feature model are forwarded to DirectDebug. In our setting, the original set of positive test cases is reduced accordingly, since test cases that do not induce an inconsistency need no debugging, i.e., there already exists a configuration (solution) for each of them.

DirectDebug determines a maximal satisfiable subset (Definition 3) of the consideration set C (a subset of the feature model constraints) with respect to the background knowledge B, where B also contains the negated negative test cases which are (unexpectedly) consistent with the feature model. DirectDebug is activated only with those test cases that are inconsistent with B ∪ C. The consideration set C can be used to focus diagnosis search on specific parts of a feature model; if the feature model should be diagnosed as a whole, C contains all feature model constraints and B contains only the root constraint.

DirectDebug Consistency Checks. IsConsistent checks whether the constraints under consideration (the background knowledge B together with the consideration set C) are consistent with the test cases in T. Since positive test cases are existentially quantified, they have to be checked individually, i.e., each activation of IsConsistent results in |T| constraint solver activations (consistency checks). IsConsistent returns true if every test case in T is consistent with B ∪ C, otherwise false. Only the test cases inducing an inconsistency with B ∪ C are stored (returned) in T′ (the remaining inconsistent positive test cases).

DirectDebug Execution. An execution trace of DirectDebug is shown in Figure 2. DirectDebug follows a divide-and-conquer approach. In each recursive step, it is analyzed which positive test cases remain inconsistent with the constraints under consideration. If just one constraint c remains in the consideration set and there still exists at least one inconsistent test case, then c is considered part of a diagnosis. If the consideration set is consistent with the test cases, it is returned as a whole, since no diagnosis elements can be found in it.

Diagnosis Determination. DirectDebug returns a maximal satisfiable subset Γ (see Figure 2). To determine a minimal diagnosis Δ, we have to determine the MSS complement, i.e., Δ = C − Γ.

VI Performance Analysis

To evaluate the performance of DirectDebug, we synthesized test feature models (see Table III). For model synthesis, we applied the Betty generator [12], varying the number of positive test cases (10 to 1,000, with a 30% share of inconsistency-inducing test cases) and the number of feature model constraints (5 to 500). Since each test case check needs a constraint solver call, runtimes increase with an increasing number of test cases and constraints. Each entry in Table III represents the average DirectDebug (diagnosis) computing time after 3 repetitions.

#constraints \ #test cases    10     20     50    100     500     1000
  5                          0.2    0.4    1.5    3.6    31.8    134.7
 10                          0.3    0.6    2.1    6.0    43.8    200.5
 25                          0.7    1.7    5.2   15.3   127.9    441.9
 50                          1.3    2.9    9.3   27.3   275.8    807.2
100                          2.8    5.3   16.8   45.5   500.6   1463.7
250                          8.7   15.1   37.9  105.1  1297.6   3290.1
500                         27.0   27.7   68.1  182.1  2526.7   6429.0
TABLE III: Runtimes of DirectDebug with different constraint set and test set cardinalities, evaluated on an Intel Core i7 (6 cores) 2.60GHz with 16 GB of RAM.

VII Conclusions

We have introduced an approach to the automated testing and debugging of feature models. Test cases can be used to induce conflicts in the knowledge base (representing a feature model). Using direct diagnosis, we show how consistency can be restored using a minimal diagnosis that includes all constraints responsible for the inconsistency. With this, we can pro-actively support feature model designers and can expect significant time savings in feature model development and evolution. Major issues for future work include the development of techniques for the automated generation of test cases taking into account coverage metrics that help to focus search on the most relevant parts of a knowledge base. Furthermore, we intend to include information from feature model quality metrics to better predict the most relevant diagnoses. Finally, we will continue our evaluations with real-world feature models.

VIII Data Availability

The test dataset used for evaluation purposes can be found here:


This work has been partially funded by the Horizon 2020 project OpenReq (732463), the Austrian Research Promotion Agency ParXCel project (880657), and the EU FEDER program MINECO project Ophelia (RTI2018-101204-B-C22).


  • [1] R. Bakker, F. Dikker, F. Tempelman, and P. Wognum (1993) Diagnosing and solving over-determined constraint satisfaction problems. In IJCAI'93, pp. 276–281.
  • [2] D. Batory (2005) Feature Models, Grammars, and Propositional Formulas. In Software Product Lines Conference, H. Obbink and K. Pohl (Eds.), LNCS, Vol. 3714, pp. 7–20.
  • [3] D. Benavides, A. Felfernig, J. Galindo, and F. Reinfrank (2013) Automated Analysis in Feature Modelling and Product Configuration. In ICSR'13, LNCS, Pisa, Italy, pp. 160–175.
  • [4] D. Benavides, S. Segura, and A. Ruiz-Cortés (2010) Automated analysis of feature models 20 years later: A literature review. Information Systems 35, pp. 615–636.
  • [5] K. Czarnecki, S. Helsen, and U. Eisenecker (2005) Formalizing Cardinality-based Feature Models and their Specialization. Software Process: Improvement and Practice 10 (1), pp. 7–29.
  • [6] A. Felfernig, G. Friedrich, D. Jannach, and M. Stumptner (2004) Consistency-based diagnosis of configuration knowledge bases. Artificial Intelligence 152 (2), pp. 213–234.
  • [7] A. Felfernig, M. Schubert, and C. Zehentner (2012) An efficient diagnosis algorithm for inconsistent constraint sets. AI for Engineering Design, Analysis, and Manufacturing (AIEDAM) 26 (1), pp. 53–62.
  • [8] A. Felfernig, R. Walter, J. Galindo, D. Benavides, S. Erdeniz, M. Atas, and S. Reiterer (2018) Anytime Diagnosis for Reconfiguration. Journal of Intelligent Information Systems 51 (1), pp. 161–182.
  • [9] U. Junker (2004) QuickXPlain: preferred explanations and relaxations for over-constrained problems. In AAAI 2004, pp. 167–172.
  • [10] K. Kang, S. Cohen, J. Hess, W. Novak, and S. Peterson (1990) Feature-oriented Domain Analysis (FODA) – Feasibility Study. Technical Report CMU/SEI-90-TR-21.
  • [11] R. Reiter (1987) A theory of diagnosis from first principles. Artificial Intelligence 32 (1), pp. 57–95.
  • [12] S. Segura, J. Galindo, D. Benavides, J. Parejo, and A. Ruiz-Cortés (2012) BeTTy: Benchmarking and Testing on the Automated Analysis of Feature Models. In VaMoS'12, pp. 63–71.
  • [13] P. Trinidad, D. Benavides, A. Durán, A. Ruiz-Cortés, and M. Toro (2008) Automated error analysis for the agilization of feature modeling. The Journal of Systems and Software 81, pp. 883–896.
  • [14] E. Tsang (1993) Foundations of constraint satisfaction. Academic Press.
  • [15] J. White, D. Benavides, D. Schmidt, P. Trinidad, B. Dougherty, and A. Ruiz-Cortés (2010) Automated diagnosis of feature model configurations. Journal of Systems and Software 83 (7), pp. 1094–1107.
  • [16] A. Zeller and R. Hildebrandt (2002) Simplifying and isolating failure-inducing input. IEEE Trans. on Software Eng. 28 (2), pp. 183–200.
  • [17] A. Zeller (2001) Automated debugging: are we close?. IEEE Computer 34 (11), pp. 26–31.