Towards Adversarial Configurations for Software Product Lines

05/30/2018 ∙ by Paul Temple, et al.

Ensuring that all supposedly valid configurations of a software product line (SPL) lead to well-formed and acceptable products is challenging, since it is most of the time impractical to enumerate and test all individual products of an SPL. Machine learning classifiers have recently been used to predict the acceptability of products associated with unseen configurations. For some configurations, a tiny change in their feature values can flip them from acceptable to non-acceptable with respect to users' requirements, and vice versa. In this paper, we introduce the idea of leveraging these specific configurations and their positions in the feature space to improve the classifier and therefore the engineering of an SPL. Starting from a variability model, we propose to use Adversarial Machine Learning techniques to create new, adversarial configurations out of already known configurations by modifying their feature values. Using an industrial video generator, we show how adversarial configurations can improve not only the classifier, but also the variability model, the variability implementation, and the testing oracle.




1. Introduction

Software product lines (SPLs) promise to deliver custom products out of users’ configurations. Based on their specific needs, users select some configuration options (or features) that are combined at the implementation level for eventually deriving a desired software product. Real-world SPLs offer hundreds to thousands of configuration options through runtime parameters, conditional compilation directives, configuration files, or plugins (Pohl et al., 2005; Apel et al., 2013).

The abundance of options can be seen as a strength of an SPL, but it also challenges the engineering of SPLs. In particular, the configuration process can be tedious, as combinations of options might not be functionally valid or might lead to unacceptable performance (e.g., execution time) for a given product. It is extremely painful for an end-user, an integrator, or a software developer to discover, late in the process (at compilation time or, even worse, at exploitation time), that her carefully chosen set of options is actually invalid or non-acceptable (for whatever definition of acceptability).

In fact, ensuring that all supposedly valid configurations of an SPL lead to well-formed and acceptable products has been a challenge for decades (Thaker et al., 2007; Metzger et al., 2007; Thüm et al., 2014). For instance, formal methods (such as model checking) and static program analysis have been developed (Bodden et al., 2013; Strüber et al., 2018; Boucher et al., 2010; Classen et al., 2011; ter Beek et al., 2016a; Nadi et al., 2014). Dynamic testing is another widely used alternative, as it is sometimes the only way to reason about functional and quantitative properties of the products of an SPL. However, it is most of the time impractical to enumerate, measure, and test all individual products of an SPL. As a result, machine learning (ML) techniques are more and more considered to predict the behavior of an SPL out of a (small) sample of configurations (Siegmund et al., [n. d.]; Sarkar et al., 2015; Guo et al., 2013; ter Beek et al., 2016b; Siegmund et al., 2013; Oh et al., 2017). In particular, ML classification techniques can be used to predict the acceptability of unseen configurations – without actually deriving the variants (Temple et al., 2017, 2016). A central problem then remains: the statistical ML algorithm can produce classification errors (by construction). Non-acceptable variants may still be generated out of supposedly valid configurations; or invalid configurations may actually correspond to acceptable variants.

Our intuition is that errors (if any) come from the proximity, in the feature space, between non-valid and valid configurations. This proximity may intertwine acceptable and non-acceptable configurations, increasing the complexity of the ML functions separating the two configuration classes (acceptable or non-acceptable). Prediction errors come from the fact that ML techniques are estimators and might not be able to cope with this complexity. Our idea is to exploit the estimator to find "blind spots" in the separating function and to target these particular areas to better define that function. This will improve the ML classifier associated with an SPL and, thus, better capture the space of valid and acceptable configurations.

We propose to use Adversarial ML (AdvML) techniques which automatically create configurations that are specifically designed to lie in areas of the configuration space where the confidence in the ML decision is low (i.e., close to the boundary and thus where valid and non-valid configurations are close). To our knowledge, no works have used AdvML to reach the goal of improving an SPL.

The contributions of this paper can be summarized as follows: i) we motivate the problem with an industrial video generator; ii) we describe a conceptual framework for SPL engineering in which ML classifiers are central; iii) we introduce the idea of adversarial configurations and detail how AdvML techniques are suited for generating them; iv) we show that adversarial configurations can help to improve not only the classifier of an SPL, but also the variability model, the variability implementation, and the testing oracle of an SPL (e.g., an industrial video generator).

2. Motivating Case Study

We consider a representative SPL of our problem, an industrial video generator called MOTIV (more details can be found in (Galindo Duarte et al., 2014; Temple et al., 2016; Alférez et al., 2018)). The goal of MOTIV is to produce synthetic videos out of a high-level, textual specification; such videos are then used to benchmark Computer-Vision-based systems under various conditions. Variability management is crucial to produce a diverse yet realistic set of video variants in a controlled and automated way.

MOTIV is composed of a variability model that documents the possible values of 80 configuration options. Each option has an impact on the visual characteristics of generated videos. There are Boolean options, categorical (enumeration) options (e.g., for including fog or blur in a scene), and real-valued options (e.g., to deal with the amount of dynamic or static noise). To realize variability, the MOTIV generator relies on Lua code that takes a configuration file as input and produces a custom video file (see Figure 1). A highly challenging problem of MOTIV is that, out of the possible configurations, some of the corresponding videos are not acceptable. For example, there is too much noise or blur in some of them; or additional objects, such as trees, obstruct the view of the scene, making the processing too difficult and unrealistic.

To overcome this limitation, our early attempt was to rely on ML classification techniques to predict the acceptability of unseen video variants. We used an automated procedure (i.e., a testing oracle) to compute visual properties of a video and determine whether they were acceptable. Learned constraints were extracted and injected into the variability model to reduce its configuration space. Despite good accuracy and interpretable results, errors remain – supposedly valid configurations may still generate non-acceptable video variants, or invalid configurations may actually correspond to acceptable video variants. Thus, it is important to be sure that the classifier does not have "blind spots" and that the decisions it makes are as close as possible to the decisions that the oracle would make. Said differently, we want to reduce the number of errors made by the classifier with regard to the oracle.

3. SPL and Machine Learning Classifier

The MOTIV case study is an instance of a more general problem. We propose a conceptual SPL framework to describe the problem and its entities. In particular, we show the central role of ML classifiers.

3.1. Basic SPL framework

Figure 1 depicts the different entities of the framework as we illustrate them on the MOTIV case.

Variability modeling. A variability model defines the configuration options of an SPL; various formalisms (e.g., attributed feature models, decision models) can be employed to structure and encode information (Berger et al., 2013; Benavides et al., 2010). A variability model typically defines a domain of values for each configuration option, bounding the values it can take. Moreover, as not all combinations of values are permitted, it is common to write additional constraints between options (e.g., mutual exclusions between two Boolean options).

A configuration is an assignment of values to every individual option. Because of constraints and domain values, the notions of valid and invalid configurations emerge. That is, some values and combinations of configuration options are accepted while others are rejected. A satisfiability solver (e.g., a SAT, CSP, or SMT solver) is usually employed to check the validity of configurations and reason about the configuration space of a variability model. Such a reasoning procedure is usually sound and complete.
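The validity check described above can be sketched as follows. The options, domains, and the single mutual-exclusion constraint are hypothetical stand-ins (not the real MOTIV model), and a real SPL would delegate this reasoning to a SAT/CSP/SMT solver:

```python
# Toy variability model: option -> domain (set = enumeration, tuple = bounds).
# All names and the constraint below are made up for illustration.
DOMAINS = {
    "fog": {"none", "light", "heavy"},   # categorical option
    "dynamic_noise": (0.0, 1.0),         # real-valued option (min, max)
    "trees": {True, False},              # Boolean option
}

def in_domain(option, value):
    """Check that a value lies in the declared domain of an option."""
    domain = DOMAINS[option]
    if isinstance(domain, tuple):        # real-valued option: (lo, hi) bounds
        lo, hi = domain
        return lo <= value <= hi
    return value in domain

def is_valid(config):
    """A configuration is valid iff every option is assigned a value in its
    domain and all cross-option constraints hold (here a single example
    constraint: heavy fog excludes trees, an invented rule)."""
    if set(config) != set(DOMAINS):
        return False                     # every option must be assigned
    if not all(in_domain(o, v) for o, v in config.items()):
        return False
    if config["fog"] == "heavy" and config["trees"]:
        return False                     # example mutual exclusion
    return True
```

A solver-based procedure would additionally be complete (able to enumerate or count valid configurations), which this direct check is not.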

Variability implementation. Configurations are only an abstract representation of a variant (or a product); there is a need to shift from the problem space to the solution space and concretely realize the corresponding variants with actual code. Different implementation techniques can be used such as #ifdefs which give instructions to the compiler or runtime parameters given to programs. In the case of MOTIV, the Lua generator uses different parameters to execute a given configuration and produce a variant.

In some cases, configurations can lead to undesirable variants despite being valid. For instance, in the case of MOTIV, some video variants contain too much noise. The test oracle is a procedure that determines whether a variant is acceptable or not. In Figure 1, the oracle gives a label (green/acceptable or red/non-acceptable). As the number of variants can be large, it is desirable, as often as possible, to automate the procedure (e.g., with unit test cases). In the case of MOTIV, we can hardly ask a human to visually assess all possible video variants. We implemented a C++ procedure for computing the visual properties of a video. If a variant is considered non-acceptable by the oracle, then there is a difference between the decision given by the solver within the problem space and the testing oracle within the solution space. Problems can occur in the transformation from problem space to solution space: in particular, the code can be buggy; the oracle can be hard to automate and thus introduce approximations and/or errors; the variability model might miss some constraints.
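Such an automated oracle can be sketched as a thresholding procedure over measured visual properties. The metric names and thresholds below are assumptions made purely for illustration; the actual MOTIV oracle is a C++ procedure:

```python
# Hypothetical acceptability thresholds (invented for this sketch).
NOISE_MAX = 0.4      # maximum tolerable noise level
BLUR_MAX = 0.5       # maximum tolerable blur level
CONTRAST_MIN = 0.2   # minimum required contrast

def oracle(metrics):
    """Label a video variant given a dict of its measured visual properties.
    Returns 'acceptable' or 'non-acceptable'."""
    if metrics["noise"] > NOISE_MAX:
        return "non-acceptable"
    if metrics["blur"] > BLUR_MAX:
        return "non-acceptable"
    if metrics["contrast"] < CONTRAST_MIN:
        return "non-acceptable"
    return "acceptable"
```

Breaking the oracle into such independent checks (noise, blur, contrast, ...) is precisely the decomposition suggested later in Section 4.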

Figure 1. Software product line and ML classifier

3.2. Machine Learning (ML) Classifier

So far, the conceptual framework presented in Figure 1 is rather traditional. We now describe the role of ML classifiers.

Why are ML classifiers needed? A typical problem in SPL engineering is to ensure the integrity between the problem space and the solution space. That is, all valid configurations of the variability model in the problem space must be associated with an acceptable variant in the solution space. In the case of MOTIV, determining whether a video variant is acceptable can only be done after derivation and through dynamic testing. Furthermore, it is most of the time impossible to execute all configurations to assess whether the corresponding variants are acceptable or not. Beyond MOTIV, many SPLs are in this situation: the configuration space is huge and dynamic testing can only be done over a small sample. ML techniques are precisely here to generalize observations made over known configurations to never-seen-before configurations.

ML classifiers to predict the label of unseen configurations.

From a formal point of view, we consider a classification algorithm f that assigns samples represented in a feature space X to a label in a set of predefined classes Y. In MOTIV, only two classes are defined in Y: acceptable and non-acceptable. The classifier is trained on a dataset sampled from a variability model and constituted of a set of pairs (x_i, y_i) of configurations defined in X and their associated labels. The classifier builds a separating function f : X → Y that can later be used to predict the class of previously unseen configurations represented in the feature space X. A testing oracle is used to compute and associate labels to the configurations in the training sample. When presented with unseen configurations, an ML classifier will hopefully predict correct labels without actually deriving the corresponding variants.
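This setting can be sketched with scikit-learn; the two-dimensional feature space and the stand-in oracle below are assumptions for illustration, not the MOTIV data:

```python
# Sketch of the classification setting: train a classifier on labeled
# configurations (x_i, y_i) and predict labels of unseen configurations.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))        # sampled configurations in feature space X
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # stand-in oracle: 1 = acceptable

clf = SVC(kernel="rbf")                     # builds the separating function f : X -> Y
clf.fit(X, y)                               # train on the labeled sample

unseen = np.array([[0.9, 0.9], [0.1, 0.1]])
pred = clf.predict(unseen)                  # predicted labels, no variant derived
```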

Figure 2. Adversarial configurations (stars) are at the limit of the separating function learned by the ML classifier

Unfortunately, the separation can yield prediction errors, since the classifier is based on statistical approaches and a (small) training sample. Figure 2 illustrates a set of configurations (triangles and squares) in a 2D space (i.e., one dimension represents one configuration option) that is used to learn a separation (shown as the transition from the blue/left to the white/right area). The solid black line represents the target oracle that the classifier is supposed to fit. We can clearly see that the built separation is an approximation of the target oracle: going away from the center of the image, the two functions diverge, and two squares are already misclassified as triangles. The algorithm approximates the oracle as it trades off the complexity of the function against the number of errors it makes.

4. Using Adversarial ML

Principles. Our goal is to reduce the number of errors made by the classifier. A first, simple method is to pick random configurations and, whenever the decision made by the classifier diverges from that of the oracle, to add the configuration to the training set with the label given by the oracle and build a new classifier. In detail: after a random configuration has been picked, it is transformed into its associated variant, tests on this variant are executed, and finally the oracle can decide. Since the configuration is chosen randomly, the classifier itself is not exploited at all: its predictions must be checked against the oracle to know whether they disagree and whether retraining is needed.
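A minimal sketch of this random baseline, with a synthetic oracle standing in for the derive-and-test step (all names and data are illustrative assumptions):

```python
# Random probing: sample configurations, compare classifier vs. oracle,
# and retrain on every disagreement with the oracle's label.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
oracle = lambda x: int(x[0] + x[1] > 1.0)   # stand-in for derive-and-test

X = rng.uniform(0, 1, size=(30, 2))         # small initial training sample
y = np.array([oracle(x) for x in X])
clf = SVC(kernel="linear").fit(X, y)

for _ in range(50):                          # unguided random probing
    x = rng.uniform(0, 1, size=2)            # pick a random configuration
    if clf.predict([x])[0] != oracle(x):     # divergence classifier/oracle?
        X = np.vstack([X, x])                # add it with the oracle's label
        y = np.append(y, oracle(x))
        clf = SVC(kernel="linear").fit(X, y) # build a new classifier
```

Note that every probe requires a full derive-and-test cycle, which is exactly the cost the guided approach below tries to avoid.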

Instead of randomly choosing configurations, we would like to guide their generation to increase the chances of obtaining a classification error. Getting back to Figure 2, the notion of confidence in the prediction emerges where the two classification errors are made. As the two squares lie close to the separation and are known to be misclassified (thanks to their real labels), the classifier gives a low confidence in its decisions there. This piece of information can be exploited to guide the generation of new configurations towards similar areas where the confidence in the prediction is low.

This is somewhat similar to the Generative Adversarial Nets (GANs) idea (Goodfellow et al., 2014), as a classifier and adversarial configurations interact. However, instead of a generative model, we propose to use other AdvML techniques to find "blind spots" in the classifier (i.e., poorly explored areas in the feature space; areas with few known configurations that are possibly far from each other). Specifically targeting such areas to create new configurations and including them in the training set will raise the confidence of predictions, leading to fewer prediction errors.

Using evasion attack. Biggio et al. (Biggio et al., 2013a) present an AdvML technique, called evasion attack, that can be used after a classifier is trained. With some classifier implementations, confidence values cannot be retrieved directly, and an estimate of the confidence response has to be computed. A gradient descent algorithm is then used w.r.t. this estimate to directly target areas of low confidence. Finally, attack points (i.e., copies of actual known configurations) are modified following the gradient direction and become new configurations.
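A minimal sketch of such a gradient step, assuming a linear SVM whose decision-function gradient is simply the weight vector w (the data, step size, and iteration count are made up; the original attack of Biggio et al. is more general):

```python
# Gradient-descent evasion sketch: move a copy of a known point so as to
# decrease the classifier's confidence (decision function) for its class.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(100, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
clf = SVC(kernel="linear").fit(X, y)

def evade(x0, steps=20, eta=0.05):
    """Follow -sign(f(x)) * grad f to push a copy of x0 towards the
    separating hyperplane (f = decision_function, grad f = w)."""
    w = clf.coef_[0]
    x = x0.copy()
    for _ in range(steps):
        direction = -np.sign(clf.decision_function([x])[0]) * w
        x = x + eta * direction / np.linalg.norm(w)   # fixed-size displacement
    return x

x0 = np.array([0.9, 0.9])   # attack point: starts well inside one class
adv = evade(x0)             # ends up in a low-confidence area near the boundary
```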

Adding these new configurations to the training set of the classifier and retraining it might help constrain the space of acceptable variants, as the classifier will make fewer prediction errors and will generally give higher confidence in its predictions. It is important to recall that this method does not choose configurations randomly; it only exploits the classifier after it has been trained, without involving the oracle. Getting back to Figure 2, an adversarial attack might create points lying around the point distribution (i.e., the central part of the image). For instance, we might create adversarial configurations in the top and/or bottom parts of the image, in areas where we observe a strong divergence between the classifier and the oracle. The principle is shown by the star configurations (representing the initial and final positions of an adversarial configuration). By retraining the classifier on these new configurations, we force a reduction of the gap between the two functions, in turn reducing prediction errors.

Along with the attack, (Biggio et al., 2013a) proposes an adversary model stating what an attacker knows about the system, how they can interact with it, etc. This model was created for a general setting; in MOTIV, however, we have complete control over the system. Furthermore, we do not want to attack per se: we rather want to improve the classifier using adversarial attacks. Thus, we conduct evasion attacks under perfect knowledge of the system (i.e., the learning algorithm, the training set used, and the feature representation).

Using evasion attack on a video generator. Coming back to our video generator example, we apply AdvML and look at the "adversarial" videos that are produced. To do so, we reused the video configurations and labels presented in previous work (Temple et al., 2016).

The adversary’s goal is to consider one configuration at a time and manipulate it (i.e., modify its feature values) such that it becomes misclassified. In particular, an adversary can target a specific class for which the number of misclassifications will increase.

We reimplemented the evasion attack presented in (Biggio et al., 2013a) in Python using scikit-learn. As evasion attacks are based on gradient descent, the separating function of the classifier needs to be differentiable. Decision trees (the classification algorithm previously used in (Temple et al., 2016)) are not differentiable, so the classification algorithm needed to be changed. We decided to use Support Vector Machines (SVMs) instead, as they have already been studied under adversarial settings (Biggio et al., 2013b, 2014b; Biggio et al., 2013a; Biggio et al., 2012). Technically, MOTIV defines 80 configuration options, among which some are categorical. As SVMs do not deal properly with this kind of feature, we transform each of them into a set of Boolean ones. That is, for a categorical feature offering n choices, we produce n Boolean features instead.

Now, we instantiate the adversary model for our case: the adversary's goal is to manipulate non-acceptable video configurations such that they become acceptable w.r.t. the classifier. We have perfect knowledge of the learning system and can directly manipulate the values of configuration options (which also act as feature values). The number of adversarial configurations to create, the number of transformations (i.e., iterations), and their "speed" are parameters of the adversarial technique. The speed refers to the amount of displacement allowed at each iteration. This is a common issue in gradient-based techniques: large displacements approach the objective faster but might not converge, while small displacements might get stuck in local optima after more iterations.

Preliminary results. We randomly selected configurations among those considered non-acceptable by the oracle. For each of these configurations, we compute the gradient of the classifier function and move in its direction for a fixed number of iterations with a fixed displacement step. Once these new configurations have been generated, we add them to the training set with the label of the initial configuration (i.e., non-acceptable) and retrain the classifier so that it takes this new information into account. We repeated this process of selecting configurations, moving them, and retraining the classifier several times. Repeating the whole process allows configurations added at a previous step to be selected as new starting configurations for running the attack. Even if configurations generated in previous steps are selected, different gradient directions can be followed, as the training set has been modified. This way, new areas of the configuration space can be explored, reducing potential errors while globally increasing the confidence of predictions.
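The loop can be sketched as follows on synthetic data; the numbers of rounds, selected configurations, iterations, and the displacement step are illustrative assumptions, not the values used in our experiments:

```python
# Attack-and-retrain loop: move copies of non-acceptable configurations
# along the gradient, add them back with their ORIGINAL label, retrain.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(100, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # 0 = non-acceptable
clf = SVC(kernel="linear").fit(X, y)

for _ in range(5):                              # outer retraining rounds
    bad = X[y == 0]                             # non-acceptable configurations
    picks = bad[rng.choice(len(bad), size=3, replace=False)]
    w = clf.coef_[0]                            # gradient of the linear SVM
    for x in picks:
        # 10 gradient steps of size 0.02, towards the "acceptable" side
        x_adv = x + 10 * 0.02 * w / np.linalg.norm(w)
        X = np.vstack([X, x_adv])               # add with the initial label:
        y = np.append(y, 0)                     # still non-acceptable
    clf = SVC(kernel="linear").fit(X, y)        # retrain on the enlarged set
```

Because each round retrains the classifier, the gradient (and thus the attack direction) changes from round to round, which is how new areas of the space get explored.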

Figure 2 exemplifies the creation of a new configuration following the evasion attack algorithm. A configuration has been chosen and copied (i.e., the star closest to the triangles is superimposed on a triangle configuration); then, based on the SVM separation, a gradient towards a low-confidence area is computed. The configuration is modified iteratively so that it follows the gradient. The successive modifications are represented by the solid line until the algorithm stops. The final configuration is represented by a new star lying on top of the red square configurations.

Figure 8. Examples of generated videos using evasion attack (a PDF reader is needed to appreciate the visual properties)

Figure 8 presents five images from five different adversarial videos generated using the evasion attack. We can notice that these videos have different visual properties, which means that the attack is able to consider and leverage several features. Figure 8(a) shows an image with fog, dynamic noise, and a very cloudy sky inducing less light and thus less contrast. The combination of fog, which increases the difficulty of identifying objects in the background, and dynamic noise, a noise modifying different pixels in each frame of the video, makes it difficult to extract moving objects properly and, thus, to track them. Figure 8(b), on the other hand, uses colors to make it more difficult for techniques that match colors to compare moving objects with models. In particular, this image shows unrealistic colors (the variability model did not constrain the color distribution). Figure 8(c) uses a heavy fog and over-exposure to reduce contrast and make it difficult to distinguish anything in the background. Figure 8(d) uses a combination of the specific properties presented in the previous images. Figure 8(e) changes the illumination conditions to make the scene look like a dark night. As illumination is poor, the whole image can be compressed such that large homogeneous areas appear without introducing many errors in the decompression step. In the end, large blurred areas combined with poor illumination reduce contrast and thus make it more difficult to extract the contours of objects and recognize them. In addition, dynamic noise (which is superimposed on the image and hence not compressed) makes it even more difficult to distinguish anything, even close to the camera.

Capitalizing on adversarial videos. Our preliminary results show that the evasion attack is able to combine feature values such that configurations lie in new, unexplored areas. Because these areas have not been explored, they cannot be properly constrained, resulting in poor confidence in the predictions of the classifier. A first option for the developers of the video generator is to include these new configurations in the training set and retrain the classifier. Then, following the idea presented in (Temple et al., 2016), new classification rules can be extracted from the new classifier and added as constraints to the variability model. This forbids the selection of configurations that are likely to turn into non-acceptable variants. A second possible exploitation of adversarial configurations is to engineer a better testing oracle. Our manual review of videos indeed shows the inability of our previous automated procedure to handle such cases. In particular, we can try to break the decision process of our testing oracle down into a combination of simpler procedures: one dedicated to assessing the noise level, a second regarding blur, another focused on the color distribution, etc. Overall, adversarial configurations are helpful to improve the variability model and the testing oracle. Finally, adversarial configurations can highlight issues in the variability implementation; in our experience, however, the Lua code of MOTIV was not in question.
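The rule-extraction step, as in our earlier decision-tree approach, can be sketched with scikit-learn on synthetic data; the feature names and threshold below are made up:

```python
# Extracting constraint candidates from a learned classifier: root-to-leaf
# paths of a decision tree ending in the non-acceptable class can be
# negated and injected into the variability model as constraints.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] < 0.5).astype(int)   # 0 = non-acceptable iff noise >= 0.5 (invented rule)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
rules = export_text(tree, feature_names=["noise", "blur"])
# `rules` is a textual view of the tree; paths to class 0 suggest
# constraints such as "noise < 0.5" for the variability model.
```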

5. Related Work

Our work aims to support quality assurance of SPLs through the use of ML techniques. Our contribution is at the crossroads of (adversarial) ML, constraint mining, variability modeling, and testing.

Use of ML and SPL. It has been observed that testing all configurations of an SPL is most of the time impossible, due to the exponential number of configurations. ML techniques have been developed to ultimately reduce the cost, time, and energy of deriving and testing new configurations using inference mechanisms. For instance, the authors of (Siegmund et al., [n. d.]; Sarkar et al., 2015; Guo et al., 2013; ter Beek et al., 2016b; Siegmund et al., 2013; Oh et al., 2017) used regression models to predict the performance of configurations that have not been generated yet. In (Temple et al., 2016), we proposed to use supervised ML to discover and retrieve constraints that were not originally expressed in a variability model. We used decision trees to create a boundary between the configurations that should be discarded and those that are allowed. In this paper, we build upon that work and follow a new research direction with SVM-based adversarial learning. Siegmund et al. (Siegmund et al., 2017) review ML approaches on variability models. They propose THOR, a tool for synthesizing realistic attributed variability models. An important issue in this line of research is to assess the robustness of ML on variability models. Our work specifically aims to improve ML classifiers of SPLs.

None of these works uses adversarial ML or considers the possible impact that adversarial configurations could have on predictions. Our method introduces the use of evasion techniques (a specific adversarial ML technique) to create configurations specifically designed to "fool" ML predictions. Such configurations can be used to reinforce ML boundaries and thus give more confidence in ML predictions. We also show the possible impacts of adversarial configurations on the variability implementation and the testing oracle (not only the variability model).

Adversarial ML is closely related to ML, as it tries to better understand the flaws and weaknesses of ML techniques. Adversarial ML can be seen as a field performing a security analysis of ML techniques. This field has seen great advances since the early 2000s with the breakthroughs of ML techniques in various domains. Typical scenarios in which adversarial learning is used are: network traffic monitoring, spam filtering, malware detection (Barreno et al., 2006; Biggio et al., 2013b, 2014b, 2014a; Biggio et al., 2013a; Biggio et al., 2012) and, more recently, autonomous cars and object recognition (Zhang et al., 2018; Pei et al., 2017; Elsayed et al., 2018; Papernot et al., 2016; Sharif et al., 2016; Kurakin et al., 2016; Evtimov et al., 2017). In such works, the authors suppose that a system uses ML to perform a classification task (e.g., differentiating spam emails from non-spam) and that some malicious people try to fool this classification system. These attackers can have knowledge of the system, such as the dataset that was used to train the ML classifier, the kind of ML technique used, the description of the data, etc. Based on that, they plan an attack, which consists in crafting a data point in the description space that the system will misclassify. Recent works (Goodfellow et al., 2014) have proposed using adversarial techniques to strengthen the classifier by specifically creating data that would induce such misclassifications. In this paper, we propose a similar approach. However, SPLs do not suffer from an adversarial context per se: the use of adversarial techniques is rather to strengthen the SPL (including the variability model, the implementation, and the testing oracle over products) while analyzing a small set of configurations. To our knowledge, no adversarial techniques have been used in the context of SPLs or variability-intensive systems.

6. Conclusion

Machine learning techniques are more and more used in SPL engineering, as they are able to predict whether a configuration (and its associated program variant) might be acceptable to end-users and their requirements. These techniques are based on statistical properties, which can lead to prediction errors in areas where the confidence in the classification is low. We propose to bring Adversarial Machine Learning techniques into the balance. Exploiting knowledge of a previously trained classifier, they are able to produce so-called adversarial configurations (i.e., even before variants are created) that specifically target low-confidence areas. These techniques can help detect bugs in the mapping between configurations and actual program variants, or discover additional (missing) constraints in variability models. Our preliminary experiments on an industrial video generator showed promising results. As future work, we plan to compare adversarial learning with traditional learning or sampling techniques (e.g., random, t-wise). Another research direction is to use adversarial learning for SPL regression (instead of classification) problems. In general, we want to apply the idea of generating adversarial configurations to SPLs that have large and complex configuration spaces.


  • Alférez et al. (2018) Mauricio Alférez, Mathieu Acher, José A Galindo, Benoit Baudry, and David Benavides. 2018. Modeling Variability in the Video Domain: Language and Experience Report. Software Quality Journal (Jan. 2018), 1–28.
  • Apel et al. (2013) Sven Apel, Don Batory, Christian Kästner, and Gunter Saake. 2013. Feature-Oriented Software Product Lines: Concepts and Implementation. Springer-Verlag.
  • Barreno et al. (2006) Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, and J Doug Tygar. 2006. Can machine learning be secure?. In Proceedings of the 2006 ACM Symposium on Information, computer and communications security. ACM, 16–25.
  • Benavides et al. (2010) David Benavides, Sergio Segura, and Antonio Ruiz-Cortes. 2010. Automated Analysis of Feature Models 20 years Later: a Literature Review. Information Systems 35, 6 (2010).
  • Berger et al. (2013) Thorsten Berger, Ralf Rublack, Divya Nair, Joanne M. Atlee, Martin Becker, Krzysztof Czarnecki, and Andrzej Wasowski. 2013. A survey of variability modeling in industrial practice. In VaMoS’13.
  • Biggio et al. (2013a) Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013a. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 387–402.
  • Biggio et al. (2013b) Battista Biggio, Luca Didaci, Giorgio Fumera, and Fabio Roli. 2013b. Poisoning attacks to compromise face templates. In Biometrics (ICB), 2013 International Conference on. IEEE, 1–7.
  • Biggio et al. (2014a) Battista Biggio, Giorgio Fumera, and Fabio Roli. 2014a. Pattern recognition systems under attack: Design issues and research challenges. International Journal of Pattern Recognition and Artificial Intelligence 28, 7 (2014), 1460002.
  • Biggio et al. (2014b) Battista Biggio, Giorgio Fumera, and Fabio Roli. 2014b. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering 26, 4 (2014), 984–996.
  • Biggio et al. (2012) Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389 (2012).
  • Bodden et al. (2013) Eric Bodden, Társis Tolêdo, Márcio Ribeiro, Claus Brabrand, Paulo Borba, and Mira Mezini. 2013. SPLLIFT: statically analyzing software product lines in minutes instead of years. In ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’13, Seattle, WA, USA, June 16-19, 2013. 355–364.
  • Boucher et al. (2010) Quentin Boucher, Andreas Classen, Paul Faber, and Patrick Heymans. 2010. Introducing TVL, a Text-based Feature Modelling Language. In VaMoS’10. 159–162.
  • Classen et al. (2011) Andreas Classen, Quentin Boucher, and Patrick Heymans. 2011. A Text-based Approach to Feature Modelling: Syntax and Semantics of TVL. Science of Computer Programming, Special Issue on Software Evolution, Adaptability and Variability 76, 12 (2011), 1130–1143.
  • Elsayed et al. (2018) Gamaleldin F Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein. 2018. Adversarial Examples that Fool both Human and Computer Vision. arXiv preprint arXiv:1802.08195 (2018).
  • Evtimov et al. (2017) Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. 2017. Robust Physical-World Attacks on Deep Learning Models. arXiv preprint arXiv:1707.08945 (2017).
  • Galindo Duarte et al. (2014) José Angel Galindo Duarte, Mauricio Alférez, Mathieu Acher, Benoit Baudry, and David Benavides. 2014. A Variability-Based Testing Approach for Synthesizing Video Sequences. In ISSTA ’14: International Symposium on Software Testing and Analysis. San José, California, United States.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. 2672–2680.
  • Guo et al. (2013) Jianmei Guo, Krzysztof Czarnecki, Sven Apel, Norbert Siegmund, and Andrzej Wasowski. 2013. Variability-aware performance prediction: A statistical learning approach. In ASE.
  • Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).
  • Metzger et al. (2007) Andreas Metzger, Klaus Pohl, Patrick Heymans, Pierre-Yves Schobbens, and Germain Saval. 2007. Disambiguating the Documentation of Variability in Software Product Lines: A Separation of Concerns, Formalization and Automated Analysis. In RE’07. 243–253.
  • Nadi et al. (2014) Sarah Nadi, Thorsten Berger, Christian Kästner, and Krzysztof Czarnecki. 2014. Mining configuration constraints: static analyses and empirical results. In 36th International Conference on Software Engineering, ICSE ’14, Hyderabad, India - May 31 - June 07, 2014. 140–151.
  • Oh et al. (2017) Jeho Oh, Don S. Batory, Margaret Myers, and Norbert Siegmund. 2017. Finding near-optimal configurations in product lines by random sampling. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2017, Paderborn, Germany, September 4-8, 2017. 61–71.
  • Papernot et al. (2016) N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. 2016. The Limitations of Deep Learning in Adversarial Settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). 372–387.
  • Pei et al. (2017) Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. 2017. DeepXplore: Automated Whitebox Testing of Deep Learning Systems. In Proceedings of the 26th Symposium on Operating Systems Principles (SOSP ’17). ACM, New York, NY, USA, 1–18.
  • Pohl et al. (2005) Klaus Pohl, Günter Böckle, and Frank J. van der Linden. 2005. Software Product Line Engineering: Foundations, Principles and Techniques. Springer-Verlag.
  • Sarkar et al. (2015) A. Sarkar, Jianmei Guo, N. Siegmund, S. Apel, and K. Czarnecki. 2015. Cost-Efficient Sampling for Performance Prediction of Configurable Systems (T). In ASE’15.
  • Sharif et al. (2016) Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 1528–1540.
  • Siegmund et al. (2015) Norbert Siegmund, Alexander Grebhahn, Christian Kästner, and Sven Apel. 2015. Performance-Influence Models for Highly Configurable Systems. In ESEC/FSE’15.
  • Siegmund et al. (2013) Norbert Siegmund, Marko Rosenmüller, Christian Kästner, Paolo G. Giarrusso, Sven Apel, and Sergiy S. Kolesnikov. 2013. Scalable Prediction of Non-functional Properties in Software Product Lines: Footprint and Memory Consumption. Inf. Softw. Technol. (2013).
  • Siegmund et al. (2017) Norbert Siegmund, Stefan Sobernig, and Sven Apel. 2017. Attributed Variability Models: Outside the Comfort Zone. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017). ACM, New York, NY, USA, 268–278.
  • Strüber et al. (2018) Daniel Strüber, Julia Rubin, Thorsten Arendt, Marsha Chechik, Gabriele Taentzer, and Jennifer Plöger. 2018. Variability-based model transformation: formal foundation and application. Formal Asp. Comput. 30, 1 (2018), 133–162.
  • Temple et al. (2017) Paul Temple, Mathieu Acher, Jean-Marc Jézéquel, and Olivier Barais. 2017. Learning Contextual-Variability Models. IEEE Software 34, 6 (2017), 64–70.
  • Temple et al. (2016) Paul Temple, José Angel Galindo Duarte, Mathieu Acher, and Jean-Marc Jézéquel. 2016. Using Machine Learning to Infer Constraints for Product Lines. In Software Product Line Conference (SPLC). Beijing, China.
  • ter Beek et al. (2016a) Maurice H. ter Beek, Alessandro Fantechi, Stefania Gnesi, and Franco Mazzanti. 2016a. Modelling and analysing variability in product families: Model checking of modal transition systems with variability constraints. J. Log. Algebr. Meth. Program. 85, 2 (2016), 287–315.
  • ter Beek et al. (2016b) Maurice H. ter Beek, Alessandro Fantechi, Stefania Gnesi, and Laura Semini. 2016b. Variability-Based Design of Services for Smart Transportation Systems. In Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications - 7th International Symposium, ISoLA 2016, Imperial, Corfu, Greece, October 10-14, 2016, Proceedings, Part II. 465–481.
  • Thaker et al. (2007) Sahil Thaker, Don Batory, David Kitchin, and William Cook. 2007. Safe composition of product lines. In GPCE ’07. ACM, New York, NY, USA, 95–104.
  • Thüm et al. (2014) Thomas Thüm, Sven Apel, Christian Kästner, Ina Schaefer, and Gunter Saake. 2014. A Classification and Survey of Analysis Strategies for Software Product Lines. Comput. Surveys (2014).
  • Zhang et al. (2018) Mengshi Zhang, Yuqun Zhang, Lingming Zhang, Cong Liu, and Sarfraz Khurshid. 2018. DeepRoad: GAN-based Metamorphic Autonomous Driving System Testing. arXiv preprint arXiv:1802.02295 (2018).