A Domain-Specific Language for Modeling and Analyzing Solution Spaces for Technology Roadmapping

09/24/2021
by Alexander Breckel, et al.
Universität Ulm

The introduction of major innovations in industry requires a collaboration across the whole value chain. A common way to organize such a collaboration is the use of technology roadmaps, which act as an industry-wide long-term planning tool. Technology roadmaps are used to identify industry needs, estimate the availability of technological solutions, and identify the need for innovation in the future. Roadmaps are inherently both time-dependent and based on uncertain values, i.e., properties and structural components can change over time. Furthermore, roadmaps have to reason about alternative solutions as well as their key performance indicators. Current approaches for model-based engineering do not inherently support these aspects. We present a novel model-based approach treating those aspects as first-class citizens. To address the problem of missing support for time in the context of roadmap modeling, we introduce the concepts of a common global time, time-dependent properties, and time-dependent availability. This includes requirements, properties, and the structure of the model or its components as well. Furthermore, we support the specification and analysis of key performance indicators for alternative solutions. These concepts result in a continuous range of various valid models over time instead of a single valid model at a certain point of time. We present a graphical user interface to enable the user to efficiently create and analyze those models. We further show the semantics of the resulting model by a translation into a set of global constraints as well as how we solve the resulting constraint system. We report on the evaluation of these concepts and the Iris tool with domain experts from different companies in the automotive value chain based on the industrial case of a smart sensing electrical fuse.


1 Introduction

Innovation is a driving force behind the past and future development of society as it results in new products, new services, new production methods, new organizations, etc. (cf. Edison et al. (2013)). One can distinguish between:

  • disruptive or breakthrough innovation, which can alter the economic landscape,

  • sustaining innovation, which focuses on continuously improving existing capabilities,

  • and basic research, which lays the foundation for innovation (cf. Satell (2017)).

For sustaining innovation of individual products, product families, or even whole industry sectors, technology roadmaps are an established tool to systematically identify innovations and their influencing factors (as well as their dependencies) and to represent them over time Kerr and Phaal (2020); Park et al. (2020). Furthermore, technology roadmaps are used to support the strategic management of technologies, to define needs, predict the availability of new and/or evolved technology, compare current technologies with emerging technologies, identify dependencies, and estimate possibilities in order to derive a coherent picture of future innovation. In this context, technology roadmaps have a long tradition (the first formal work dates back to 1987 Willyard and McClees (1987)). They play an important role, for example, in the semiconductor industry, as can be seen in the initiatives regarding the “National Technology Roadmap for Semiconductors” (NTRS), the “International Technology Roadmap for Semiconductors” (ITRS), or the newer efforts regarding the “International Roadmap for Devices and Systems” (IRDS) Kerr and Phaal (2020).

In this article, we consider technology roadmaps of the second generation (i.e., “Emerging Technology Roadmaps”), whose focus, according to the classification of Letaba et al. Letaba et al. (2015), is particularly on predicting the development and commercialization of new emerging technologies and comparing them to current technologies.

The creation of technology roadmaps currently lacks dedicated tool support. For instance, Vatananan and Gerdsri (2012) already mention challenges and research opportunities for tools supporting market and technology analysis in technology roadmap development. Complementing this, Park et al. Park et al. (2020) request that future research investigate the digitalization aspect of roadmapping by using software and web tools to help organize workshops and to capture and visualize data graphically. Nevertheless, mostly text documents, drawings, and informal communication are used Rinne (2004), which is also in line with the feedback we receive from our industry partners. Only occasionally are calculations done in generic spreadsheet tools. Most research and existing work in the area of roadmaps (e.g. Garcia (1997); Holmes and A Ferrill (2006); Lee et al. (2009); Martin and Daim (2012)) focuses on the technology roadmapping process or the process for updating and reviewing technology roadmaps, and not on how their content is defined.

We believe that Domain-Specific Languages (DSLs) van Deursen et al. (2000); Fowler (2011) for modeling the above-mentioned types of roadmaps can significantly improve the creation, update, review, and communication of roadmaps as well as their quality. Furthermore, a DSL for roadmaps needs to be accompanied by a supporting tool that provides analysis capabilities. Since the people creating roadmaps are typically not modeling experts, the accompanying tool has to specifically support the experts of the roadmap domain.

The contribution of this paper is a modeling language and an accompanying tool with a domain-specific user interface, which supports the creation and analysis of technology roadmaps and assists roadmap engineers in technology selection and in the communication of technology-related decisions. The language is inspired by SysML Object Management Group (OMG) (2017), yet implemented as a standalone domain-specific language in order to provide a more streamlined user experience. It supports the definition of an abstract architecture and extends it with an expression language which supports the formal specification of properties and requirements. The expression language specifically supports uncertainty and roadmap time as first-class citizens, e.g., a required technology is available in June 2027, processor performance is expected to increase according to a defined formula in the next 10 years, or legal requirements on fuel efficiency change at three different points in time in the future. Our approach supports not only the modeling but also the analysis of the model. These analyses consist in solving all mathematical expressions for properties and requirements within the model while taking into account the dependencies induced by the hierarchical structure of the component model as well as the use of properties in multiple expressions. The results show value ranges of properties, the satisfiability of requirements, the temporal availability of modeled components, and which technological alternatives are chosen based on key performance indicators. Hence, these analyses enable the definition of trustworthy roadmaps which are based on complex mathematical estimations instead of ad-hoc rule-of-thumb estimations.

This is supported by a visualization of analysis results. The development of our modeling language and our tool implementation, called Iris: Interactive Roadmapping of Innovative Systems, is based on multiple demonstrators provided by our industrial partners from the automotive domain. A detailed description of the tool and, particularly, its interactive and collaboration features are not part of this paper as we focus on the language, its analysis and the visualization of the analysis results.

Therefore, we believe that our research is an important contribution in this area, as we digitalize an important aspect Vatananan and Gerdsri (2012) of technology roadmapping, as requested by Park et al. Park et al. (2020), in the sense of a model-based approach to formally describe and analyze aspects of technology roadmaps, taking into account and involving various stakeholders along the value chain and thereby supporting communication between them. Our approach is thus clearly complementary to and supportive of previous approaches and tools (e.g., see the listing of tools in Vatananan and Gerdsri (2012)) in this area.

This paper presents a significantly extended version of our modeling language and tool compared to our previously published work Breckel et al. (2020). We added the concept of Key Performance Indicators (KPIs) to provide a way for roadmap engineers to define the quality or suitability of competing solution alternatives, see Section 4.1. We also describe our approach of using interval arithmetic to deal with uncertain values in Section 4.4. Furthermore, we explain the approach to solve the global system of equations spanned by properties, requirements, and KPIs using repeated symbolic transformations. Thereafter, we describe in Section 5 the three transformation phases and the actual solving process of the constraint system. Finally, we additionally evaluated our modeling language and tool with industrial partners along the value chain and present the results of the case study in Section 7. The industrial experts value the modeling language and analysis capabilities and, particularly, that our language enables the various contributions of the partners along the value chain to be captured separately, that solution spaces created by alternative solutions and KPIs can be created and analyzed easily and quickly, and that Iris offers the possibility to evaluate even complex systems and edge solutions.

After an overview of related work in the next section, we introduce our running example in Section 3. This running example is a simplified version of a roadmap for an electronic fuse in the automotive domain. We present our modeling language in Section 4 and describe the solver we implemented to analyze time-dependent constraint systems with uncertain values in Section 5. In the following Section 6, we describe the user interface and visualization concepts of Iris. In addition, we present the results of the evaluation of our modeling language and the corresponding tool with industrial domain experts in Section 7 based on the smart sensing fuse presented in Section 3. We finish with a conclusion and an outlook on future work in Section 8.

2 Related Work

Technology roadmaps are established as a strategic management tool for technology planning, management, and selection Rinne (2004); Kerr and Phaal (2020); Phaal et al. (2004); Kostoff and Schaller (2001); Vatananan and Gerdsri (2012). They contribute to the planning of technology investment and development and are thus a key element for the success and competitiveness of organisations Knoll et al. (2018); Vatananan and Gerdsri (2012). A great number of publications address technology roadmap processes that focus on the development, updating, and maintenance of technology roadmaps Garcia (1997); Holmes and A Ferrill (2006); Lee et al. (2009); Martin and Daim (2012).

However, there is a lack of methods and specialized tools for the development and documentation of technology roadmaps. Vatananan and Gerdsri (2012) consider tools as an important component for the development and implementation of roadmaps. They categorize supporting tools for technology roadmap development according to their functionality into (a) market analysis tools, (b) technology analysis tools, and (c) supporting tools. The domain-specific language and the corresponding tool presented in this paper belong to the group of technology analysis tools, as predictions about technologies are mapped over time, based on their key performance indicators (KPIs).

Nowadays, technology roadmaps are often captured using Microsoft PowerPoint or Visio Rinne (2004) and are therefore visualized graphically or even as visual models. According to their graphical format, Phaal et al. Phaal et al. (2004) distinguish eight types of technology roadmaps: multiple layers, single layer, bars, tables, graphs, pictorial representations, flow charts, and text. A major disadvantage of such technology roadmaps is that adjusting and maintaining them and reusing individual components is difficult. Technical analyses of certain KPIs based on technical models are not part of simple graphical representations, which have been created using Microsoft PowerPoint, for example. This makes the verification and validation of technology objectives more difficult Knoll et al. (2018).

To overcome this, Knoll et al. Knoll et al. (2018) present a model-based approach where experts create technical models that enable the evaluation of potential product architectures according to a defined set of KPIs at different time horizons. For this purpose, the roadmap architecture is described in an Object Process Methodology (OPM) diagram (cf. ISO 19450 ISO (2015)). This diagram includes a list of existing products and services, elements of technological relevance, primary operational functions, and roadmap-related KPIs. The additional data necessary for the technology roadmap, such as definitions of KPIs, financial models, lists of alternative and competitive technologies and their features, the technological readiness level for each technology, and statistical and mathematical models, are stored in a common database. How these pieces of information are linked to each other, how they are integrated into a technology roadmap, and how they are visualized is not described in detail by the authors. Furthermore, they describe neither a modeling language nor which analyses based on the technical models are possible and how they could be performed.

In contrast, our paper contributes to model-based roadmapping and focuses explicitly on visualization and interaction concepts that assist roadmap engineers. For this purpose, we specifically present the modeling language for the creation of a technology roadmap, which supports roadmap engineers in analyzing underlying technical models and their interrelationships.

Our definition of time differs from other usages of time commonly found in formal models, like in Time Petri Nets Cassez and Roux (2005) or in Timed Automata Alur and Dill (1994). These formalisms define time-dependent behaviour based on real-time clocks, in order to introduce ways to delay and synchronize the execution of models. Other approaches, like real-time extensions of the UML Object Management Group (OMG) (2019b), support the specification of real-time behaviour of embedded systems with a focus on performance and schedulability. In contrast to that, our work defines time-dependent structures to characterize the future availability of the modeled products and technologies over a time-frame of months, years, or decades.

Another important issue is uncertainty, as many factors cannot be accurately estimated. Lee et al. address this problem by introducing a Bayesian Belief Network (BBN) that structures multiple possible scenarios Lee et al. (2010). BBNs and their use for technology roadmapping are discussed in the context of flexibility and adaption to risk by Jeong et al. Jeong et al. (2021). In Iris, we use ternary logic (cf. Kleene (1938)) and interval arithmetic (cf. Moore (1966)) to model uncertainty. This allows for more convenient use with the other constructs of our domain-specific language, as BBNs would introduce another level of complexity. Currently, the interval arithmetic and ternary logic seem to be sufficient. However, if they are identified to be inadequate at some point in the future, we may extend our domain-specific language to include concepts like BBNs.

Our work is also related to the area of software and system architecture as our language is strongly based on the idea of hierarchical component models. Architecture Description Languages (ADLs) (cf. Medvidovic and Taylor (2000)) were introduced more than two decades ago to define the architecture of systems more explicitly. Our hierarchical component model follows the concepts outlined for architecture description languages by Medvidovic and Taylor Medvidovic and Taylor (2000).

The Architecture Analysis and Design Language (AADL) is a prominent example of an architecture description language which can be extended by so-called annexes. Annexes exist, for example, for time, e.g., Mkaouar et al. (2020). However, those time extensions deal (similar to the aforementioned UML profile Object Management Group (OMG) (2019b)) with time aspects relevant for real-time scheduling. Bao et al. present an AADL annex for the specification of uncertainty Bao et al. (2017). While their approach enables modeling of uncertainty w.r.t. a diverse set of probability distributions, they also target detailed performance and safety requirements during product development. Furthermore, those approaches – in contrast to our approach – do not address the specification and selection of alternative solutions based on user-defined KPIs.

The research area of architecture optimization specifically supports optimization w.r.t. user-defined KPIs. Aleti et al. classify many different architecture optimization approaches in their survey Aleti et al. (2013). However, these approaches focus on optimizing a concrete product during development time and do not allow specifying and analyzing future solution spaces.

Finally, software product lines Apel et al. (2013) are related to our approach as they enable the specification of different alternative features. There exist approaches to select features based on objective functions similar to KPIs in our approach, e.g., Hierons et al. (2016). However, those approaches also focus on concrete products and do not support modeling and analyzing future solution spaces.

3 Running Example

In order to illustrate the developed concepts and visualizations presented over the course of this paper, we use a simplified model of an automotive fuse, called Fuse, as a running example.

Every electrical device in modern cars is protected from overcurrent by so-called blade fuses. In case of an overcurrent, a small metal strip inside the fuse melts, the electrical circuit is interrupted, and the connected device is switched off. To switch the device on again, the fuse has to be physically replaced. Depending on the layout of a car’s wiring harness and the particular melted fuse, more than just the actually faulty loads might be switched off in case of an overcurrent.

One major research goal of the automotive industry is to develop autonomously driving vehicles. In case of an overcurrent, such a vehicle has to be stopped safely. Since blade fuses cannot melt in a controlled way, in a worst-case scenario the central processing unit which controls and stops the car could be switched off as a side effect of a faulty electric device. In such a case, the car would be out of control.

As an alternative to blade fuses, a reversible smart sensing fuse is not only able to detect and protect against overcurrents but also to selectively switch off electrical consumers and switch them back on again without being replaced. If the vehicle’s electrical system is adapted accordingly and each consumer is individually protected, the car can be stopped safely in case of an overcurrent.

Automotive Original Equipment Manufacturers (OEMs) and Tiers (direct or indirect suppliers of the OEM) have been researching semiconductor-based reversible smart sensing fuses for more than two decades Graf et al. (1996); Fuisting et al. (2014). A big challenge was and still is to design a semiconductor switch that supports high currents and has an adequate avalanche rating and a beneficial feature set. The design involves several design decisions and key technologies with varying market availability.

Figure 1: Screenshot of a model of two alternative fuse technologies (BlFuse and EFuse) within the context of an autonomously driving vehicle

Figure 1 shows a screenshot of the Fuse example modeled with Iris. The example is arranged in three imaginary horizontal layers. The layers themselves have no semantic meaning and are for layout purposes only. On the top layer, the abstract functionality AutonomousDriving, which should be available in the future, is modeled as a block depending on the availability of two features: the fuse’s max load current must be greater than or equal to the path load worst-case operating current. Additionally, a selective overcurrent detection and consumer disabling must be available. This dependence in terms of availability is expressed by requirements, which are special model elements referencing another block’s availability or holding a Boolean expression. There are different levels of availability, as illustrated in Figure 1.

To evaluate if and when it will be possible in the future to develop an autonomously driving car, a Vehicle is modeled in the second layer as a generic block and referenced by the availability requirements of PowerSupply and ErrorDetection. The vehicle holds a property TotalCurrent whose value is the sum of all sub-blocks’ Currents. The meaning of properties is twofold: on the one hand, they state that something has a property to be considered in the course of the modeling goal. On the other hand, their values are computed by the solver, as is the case for TotalCurrent.

There are two electrical consumers: Headlights with a static Current, and ProcessingUnits responsible for running software that is needed for the autonomous driving of the car. In the example, the processing units are responsible for two types of software: the DetectionSoftware that is able to detect obstacles, road signs, and the environment around the car, and the Autopilot software that drives the autonomous car. Both software parts require a specific amount of processing power in TFLOPS. Since future cars will drive more and more autonomously, the DetectionSoftware will become more and more important and extensive. To address this fact, the DetectionSoftware will require more TFLOPS over time, modeled by a value growing linearly from 2021 up to 2035. As a consequence, the ProcessingUnits’ overall PowerConsumption and Current grow accordingly. In our running example, PowerConsumption is a rough estimate based on the processing power and power consumption offered by present GPUs.

The Vehicle also holds a Fuse which defines a solution space and acts as an interface for concrete fuse implementations. Therefore, the block Fuse defines various properties that a concrete fuse must have: a Watchdog to individually detect overcurrents, a supported MaxLoadCurrent, and a BatteryVoltage. While BatteryVoltage is fixed, the remaining properties need to be set by the specific implementation. Besides the properties, Fuse defines a requirement that MaxLoadCurrent must be greater than or equal to the Vehicle’s calculated TotalCurrent, which in turn – as already mentioned – grows over time.

In the third layer there are two concrete fuses which implement the interface defined by Fuse. An implementation expresses an isA relationship. Since BlFuse and EFuse implement the interface of Fuse, they derive all properties, constraints, and values (if set) defined by Fuse. BlFuse represents a blade fuse with a high MaxLoadCurrent but without a Watchdog, and EFuse represents a smart sensing fuse that has a Watchdog but a lower MaxLoadCurrent. Both fuse implementations are also displayed next to the Fuse block, which enables a manual comparison of different implementations of Fuse. Iris is also capable of automatically selecting the best among arbitrary alternative implementations. In this example, a fuse with a watchdog would be seen as the best fuse. Accordingly, Fuse contains a KPI that references the existence of an implementation’s watchdog. However, at the current point in time (T=Jan2030) BlFuse is selected, as indicated in the Fuse block by the green arrow above BlFuse, because EFuse has a watchdog but violates the requirement that its MaxLoadCurrent must be greater than or equal to the vehicle’s total current.

4 A Modeling Language for Roadmapping

In this section we present our modeling language in four steps:

  1. First, we introduce a domain-specific modeling language that is capable of representing the structural composition of a technological system relevant for technology roadmapping.

  2. Second, we introduce the embedded textual language used to specify property values and requirement conditions.

  3. Third, we extend both languages to support time-dependent properties and the time-dependent availability of structural elements.

  4. Finally, we extend both languages to support interval arithmetic and ternary logic, which are required to model the uncertainty common in real-world applications.

4.1 Structural Modeling Language

Our modeling language was developed in continuous coordination with industry partners and was inspired by SysML block definition and internal block diagrams (Object Management Group (OMG), 2017, pp. 35–43). It is intended to represent structural, hierarchical models that describe the technologies relevant for a technology roadmap, together with their properties, requirements, and KPIs. We used SysML as inspiration due to its widespread use in systems modeling. However, we chose a vastly reduced and slightly adapted set of modeling elements due to our target user group of domain experts with little to no experience with modeling formalisms. Our goal in designing the language was to define a minimal set of modeling elements that, on the one hand, is capable of representing all concepts relevant for roadmapping, while on the other hand being comprehensible enough to be applied by domain experts. Instead of defining a standalone language, it would have also been possible to extend SysML with KPIs, time-dependence, and solution spaces. However, this would cause our language to inherit several technical aspects of SysML that are not needed due to our reduced feature set, and would be detrimental to the overall user experience. For example, the distinction in SysML between types, occurrences, and instances would require model elements to be defined redundantly. Also, the usage of parametric diagrams to specify mathematical dependencies would add significant modeling overhead compared to our embedded expression language.

An important aspect of using formal models for technology roadmapping is the ability to model solution alternatives for a technological need. A required functionality, like the protection from overcurrent in an electrical circuit, may be achieved in different ways, e.g., through a blade fuse or a smart sensing fuse. Therefore, our language supports solution alternatives.

Figure 2: Metamodel of our language in UML notation. A Model contains Blocks (name: string); each Block contains Properties (name: string, formula: Expression), Requirements (condition: Expression), and KPIs (metric: Expression).

The metamodel we use is shown in Figure 2 and consists of the following modeling elements:

A model is the main modeling unit and contains blocks. The whole of Figure 1 depicts a model, augmented with additional analysis results.

Blocks are the basic hierarchical structuring element of the model. In Figure 1, every box with a blue titlebar represents a block, like AutonomousDriving or Vehicle. Each block has a name and contains a list of properties, requirements, and kpis. Blocks can be used to represent arbitrary concepts from the modeled domain, such as components, stakeholders, or services. Given a model M, we say that blocks(M) denotes the set of all blocks within M, whereas children(b) is the set of all direct children of a block b.

Blocks can also be related to each other in an interface-implementation association. If block B is an implementation of block A, we consider block B as a possible solution for A. In this case, block A takes on the role of an interface and block B the role of an implementation. Note that a single block can also simultaneously act as an interface and an implementation by creating multiple such associations. In Figure 1, Fuse is an interface implemented by the two blocks BlFuse and EFuse.

We define the set of direct implementations of an interface I as:

impls(I) = { B ∈ blocks(M) | B implements I }

Conversely, we define the set of all blocks within a model M that take on the role of an interface as:

interfaces(M) = { I ∈ blocks(M) | impls(I) ≠ ∅ }

And we further define the set of all implementations of an interface I as the transitive closure over impls:

impls*(I) = impls(I) ∪ ⋃_{B ∈ impls(I)} impls*(B)

Interfaces and their respective implementations define points of variability in the model that can be used to capture the structural diversity and inherent uncertainty of roadmapping. Each interface forms what we call a solution space, as will be explained later in this section, and allows the selection of one of its implementations to act as the default solution. Often, a single block can be part of multiple different solution spaces across a model. To facilitate this, we allow each block to implement multiple interfaces. Restricting this to a single interface would require the implementation block to be duplicated instead, which would in turn cause unwanted redundancy. In case of the diamond problem, i.e., if a block transitively implements a block D through two different paths B and C, we resolve conflicting overwrites between B and C by applying them in lexical order. This order is also discernible in the provided user interface within each block, where interfaces are listed top to bottom.

The children and impls relations are orthogonal to each other, which brings up the question whether implementations should inherit child blocks from their interfaces. Inheriting child blocks would allow interfaces to propagate a common substructure downwards into all their implementations. This in turn could be combined with an overwriting mechanism in order to allow interfaces to specify required sub-components which should be provided by all implementations. We have experimented with such semantics, and have found the added expressiveness to not be worth the significant increase in complexity. Instead, we have decided against inheriting child blocks. This decision in turn allows a convenient modeling pattern: If the implementations to an interface are only used locally, and not across the whole model, they can be grouped as children of its interface. This reduces the distance between related modeling elements and can improve readability.

Properties contain a name and a formula that describes the value of the property. They are displayed in Figure 1 as individual entries below a block title. The embedded textual expression language of formulas supports various operations and references to other properties, and will be described in more detail in Section 4.2. Furthermore, expressions can also reference a global time parameter in order to define time-dependent values, as will be described in Section 4.3.

We define the set of all explicit properties of a block b as props(b).

In case of an interface-implementation association between two blocks, the implementation implicitly inherits all properties of the interface. This is, for example, the case for the property BatteryVoltage of the block BlFuse, which is inherited from Fuse. If a block defines a property with the same name as an inherited property, the local definition overrides the inherited one. We define the set of all properties (including the inherited ones) of a block b in a model M as:

props*(b) = props(b) ∪ { p ∈ props*(I) | b implements I and no property named like p exists in props(b) }

Requirements contain an expression that describes a boolean condition. They are displayed in Figure 1 as individual rows within a block, and they use the same expression language as properties. Therefore, requirements can reference properties and other aspects of the model, and can perform calculations with their values. The purpose of a requirement is to specify the conditions under which the containing block becomes available. Similar to properties, requirements are also inherited along an interface-implementation association. This is, for example, the case for the requirement MaxLoadCurrent >= Vehicle.TotalCurrent of the block BlFuse, which is inherited from Fuse.

The inheritance of a requirement cannot be prevented, and an inherited requirement cannot be further adjusted. Therefore, inherited requirements cannot be relaxed by making the condition less strict, or by removing it completely. The inheriting block can, however, add additional explicit requirements and thereby make the overall requirements for this block stricter.

We define the set of all explicit requirements within a block b as reqs(b), the set of all requirements of a block b (including the inherited ones) as reqs*(b), and the set of all requirements in a model M as:

reqs(M) = ⋃_{b ∈ blocks(M)} reqs*(b)

KPIs (Key Performance Indicators) contain an expression that represents a metric. The running example in Figure 1 contains such a KPI as the first row num(Watchdog) in block Fuse. The purpose of a KPI is to define a metric for the quality or fitness of competing solution alternatives. While such a metric might also be useful for roadmap engineers as a form of documentation, or as a data source for strategic decisions, their main purpose is to allow our analysis to perform an automated selection of the best solution alternative. Since expressions and therefore KPIs can reference properties and the global time parameter, such an automated selection can depend not only on the values of properties of the containing and other blocks, but also on different points in time within the roadmap. Section 6.3 will describe this aspect in more detail.

Unlike properties and requirements, KPIs are not inherited. This ensures that each interface can define its own relevant set of KPIs, without being affected by selection criteria used in other parts of the model. An example for this would be a block, like BlFuse, that is part of multiple different solution spaces within a model. One solution space could represent a required fuse for the engine controller, whereas a different part of the model could require a separate fuse for the headlights. Both solution spaces might consider similar solution alternatives, yet have different selection criteria based on their individual needs. In such a case, inheriting the KPIs would add both sets of selection criteria to the block BlFuse, even though those criteria are specific to the context of each solution space.
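To make the metamodel more tangible, the following sketch shows one possible in-memory representation in Python. The class and field names mirror Figure 2 but are illustrative only and do not reflect the actual Iris implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Property:
        name: str
        formula: str        # expression in the embedded textual language (Section 4.2)

    @dataclass
    class Requirement:
        condition: str      # boolean expression

    @dataclass
    class KPI:
        metric: str         # numeric expression used to rank solution alternatives

    @dataclass
    class Block:
        name: str
        properties: List[Property] = field(default_factory=list)
        requirements: List[Requirement] = field(default_factory=list)
        kpis: List[KPI] = field(default_factory=list)
        children: List["Block"] = field(default_factory=list)    # hierarchical children
        interfaces: List["Block"] = field(default_factory=list)  # blocks this block implements

    @dataclass
    class Model:
        blocks: List[Block] = field(default_factory=list)         # all blocks of the model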


All properties, requirements, and KPIs within a model implicitly define a global system of equations that can be solved using, for example, an SMT Solver (Satisfiability modulo theories, cf. De Moura and Bjørner (2011)), or our own solving approach based on symbolic transformations, as will be described in Section 5. This system of equations depends on the configuration of blocks active for the currently selected point in time, which in turn depends on the values of properties, requirements and KPIs. In order to resolve these interconnected dependencies, our analysis consists in constructing and solving the global system of equations.

We say that for a given property or requirement x, val(x) denotes the resulting value range of x after solving the equations. In Figure 1, the solver results of properties and KPIs are displayed to the right of each formula, whereas the results of requirements are displayed as background colors. Section 6 contains a description of the different forms of visualization in the model and the semantics of the color schema.

The combination of interface-implementation associations and requirements allows us to define the notion of availability of a block. We say that a block (or better: the technological concept represented by a block) is available if all of the following criteria are true:

  • If the block has any implementations, at least one of the implementations must be available.

  • If the block contains any children, all children must be available.

  • If the block contains any requirements, all requirements must evaluate to true.

The availability of a block b at time T is therefore defined as:

avail(b, T) = (impls(b) = ∅ ∨ ∃ i ∈ impls(b): avail(i, T)) ∧ (∀ c ∈ children(b): avail(c, T)) ∧ (∀ r ∈ reqs*(b): r evaluates to true at T)

Note that the automated selection mechanism based on KPIs prefers available implementations. If at least one implementation is available, the mechanism will never select an unavailable one. Therefore, in order to determine the availability of a block, it is not necessary to check the availability of the automatically selected solution alternative. Instead we can check that at least one implementation is available.
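A minimal sketch of this availability check, reusing the illustrative Block/Model classes from the previous sketch; eval_req is a hypothetical callback that evaluates a requirement's condition at roadmap time T, inherited requirements are assumed to be already expanded (cf. Section 5.1), and the ternary value maybe is ignored for brevity:

    def impls(model, interface):
        # direct implementations: all blocks that list `interface` among their interfaces
        return [b for b in model.blocks if any(i is interface for i in b.interfaces)]

    def available(model, block, T, eval_req):
        implementations = impls(model, block)
        # an interface needs at least one available implementation
        if implementations and not any(available(model, i, T, eval_req) for i in implementations):
            return False
        # all children must be available
        if not all(available(model, c, T, eval_req) for c in block.children):
            return False
        # all requirements must evaluate to true
        return all(eval_req(r, T) for r in block.requirements)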

For a given block representing an interface, we call the set of all available implementations the solution space of the interface. In our running example, the solution space of the block Fuse contains only the implementation BlFuse, since EFuse is not available due to an unsatisfied requirement.

For a given model M, we call a mapping that selects an available implementation from the solution space of each interface a configuration of M. Since interfaces act as points of structural variability in our modeling language, such a configuration represents a valid way to “build” or “instantiate” the modeled system with actual implementations. The set of all possible configurations defines a model-wide solution space. This model-wide solution space is important for roadmap engineers, as it contains all valid configurations of blocks within the model that satisfy the specified requirements.

In order to ease working with such solution spaces, our implementation offers an automated selection mechanism based on the defined KPIs: For each interface in a model, all available implementations are ranked according to the sum of all KPIs defined in the interface, and the best alternative is selected as the implementation. If none of the implementations are available, however, it can still be important for roadmap engineers to analyze the ranking between those implementations. The unavailability might be caused by erroneous requirements, or the implementations might each fail to satisfy different requirements. These situations, where none of the existing solutions are fully satisfying, provide important insights into the tradeoffs between different alternatives, and can help in identifying future needs. Therefore, our automated selection mechanism also proceeds in cases where all implementations are unavailable. If no KPIs are defined for an interface, then no automated selection is performed for that interface, and the solution space is left open.

In our example, the user specified the KPI num(Watchdog), which evaluates to 1 or 0 depending on the value of Watchdog. The function call to num ensures that boolean values are converted to numbers, which is required by our automated selection mechanism in order to compare different KPI values. If both alternative implementations were available, the system would prefer the one with a watchdog. However, since EFuse is not available in the example, BlFuse is selected.
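The automated selection can be sketched as ranking each implementation by the sum of the interface's KPIs, reusing the helpers from the previous sketch; eval_kpi is a hypothetical callback that evaluates a KPI metric in the context of a given implementation at roadmap time T:

    def select_implementation(model, interface, T, eval_kpi, eval_req):
        candidates = impls(model, interface)
        if not candidates or not interface.kpis:
            return None                              # no alternatives, or solution space left open
        available_ones = [b for b in candidates if available(model, b, T, eval_req)]
        pool = available_ones or candidates          # prefer available implementations, but still
                                                     # rank them if none is available
        return max(pool, key=lambda impl: sum(eval_kpi(kpi, impl, T) for kpi in interface.kpis))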

The automated selection mechanism based on KPIs is all the more important when introducing time-dependence into the model, as the user would otherwise have to make a manual selection of implementations at each point in time. In this case, the model-wide solution space also becomes time-dependent. If the user specifies a point in time (via the time slider that we provide in our user interface), and manually selects solution alternatives for all interfaces that were not automatically selected via KPIs, then the resulting visible model represents a single configuration. If, instead, one or more solution spaces are left undecided, then the visible model represents a set of configurations. Furthermore, if the user wants to inspect the set(s) of configurations represented by the model across all points in time, then this is possible using several visualizations in the user interface, like plots showing values over time, or a chart overview that shows the time-dependent availabilities of all blocks within a model (cf. Section 6).

4.2 Expression Language

We use an embedded textual expression language for specifying properties, requirements, and KPIs. The language is expressive enough to represent complex formulas, while still being simple enough to be written and understood by domain experts. Since Microsoft’s office suite, especially Excel, is widely used in industry, as reported by our industrial partners, the syntax and functionality of our language are similar to Excel formulas. Our aim is to enable domain experts to easily apply their existing skills to our roadmap modeling and analysis approach.

This intended simplicity led us to avoid existing languages like OCL Object Management Group (OMG) (2014a) due to their complexity and the effort required to implement the additional usability features that will be explained in Section 6. The language provides a set of literals, including literals to specify SI units, identifiers, arithmetic and relational operators, conditional expressions, aggregation expressions, and function calls. Appendix C.1 describes the individual language elements in more detail. Furthermore, Section 4.3.1 will extend the language to add more time-related constructs.

The combination of our structural modeling language and the embedded expression language allows us to create a model representing the interconnected composition of technologies relevant for technology roadmapping. Such a model, however, does not yet support any variability between different points in time.

4.3 Time Dependence

In our context, the term time refers to an arbitrary point in time in the future, when the model is supposed to be relevant. A concrete point in time is specified by a date. Since we use modeling primarily for roadmapping of new technologies, looking at a period of 10 – 30 years in the future, we use a granularity of years and months for our dates.

This definition of the concept of time differs from the clock-based realtime usually found in formal models, like in Time Petri Nets Cassez and Roux (2005) or Timed Automata Alur and Dill (1994), or in realtime extensions of the UML Object Management Group (OMG) (2019b). As both definitions of time can play a role in our models, e.g., when specifying requirements for the reaction time of a system in the year 2030, we use the terms roadmap time and runtime to distinguish between the two implied meanings of the term.

We further define two forms of time-dependence:

  1. Local Time Dependence: the value of a property changes over time. This can be useful, for example, to model the predicted increase of the power consumption of car components over time, or the decreasing legal thresholds on CO2 emissions over time.

  2. Structural Time Dependence: structural aspects of the model change over time. At different points in time, the structural composition of the model can be different, containing different sets of blocks. This can be useful, for example, to model emerging technologies, like the availability of reversible smart fuses.

In the following we describe how our modeling language has been extended to support both forms of time-dependence.

4.3.1 Local Time Dependence

The values of all properties in our model are defined by their respective formulas. In order to express time dependence within properties, we add an implicit parameter T to each property, representing an arbitrary but fixed point in time. This parameter can be referenced within the formula, using the literal T, to make the resulting value of the property time-dependent. This literal T is, for example, used in Figure 1 in the property TFLOPS of the block DetectionSoftware.

When solving the whole model, we instantiate all properties within the model with a global user-selected value for T. In our implementation, this value is specified by the user via a time slider in the UI, as will be shown in Section 6.3.
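Conceptually, every property formula thus behaves like a function of the implicit parameter T. The following Python analogue illustrates this with a linearly growing value; the anchor years follow the running example, but the growth endpoints (10 and 100 TFLOPS) are assumed purely for illustration:

    from datetime import date

    def months_between(start, end):
        return (end.year - start.year) * 12 + (end.month - start.month)

    # a time-dependent property value, analogous to a formula referencing the literal T
    def tflops(T):
        start, end = date(2021, 1, 1), date(2035, 1, 1)       # anchor dates of the assumed growth
        progress = months_between(start, T) / months_between(start, end)
        progress = min(max(progress, 0.0), 1.0)               # clamp outside the modeled range
        return 10 + progress * (100 - 10)                     # assumed start/end values

    print(tflops(date(2021, 1, 1)))   # value at the beginning of the roadmap
    print(tflops(date(2030, 1, 1)))   # value at the user-selected time T = Jan2030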

We further extend our expression language with additional capabilities to work with time. For this, we make time a first-class citizen of our expression language by defining two new types of values, in addition to boolean values and numbers:

  • A date specifies a single point in time.

  • A duration specifies the length of a time interval.

Both value types can be created by respective literals in our embedded expression language:

  • Date literals specify a month name and year in the form MonthYear, e.g., Feb2035.

  • Duration expressions specify a number of months in the forms months(n) and years(n). For example, months(24) and years(2) both specify a duration of 24 months.

We distinguish between these two types of time values in order to provide different textual representations for the user (e.g., 2021 years 7 months for a duration vs. Aug2021 for the corresponding date), and in order to define unambiguous arithmetic operations. For example, multiplying a date by the number 2 is ambiguous, as it depends on the neutral element of the applied calendar system (i.e., year 0). Multiplying a duration by the number 2, however, is well-defined, as it simply doubles the number of months specified by the duration. Appendix C.2 lists all arithmetic and relational operators that we support on dates and durations.
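One possible realization of this distinction is to represent a duration as a plain number of months and a date as the number of months counted from year 0 (so that Aug2021 corresponds to 2021 years and 7 months). The sketch below defines only the unambiguous operations; multiplying a date is deliberately not defined:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Duration:
        months: int
        def __add__(self, other):            # duration + duration -> duration
            return Duration(self.months + other.months)
        def __mul__(self, factor):           # duration * number -> duration (well-defined)
            return Duration(self.months * factor)

    @dataclass(frozen=True)
    class Date:
        months_since_year_0: int             # e.g. Aug2021 -> 2021 * 12 + 7
        def __add__(self, d):                # date + duration -> date
            return Date(self.months_since_year_0 + d.months)
        def __sub__(self, other):
            if isinstance(other, Duration):  # date - duration -> date
                return Date(self.months_since_year_0 - other.months)
            return Duration(self.months_since_year_0 - other.months_since_year_0)  # date - date

    def months(n): return Duration(n)
    def years(n): return Duration(12 * n)

    jan2022 = Date(2022 * 12)
    print(jan2022 - years(2) == Date(2020 * 12))   # True: Jan2022 - years(2) = Jan2020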

In addition to these changes to the expression language, we also extend the meaning of identifiers. Each identifier, referencing either a property or a block availability in the model, can now be suffixed by an additional time argument, e.g., TFLOPS(Jan2030). If no argument is specified, the implicit argument (T) is used. This makes it possible to query the value of properties or the availability of blocks at a time different from the current T.

4.3.2 Structural Time Dependence

Time dependent properties do not affect the structure of the model, i.e., the defined set of blocks and their associations. In the context of roadmapping, however, we want to specify different configurations of blocks at different points in time, using a single model. This allows us to express, for example, that we predict new technologies to become available in the future, or that existing technologies become unavailable due to changes in legislation.

In order to allow the structure to change depending on the selected point in time, we make use of the concept of availability. The availability of a block is based on the satisfiability of its requirements. By adding the same parameter T to requirements that we also added to properties, we can in consequence vary the availability of a block over time. In our running example in Figure 1, this is used in the block EFuse, where the requirement at the bottom states that the whole block is unavailable before January 2022 plus an additional time frame of 12 to 36 months (see Subsection 4.4 for more details). As another example, by adding a requirement A(T - years(2)) to a block B, we cause block B to become available two years after block A becomes available. With this requirement, the availability of block B at any point in time T, e.g., at T = Jan2022, is equal to the availability of block A two years prior, i.e., at Jan2022 - years(2) = Jan2020.

The combination of time-dependent properties, requirements, KPIs, and availabilities leads not only to a time-dependent model, but also to a time-dependent solution space. When instantiating a model with two different values of T, all properties, requirements, and KPIs within the model can evaluate to different values. This in turn can lead to different outcomes for all block availabilities, and therefore different solution spaces for each interface. The resulting two model slices can therefore differ significantly from each other, both in terms of local values and global structure.

4.4 Dealing with uncertainty

When working with time-dependent data and models from real-world applications, many values are not known exactly in advance, yet constrained to lie in a certain range. Thus the modeling language provides interval-semantics as a way of expressing those uncertainties.

For instance, in our running example, the development of the EFuse is planned to start in January 2022. Yet, because of some unknown factors, the effective time-to-market is not known beforehand and estimated to require between 12 and 36 months. In our modeling language, we can express this with the expression T >= Jan2022 + [months(12)..months(36)].

With [months(12)..months(36)] we define an interval with a lower and an upper bound, represented by [lower..upper]. This interval defines the mathematical value range between the given bounds as a closed interval in ℝ: { x ∈ ℝ | lower ≤ x ≤ upper }. Because we use those intervals to represent uncertainties, the expression x = [lower..upper] is not to be mistaken as x being the interval itself. Since intervals are used to represent value ranges, the expression x = [lower..upper] is to be interpreted as a constraint for x to lie in the given range.

In order to adequately support intervals, all basic arithmetic operations are implemented on intervals. All of them are inclusion isotonic Moore (1966), meaning that for intervals A, B, A′, and B′ with A ⊆ A′ and B ⊆ B′, every interval operation ∘ satisfies the requirement:

A ∘ B ⊆ A′ ∘ B′

Although this is easy to ensure for some operations, division is more difficult as the divisor may contain 0. Consult Appendix C.3 for more details.
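A sketch of inclusion-isotonic interval arithmetic in Python; treating a divisor that contains 0 as an invalid (empty) result is a simplification chosen for this sketch, not necessarily the strategy described in Appendix C.3:

    from dataclasses import dataclass
    import math

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float
        @property
        def empty(self):
            return self.lo > self.hi

    EMPTY = Interval(math.inf, -math.inf)             # canonical empty (tainted) interval

    def add(a, b):
        if a.empty or b.empty: return EMPTY
        return Interval(a.lo + b.lo, a.hi + b.hi)

    def mul(a, b):
        if a.empty or b.empty: return EMPTY
        products = [a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi]
        return Interval(min(products), max(products))

    def div(a, b):
        if a.empty or b.empty or b.lo <= 0 <= b.hi:   # divisor may contain 0
            return EMPTY                              # simplification: treat as invalid
        return mul(a, Interval(1 / b.hi, 1 / b.lo))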

Alongside the arithmetic operations, intersections are implemented as well. If the intervals [a..b] and [c..d] overlap, the resulting interval is defined as:

[a..b] ∩ [c..d] = [max(a, c)..min(b, d)]

Otherwise (that is, if the intervals are disjoint) the resulting interval is empty. Therefore, the intersection of two overlapping intervals evaluates to their common range, while the intersection of two disjoint intervals evaluates to the empty interval. Since intervals denote a range of values, the empty interval will be tainted to flag an invalid value: a property that cannot evaluate to any value is invalid and therefore all of its usages are invalid too. To propagate the erroneous value, any operation performed with an empty interval will result in an empty interval (and therefore a tainted value) again.
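Intersection and the propagation of the empty (tainted) interval can be sketched as follows, reusing the Interval type and the add operation from the sketch above:

    def intersect(a, b):
        # overlapping intervals yield their common range, disjoint intervals yield EMPTY
        if a.empty or b.empty:
            return EMPTY
        lo, hi = max(a.lo, b.lo), min(a.hi, b.hi)
        return Interval(lo, hi) if lo <= hi else EMPTY

    print(intersect(Interval(1, 5), Interval(3, 8)))        # Interval(lo=3, hi=5)
    print(intersect(Interval(1, 2), Interval(4, 6)).empty)  # True: disjoint -> tainted
    print(add(intersect(Interval(1, 2), Interval(4, 6)), Interval(0, 1)).empty)  # True: taint propagates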

Constraints like [a..b] >= c are implemented as follows: the comparison evaluates to true if a >= c, to false if b < c, and to the uncertainty value maybe (introduced below) otherwise.

Furthermore, the utility functions (min, max, sin, …) have been extended to support intervals. With PI being a constant for the mathematical constant π, expressions like min(sin([-PI..PI]/3), 5, [-1..3]) evaluate to the value range [-1..0.87] (approximately), and non-monotonic functions (like sin) are not just evaluated based on corner cases, since that could possibly miss local minima/maxima. Instead, we perform an analytically correct computation equivalent to taking the union of all possible outputs when using the whole input domain.
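For a non-monotonic function such as sin, an analytically correct interval version has to check whether an interior extremum falls inside the input interval rather than only evaluating the endpoints. A sketch reusing the Interval type from above (angles in radians):

    def contains_point(x, offset, period=2 * math.pi):
        # True if offset + k*period lies in [x.lo, x.hi] for some integer k
        k = math.ceil((x.lo - offset) / period)
        return offset + k * period <= x.hi

    def interval_sin(x):
        if x.empty:
            return EMPTY
        lo, hi = sorted((math.sin(x.lo), math.sin(x.hi)))
        if contains_point(x, math.pi / 2):    # a maximum of sin lies inside the interval
            hi = 1.0
        if contains_point(x, -math.pi / 2):   # a minimum of sin lies inside the interval
            lo = -1.0
        return Interval(lo, hi)

    print(interval_sin(Interval(-math.pi / 3, math.pi / 3)))  # roughly Interval(-0.87, 0.87)
    print(interval_sin(Interval(0, math.pi)))                 # Interval(0.0, 1.0): maximum at pi/2 included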


Since intervals only represent the value range of a property, comparisons like x == 3 are not bound to plain Boolean logic. If the value range computed for x is an interval that contains 3 among other values, x == 3 should yield neither true nor false. Therefore, the Boolean logic is extended by the uncertainty value maybe, which can be viewed as the interval [false..true]. All three important operations (not, and, or) are defined by the following table, similar to Kleene’s strong logic of indeterminacy:

A      B      !A     A & B   A | B
false  maybe  true   false   maybe
maybe  false  maybe  false   maybe
true   maybe  false  maybe   true
maybe  true   maybe  maybe   true
maybe  maybe  maybe  maybe   maybe

As a result, the comparison [1..3] = [0..2] evaluates to maybe, but [-2m..1m] <= [5m..6m] evaluates to true because even the maximum possible value on the left hand side (being 1 meter) is lower than the minimum possible value on the right hand side (being 5 meters). More precisely, the equal-comparison of two intervals is true if and only if both intervals contain exactly one value (e.g. [2m..2m]) and this value is the same for both intervals (if the value differs, the comparison yields false). If at least one of the intervals contains more than one number (e.g. [0m..12m]), the comparison yields maybe if both intervals overlap and false if they are disjoint.
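The three-valued logic and the interval comparisons can be sketched as follows; representing maybe as Python's None is an assumption of this sketch, and Interval is the type from the sketches above:

    MAYBE = None   # third truth value

    def t_not(a):
        return MAYBE if a is MAYBE else (not a)

    def t_and(a, b):
        if a is False or b is False: return False
        if a is MAYBE or b is MAYBE: return MAYBE
        return True

    def t_or(a, b):
        if a is True or b is True: return True
        if a is MAYBE or b is MAYBE: return MAYBE
        return False

    def ge(a, b):
        # [a.lo..a.hi] >= [b.lo..b.hi]: true if it holds for every pair of values,
        # false if it holds for none, maybe otherwise
        if a.lo >= b.hi: return True
        if a.hi < b.lo: return False
        return MAYBE

    def eq(a, b):
        if a.lo == a.hi == b.lo == b.hi: return True   # both contain exactly the same single value
        if a.hi < b.lo or b.hi < a.lo: return False    # disjoint intervals
        return MAYBE                                   # overlapping, at least one is a real range

    print(eq(Interval(1, 3), Interval(0, 2)))    # None, i.e. maybe
    print(ge(Interval(5, 6), Interval(-2, 1)))   # True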

5 Generating and solving the constraint system

A model as defined through the presented meta-model includes several distinct semantic aspects, which need to be taken into account when deciding on an evaluation technique:

  • Inheritance relations implicitly define additional model elements.

  • Hierarchical nesting between blocks affects availabilities and identifier scoping.

  • Requirements affect availabilities.

  • Property values depend on the active solution alternatives.

  • Automated choice between solution alternatives is determined by comparing different KPIs and availabilities.

In order to avoid dealing with all these aspects at once, we first reduce the complexity of the model in the following phases by transforming it into intermediate forms of decreasing complexity:

  1. The model is expanded by explicitly creating properties and requirements that are defined implicitly by inheritance relations. After this phase, all relevant model elements exist explicitly.

  2. Identifiers in expressions are resolved based on hierarchical relations. After this phase, the hierarchy within a model can be ignored apart from its effects on availabilities.

  3. The model is lowered into a flat constraint system. This phase abstracts away the remaining aspects, producing a form that can be processed without having to take into account availabilities and solution spaces.

The constraint system is then solved using repeated symbolic transformations, which result in a simplified constraint system containing value bounds for the various model elements. Afterwards, the resulting value bounds are displayed in the user interface, as will be shown in Section 6. The following sections explain in more detail the three transformation phases, as well as the actual solving process of the constraint system.
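Taken together, the analysis can be read as a small pipeline; the callback names below are placeholders for the phases described in the following subsections, not the actual Iris API:

    def analyze(model, T, expand_inheritance, resolve_identifiers, generate_constraints, solve):
        expanded = expand_inheritance(model)            # phase 1: materialize inherited elements
        resolved = resolve_identifiers(expanded)        # phase 2: bind identifiers, expand aggregations
        constraints = generate_constraints(resolved)    # phase 3: lower to a flat constraint system
        return solve(constraints, T)                    # symbolic solving yields value bounds per element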

5.1 Expanding inheritance

This phase works by walking along all inheritance relations in the model and carrying over properties and requirements. Topological sorting is used to correctly handle transitive inheritance. In case of properties, explicitly defined properties overwrite inherited ones with the same name. If a property name is inherited through multiple inheritance relations, then the deterministic lexical order of relations is used to resolve conflicts. Requirements are inherited unconditionally. KPIs, however, are not inherited. Since KPIs are used to assist an automated strategy selecting between different solution alternatives, such KPIs should stay local to the context where a solution alternative is applied, and not where it is defined.

In our running example, the inheritance relation between the blocks Fuse and BlFuse results in the creation of the inherited requirement MaxLoadCurrent >= Vehicle.TotalCurrent, and the property BatteryVoltage. The other two properties MaxLoadCurrent and Watchdog are overwritten in the block BlFuse. The KPI num(Watchdog) is not inherited.

The expansion of inheritance relations creates independent copies of model elements, which is important, because inherited properties often result in a different value than their ancestors due to referenced properties being overwritten.
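A sketch of this expansion phase, reusing the illustrative classes from Section 4.1; for simplicity, name conflicts between multiple interfaces are resolved by letting the first listed interface win, which approximates the deterministic lexical order mentioned above:

    from copy import deepcopy
    from graphlib import TopologicalSorter

    def expand_inheritance(model):
        by_name = {b.name: b for b in model.blocks}
        # interfaces must be expanded before the blocks that implement them
        graph = {b.name: [i.name for i in b.interfaces] for b in model.blocks}
        for name in TopologicalSorter(graph).static_order():
            block = by_name[name]
            inherited = {}
            for interface in block.interfaces:
                for prop in interface.properties:
                    inherited.setdefault(prop.name, prop)       # first listed interface wins
            local_names = {p.name for p in block.properties}
            # independent copies of inherited elements; local definitions overwrite inherited ones
            inherited_props = [deepcopy(p) for n, p in inherited.items() if n not in local_names]
            inherited_reqs = [deepcopy(r) for i in block.interfaces for r in i.requirements]
            block.properties = inherited_props + block.properties
            block.requirements = inherited_reqs + block.requirements
            # KPIs are intentionally not inherited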

5.2 Resolving identifiers

After the expansion of inheritance relations, all identifiers used in expressions throughout the model are resolved to their respective model elements. This resolving is performed based on static scoping rules determined by the block hierarchy: names are first looked up locally within the respective block, and, if not found, recursively within its hierarchical ancestors.

Almost all expressions are resolved within the scope of their containing block. The only exception is for KPI expressions, which need to be resolved separately within the scope of each solution alternative. This ensures that the expression measures the quality or fitness of the solution alternative, and not of the interface itself.

Special care needs to be taken for aggregations like SUM(expr). These expressions compute a function over all direct descendant blocks of a common parent block. In order to “resolve” the hierarchical relationships, we replace aggregations with their expanded form. Therefore the implementation of each aggregation does not perform the computation itself, but instead performs the transformation that lowers the aggregation into a semantically equivalent expanded expression.

In our running example, the aggregation SUM(Current) in the property Vehicle.TotalCurrent is replaced with the expression Headlights.Current + ProcessingUnits.Current, which in turn gets resolved to the actual properties.
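The lowering of SUM can be sketched as follows; Expr is a reduced, hypothetical expression type, and resolveInChild stands in for the per-child scope resolution described above.

// Hypothetical sketch of lowering SUM(expr) into an expanded sum over children.
interface SumAggregate { kind: "aggregate"; op: "SUM"; body: Expr }
type Expr =
  | { kind: "num"; value: number }
  | { kind: "ref"; name: string }
  | { kind: "add"; left: Expr; right: Expr }
  | SumAggregate;

function lowerSum(agg: SumAggregate, children: string[],
                  resolveInChild: (body: Expr, child: string) => Expr): Expr {
  // Resolve the aggregation body once per direct child of the parent block ...
  const terms = children.map(c => resolveInChild(agg.body, c));
  if (terms.length === 0) return { kind: "num", value: 0 }; // empty sum
  // ... and chain the results into a semantically equivalent sum expression.
  return terms.reduce((acc, t): Expr => ({ kind: "add", left: acc, right: t }));
}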

After this phase, the hierarchical relationships within a model can be ignored, apart from its effects on availabilities.

5.3 Generating the constraint system

Finally, the expanded and resolved model is converted into a flat constraint system. This constraint system consists of a set of true statements, i.e. expressions in our expression language that must always evaluate to true.

Given the capabilities of our expression language, we can already use it to express such statements over property values, like for example A.P(T) = 1, meaning that the value of property P of block A at time T is 1. However, this is not yet sufficient to generate a constraint system that captures the full semantics of our modeling language. For example, we cannot express a constraint stating that a block becomes unavailable if one of its requirements is not satisfied, because we have no means to reference model aspects like availabilities. To address this, we extend our expression language to allow the following types of references:

  • References of the form A.?requirementN(T), which represent the value of the N-th requirement of block A at time T.

  • References of the form A.?kpiN(B, T), which represent the value of the N-th KPI of block A, when evaluated in the context of the derived block B at time T.

  • References of the form A.?availability(T), which represent the availability of block A at time T.

  • References of the form A.?replacement(T), which represent the index of the active solution alternative of block A at time T, or the value -1 if block A has no solution alternatives.

The prefix ? ensures that the new references do not clash syntactically with user-defined block or property names.

In order to create the initial constraint system, we iterate over all blocks, properties, requirements, and KPIs in the resolved model and generate one or more constraints for each of these model elements as follows:

For each property P of block A with expression <expr> we generate a constraint that ensures that the value of A.P at time T is equal to <expr>. In the simple case where A has no solution alternatives, this could be achieved with the constraint A.P(T) = <expr>. However, if block A does have solution alternatives B_1, ..., B_N, the value of the property should instead be equal to the value of the corresponding property B_k.P in the selected solution alternative. In order to handle both cases uniformly, we generate the following constraint:

A.P(T) = (     if (A.?replacement(T) = 1) then B_1.P(T)
               ...
          else if (A.?replacement(T) = N) then B_N.P(T)
          else <expr>)

This expression maps the property A.P to one of the properties B_k.P, with k determined by the value of A.?replacement.

Note that if the expression <expr> refers to T, then it uses the time parameter provided to A.P instead of the global time. This makes it possible to refer to the property A.P at different points in time by writing, for example, A.P(Jan2030).

For each N-th requirement of block A with expression <expr> we generate the constraint A.?requirementN(T) = <expr>. The selection process based on A.?replacement, that we used for properties, is not necessary for requirements, since requirements cannot be overwritten in derived blocks.

For each N-th KPI of block A with expression <expr>, we generate a constraint for each solution alternative of block A. The constraint has the form A.?kpiN(B_i, T) = <expr>, where <expr> is resolved within the scope of B_i. Therefore, the value of A.?kpiN(B_i, T) is equal to the value of the KPI metric evaluated for the solution alternative B_i.

For each block A we generate two additional constraints to handle availabilities and the automated selection of solution alternatives based on KPIs.

In order to handle the availability of block A, we generate the following constraint:

A.?availability(T) =
    (     if (A.?replacement(T) = 1) then B_1.?availability(T)
          ...
     else if (A.?replacement(T) = N) then B_N.?availability(T)
     else true)
  & (A.?requirement1(T) & ... & A.?requirementM(T))
  & (C_1.?availability(T) & ... & C_K.?availability(T))

Here, B_1, ..., B_N denote the solution alternatives of block A, M is the count of requirements in block A, and C_1, ..., C_K are the direct children of block A. The constraint states that block A is available iff the selected solution alternative is available, all requirements are satisfied, and all of its direct descendants are available. If block A has no implementations, then the first part of the constraint collapses to true, and the availability is solely determined by its requirements and descendants.

In order to realize the automated selection of solution alternatives based on KPIs, we add the following constraint for each block A:

A.?replacement(T) = index_of_max(
    if B_1.?availability(T)
       then (A.?kpi1(B_1, T) + ... + A.?kpiM(B_1, T))
       else -inf,
    ...,
    if B_N.?availability(T)
       then (A.?kpi1(B_N, T) + ... + A.?kpiM(B_N, T))
       else -inf
)

Again, B_1, ..., B_N denote the solution alternatives of block A. The parameter M represents the count of KPIs in block A. The function index_of_max returns the 1-based index of its (first) largest argument. We use this to determine the index of the solution alternative which results in the largest KPI value. The if-conditions make sure that only currently available solution alternatives are considered, unless none are available; in that case, the result is simply the first solution alternative. If block A has no solution alternatives, then we add the constraint A.?replacement(T) = -1 instead.

Appendix A shows a listing of the whole constraint system that is generated for our running example in Figure 1.

The presented rules to generate constraints allow us to fully encode, in the constraint system, the automated process of selecting the best solution alternative based on availability and KPIs. The advantage of such an approach is that, after generating the constraint system, further analysis steps can ignore the semantics of hierarchy, inheritance, availabilities, and KPIs. Instead, the analysis can focus on solving the plain constraint system, which greatly reduces complexity.

Each constraint in the generated constraint system is formulated as an equality (=), with a reference on the left hand side and an expression on the right hand side. These equalities cannot be solved by processing them like variable assignments in programming languages, where the result of the right hand side is assigned to the variable on the left. In general, there exists no strict evaluation order, since the references used in multiple constraints can form complex dependency graphs, or even cycles. The following section describes our approach to solving such constraint systems.

5.4 Solving the constraint system

Given the target user group of roadmap engineers, we have two objectives in solving the constraint system:

  1. We want to determine the set of possible values (including ranges due to uncertainties) for all properties, requirements, availabilities, and solution alternative selections within the model.

  2. We want to provide tracing information for all results in order to assist roadmap engineers to identify problems and opportunities within the model.

Determining the set of all possible value ranges could be achieved with existing tools like SMT solvers (Satisfiability modulo theories, cf. De Moura and Bjørner (2011)). Sprey, Sundermann et al. Sprey et al. (2020), for instance, have applied SMT solvers in a similar setting to determine attribute ranges for extended feature trees Benavides et al. (2010) by performing individual optimization analyses to find the lowest and highest possible values. Extended feature trees share some similarities with our modeling language by allowing features in a feature tree to have attributes with assigned values. However, adapting such an SMT-based approach to our modeling language semantics would require support for interval arithmetic in order to solve constraint systems in the presence of uncertain value ranges (cf. Neumaier (1991)). Furthermore, the findings by Sprey and Sundermann Sprey and Sundermann (2018) suggest that the performance of multiple SMT-based optimization analyses per model version is not sufficient for an interactive workflow.

Our second objective, the need to generate tracing information for the user, is difficult to achieve with existing solutions, as this would usually require performing additional solver analyses to extract, for example, a minimal (un)satisfiable core. Another possible approach could be to extract tracing information from proof objects, which some SMT solvers like Z3 de Moura and Bjørner (2008b) are able to produce de Moura and Bjørner (2008a). Such proof objects describe the sequence of steps and transformations necessary to produce the solver result. However, the generated proof objects are intended for external verification with theorem provers, and it is unclear whether it would be possible to extract our required tracing information through post processing. This led us to implement and integrate our own solution for solving the generated constraint system based on symbolic transformations and the interval arithmetic as presented in Section 4.4.

In order to find all value ranges, we simplify the constraint system by repeatedly applying symbolic transformations on all constraints until a fix-point is reached. This approach does not always terminate, e.g., in cases where the solution continuously approaches a fix-point with increasing precision without ever reaching it, or in cases where our transformations generate increasingly large constraints. To accommodate this, our solver operates in rounds and sets a threshold (default: 50) on the number of rounds. If the threshold is reached, the bounds inferred up to that point are not wrong, but form a safe super-set, because the solver starts with infinitely large bounds and continuously narrows them down with each inferred constraint. This means that if not all of the information contained within the initial constraint system can be processed, the resulting bounds might contain values that do not actually solve the constraint system. However, the bounds will never exclude valid results.
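The round-based loop can be sketched as follows in TypeScript; applyTransformations is a placeholder for one round of symbolic transformations over all constraints and does not reflect the actual solver API.

// Hypothetical sketch of the round-based fix-point loop (default threshold: 50 rounds).
type Constraint = unknown;

declare function applyTransformations(
  cs: Constraint[]
): { constraints: Constraint[]; changed: boolean };

function solve(initial: Constraint[], maxRounds = 50): Constraint[] {
  let constraints = initial;
  for (let round = 0; round < maxRounds; round++) {
    const { constraints: next, changed } = applyTransformations(constraints);
    constraints = next;
    if (!changed) break; // fix-point reached, no rule produced a change
  }
  // If the threshold was hit, the bounds inferred so far remain a safe
  // over-approximation: possibly too wide, but never excluding valid results.
  return constraints;
}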

The goal of all transformations is to achieve a normal form, where constraints are =-relations with a single reference on the left side, and an interval on the right side. The interval on the right can then be displayed in the user interface.

To achieve this, we employ different types of symbolic transformations commonly found in theorem provers Nipkow et al. (2002), like folding constant expressions, propagating inferred value ranges, and using algebraic properties like associativity and commutativity to create new opportunities for simplification. Examples for different classes of transformations that we employ are given in B.
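As an illustration of one such transformation, the following hypothetical sketch folds constant sub-expressions in a small expression type; the actual transformation rules in Iris operate on richer syntax trees with intervals and ternary logic.

// Hypothetical sketch of constant folding on a reduced expression type.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "ref"; name: string }
  | { kind: "add"; left: Expr; right: Expr };

function foldConstants(e: Expr): Expr {
  if (e.kind !== "add") return e;
  const left = foldConstants(e.left);
  const right = foldConstants(e.right);
  if (left.kind === "num" && right.kind === "num") {
    return { kind: "num", value: left.value + right.value }; // e.g. 2 + 3 becomes 5
  }
  return { kind: "add", left, right }; // keep the structure, children simplified
}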

All symbolic transformations preserve the correctness of the constraint system, which means that all value assignments that were valid before a transformation are also valid afterwards. However, in some cases transformations are allowed to introduce a relaxation, meaning that after applying the transformation the resulting constraint system might allow more satisfying assignments than before. This happens, for example, when merging resulting value ranges of interval operations. The resulting constraint system therefore represents a conservative over-approximation of the model. This is also the case if the final constraint system contains constraints that did not reach normal form, and therefore do not further restrict the inferred bounds.

Figure 3: Screenshot of solver traces for the computation of Fuse.MaxLoadCurrent highlighted in red and green.

Every application of a transformation rule annotates the resulting expression with the constraints utilized in the transformation. After solving the constraint system, we follow these trace annotations recursively to collect all relevant initial constraints for each resulting value range. This is similar to program slicing Tip (1995) in the area of programming language analysis and enables us to visualize the information through the user interface by clicking on a displayed value. This can be seen in Figure 3. It enables the user to understand which syntax elements have influenced the clicked value.

In this visualization, a trace is shown for the value 50A of the property Fuse.MaxLoadCurrent. The tracing information helps users identify all model elements that contribute to an inferred value. When solving the constraint system, each newly created constraint is annotated with the set of all previous constraints and model elements that have contributed to its creation. Every time a constraint reaches normal form and affects the bounds of the property Fuse.MaxLoadCurrent, the resulting bounds are annotated with the set of transitive dependencies of that constraint. This set of transitive dependencies includes all model elements that contribute, in one way or another, to the resulting value 50A. Since the value 50A depends on the automatic selection of BlFuse over EFuse, and this selection in turn depends on the satisfiability of the requirements in BlFuse and EFuse, the traces also contain all model elements necessary for the computation of Vehicle.TotalCurrent.
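The recursive collection of trace annotations can be sketched as follows, assuming a simplified and hypothetical TracedConstraint record; the field names are illustrative only.

// Hypothetical sketch of collecting trace information, similar to program slicing.
interface TracedConstraint {
  derivedFrom: TracedConstraint[]; // annotations added by transformation rules
  sourceElement?: string;          // id of the originating model element, if any
}

// Follow the annotations recursively to find all model elements that
// contributed, directly or transitively, to a resulting value range.
function collectContributingElements(
  c: TracedConstraint,
  visited: Set<TracedConstraint> = new Set<TracedConstraint>(),
  elements: Set<string> = new Set<string>()
): Set<string> {
  if (visited.has(c)) return elements; // guard against shared or cyclic traces
  visited.add(c);
  if (c.sourceElement !== undefined) elements.add(c.sourceElement);
  for (const origin of c.derivedFrom) {
    collectContributingElements(origin, visited, elements);
  }
  return elements;
}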

The traces are highlighted in the user interface as green and red boxes. Elements highlighted in green changed their value from T - 1 to T, whereas elements highlighted in red remained constant. This can be used by roadmap engineers to identify critical property changes that caused an unwanted value in the model, by setting the current time T to the first point in time where the unwanted value occurred, and then looking only at those model elements highlighted in green.

In terms of performance and scalability, our approach provides quick results for all examples used during our tests and the evaluation reported in Section 7, and supports an interactive workflow during typical modeling activities. However, the solver approach does not scale well to larger models without adaptation, where long dependency cycles between expressions would cause large intermediate expressions. Nonetheless, we have found the performance of our approach to be sufficient for our purposes, while it additionally provides the added value of traceability.

Finally, in terms of functional correctness, we employ a combination of manual and automated system and integration tests to verify the implementation of our solver. To this end, our current test suite consists of about 950 automated tests of varying granularity that reach a statement/branch coverage of / within the ternary logic and interval arithmetic, / within our symbolic transformations, and / within the surrounding solver. On top of that, all code is fully typed with TypeScript operating in strict-mode, which ensures a level of type-safety that would otherwise be hard to achieve in JavaScript.

6 Visualization of time dependent modeling concepts

Technical models contained in a model-based technology roadmap can be calculated and evaluated by a computer, but roadmapping and the technology selection based on it, as well as the resulting projection to a company’s strategy, is done by human (domain) experts. Hence, various challenges arise to make a time dependent model practical and readable for human users.

Most of our visualization techniques focus on the explainability of time-dependent aspects of the model. The variability of the model along the time dimension creates several challenges for an intuitive visual representation. In the following, we describe each challenge and our proposed solution. All screenshots shown in this paper are taken from our prototype Iris, which is available online (https://genial.uni-ulm.de/jss2021/).

6.1 Target User Group

Through our industrial partners we identified some typical users of a model-based roadmapping tool: In addition to systems engineers, technology scouts and technical experts can also use such a tool. All these various users have in common a deep knowledge of their own domain but might not be modeling experts. Therefore, we decided on a graphical syntax for our modeling language that takes up existing well-known graphical languages, such as SysML Object Management Group (OMG) (2017), to lower the entry barrier for non-modeling experts. Furthermore, we implemented the expression syntax as introduced in Section 4.2, inspired by spreadsheet tools.

In the following, we refer to all different types of users mentioned before and those interacting with the technology roadmap as roadmap engineer.

Figure 4: Reference highlighting supports tracing. Currently highlighted are explicit references of requirement MaxLoadCurrent >= Vehicle.TotalCurrent.

6.2 Reference Highlighting

All properties and requirements of a block can either be a simple value such as 5A (see block Headlights in Figure 1), or a formula containing references to other properties. To support easy tracing of references, we added color-based highlighting of referenced properties when a roadmap engineer clicks on a formula, as shown for the references which are part of the requirement formula MaxLoadCurrent >= Vehicle.TotalCurrent in Figure 4.

6.3 Manual Time Selection

Prospective roadmapping is an activity that looks into the future. Consequently, the model at a certain point in time, not at the present point, is of interest. A trivial approach that could be used with existing modeling tools is shown in Figure 5: the availability of block A depends on the availability of B and C. B is available if the time is larger than or equal to January 2025. C is available 3 years after the availability of B. Using this annotation, a user would have to manually evaluate the emerging equation for every point of interest. Although this is easy to solve for this small example, it becomes rather difficult with increasing complexity of a model such as our running example Fuse. For instance, the fuse's availability depends on the vehicle's TotalCurrent, which in turn is the sum of all currents of all consumers inside the vehicle (cf. Figure 1). But the current of the central processing units changes over time due to the linear growth of TFLOPS in the block DetectionSoftware.

Figure 5: A trivial example of modeling time-dependency

To address this complexity challenge, we introduce a time slider as shown in Figure 6. The time slider enables users to change T by dragging a handle (which looks like a car in our prototype) to a certain point of time. After the user has set T, Iris evaluates the equation system and displays the result.

Figure 6: Time slider to view model at certain point of time in the future

6.4 Availability Highlighting

Simply hiding or showing a block depending on its availability does not enable roadmap engineers to analyze time-dependency, availability, and changes over time. Therefore, to indicate time-dependency, we change the style of an element accordingly. We identified six different cases of availability that are relevant to roadmap engineers:

6.4.1 Always-available

If a block has no explicit requirements, we assume that it is implicitly always available. This can be regarded as a kind of baseline. Consequently, there is no need to highlight always available blocks in a special manner. As shown in Figure 1, the block Headlights is always available.

6.4.2 Currently-available

In contrast to always-available, if a roadmap engineer has explicitly modeled at least one requirement, a block becomes available if all its requirements evaluate to true at the current point in time. In Figure 7, block EFuse contains two requirements. The first one, MaxLoadCurrent >= Vehicle.TotalCurrent, is true, i.e. satisfied, as indicated by a green background color. However, EFuse is not available because of the second requirement, as we will discuss in the next case.

Figure 7: Block EFuse is not yet available but will be available in the future.

6.4.3 Not-yet-available

A requirement is currently not satisfied but will become satisfied in the future. We highlight this kind of requirement in yellow such that a roadmap engineer can identify which requirement is not yet satisfied, why this is the case, and when it will become satisfied. This knowledge can be used to focus on specific research and development to speed up a new innovation.

Back to Figure 7: for the currently selected T, the second requirement T >= Jan2022 + [months(12)..months(36)] evaluates to false, which in turn leads to the non-availability of EFuse. Changing T to a value greater than or equal to January 2023 (the date 12 months after January 2022) would maybe satisfy the requirement and thus the availability of EFuse. Changing T to a value greater than or equal to January 2025 satisfies the requirement and thus the availability of EFuse, because T >= Jan2022 + [months(12)..months(36)] is true for any such T.

6.4.4 No-longer-available

A requirement that has been satisfied before is no longer satisfied at the currently selected point in time. This can be seen in Figure 1: EFuse's MaxLoadCurrent is no longer greater than or equal to the vehicle's total current and is highlighted in orange.

6.4.5 Maybe-available

A requirement is maybe satisfied if the value of its expression evaluates to maybe. This is the case for the second requirement presented in Figure 7 and any T between January 2023 (Jan2023, inclusive) and January 2025 (Jan2025, exclusive), because the right hand side of the relation T >= Jan2022 + [months(12)..months(36)] evaluates to [Jan2023..Jan2025] (see Subsection 4.4), yielding the relation T >= [Jan2023..Jan2025].
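A minimal sketch of such a comparison under the ternary logic, assuming time points are encoded as plain numbers (e.g., months since some epoch), could look as follows; this is an illustration, not the actual Iris interval implementation.

// Hypothetical sketch: comparing a point in time against an uncertain interval.
type Ternary = "true" | "false" | "maybe";
interface Interval { lo: number; hi: number } // e.g., months since January 2000

function greaterOrEqual(t: number, rhs: Interval): Ternary {
  if (t >= rhs.hi) return "true";  // satisfied for every value in the interval
  if (t < rhs.lo) return "false";  // violated for every value in the interval
  return "maybe";                  // depends on where in the interval the bound lies
}

// Example: rhs stands for Jan2022 + [months(12)..months(36)], i.e. [Jan2023..Jan2025];
// any t within that interval yields "maybe".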

6.4.6 Never-available

A block can also be never available. We use red color to highlight unsatisfiable requirements to help roadmap engineers identify required innovation.

Figure 8: Processing Units' current increases over time because of the linearly growing TFLOPS of DetectionSoftware.

6.5 Time-dependency Highlighting

There are more aspects of time-dependency we did not cover so far. As already introduced, the value of a property can be time-dependent in the sense of changing over time. For example, as shown in Figure 8, the Current of the processing units increases linearly from 2021 to 2035 because it depends on the growing processing power in TFLOPS of DetectionSoftware, modeled via the formula linear(T, Jan2021,100, Jan2035,200). The result of a formula for the currently selected T is always displayed next to the formula. Apart from the current value, the change of a property's value over time is also important for roadmap engineers. Therefore, we plot a property's value curve. Figure 8 shows the curve of the increasing ProcessingUnits' current.
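A plausible reading of the linear helper is sketched below; whether the actual Iris built-in clamps or extrapolates outside the given time range is an assumption made here for illustration.

// Hypothetical sketch of linear(T, t0, v0, t1, v1): linear interpolation over time.
function linear(t: number, t0: number, v0: number, t1: number, v1: number): number {
  if (t <= t0) return v0;                         // assumed: clamp before the range
  if (t >= t1) return v1;                         // assumed: clamp after the range
  return v0 + ((t - t0) / (t1 - t0)) * (v1 - v0); // interpolate in between
}

// e.g., halfway between Jan2021 and Jan2035 the sketch yields 150 for the values 100 and 200.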

Furthermore, we also plot the satisfiability of blocks and requirements over time. Figure 7 shows the availability of EFuse. The plot shows that EFuse is not-yet-available at first, maybe available for a period of two years, becomes available in 2025, but will become unavailable again from 2027. This is the moment in time where the vehicle's TotalCurrent becomes greater than EFuse's MaxLoadCurrent, because of the increasing Current of the processing units.

Figure 9: Overview of the availability of all blocks over time.

In order to further inspect the availability of blocks, we also provide a summarizing overview, as shown in Figure 9. Here, the availability of each block is displayed as a sequence of colored bars according to its availability. A green bar indicates the case currently-available, with light blue indicating the sub-case always-available, whereas a yellow bar indicates the case not-yet-available and an orange bar the case no-longer-available. A half-green and half-red bar indicates the case maybe-available; the case never-available is not shown in this figure but would be visualized by a red bar.

6.6 Chosen technology framework

In the following, we sketch how we technically realized the presented language, the solving of the constraint system, and the visualization of the graphical models and the analysis results.

Our work was conducted in a collaborative research project with industrial partners from the embedded systems domain. They stated two key requirements in order to ensure applicability in industrial practice: (1) the system had to be web-based in order to avoid local installations and ensure an easy central upgrade process, and (2) it had to offer high usability for industrial domain experts. In our opinion, both requirements are not met by today's frameworks and technologies for the development of industrial-grade applications, e.g., the EMF ecosystem.

With respect to web-based systems, technologies like Sirius Web (https://www.eclipse.org/sirius/sirius-web.html) were not yet available when we started our research in 2018. With respect to usability, our personal experience stemming from nearly two decades of developing domain-specific modeling languages and tools (e.g. Burmester et al. (2004); Priesterjahn et al. (2007); Maro et al. (2015); Strüber et al. (2017); Tichy et al. (2020)) is that there is an inherent trade-off between productivity gained by using off-the-shelf modeling technology frameworks and flexibility gained by developing a modeling environment for a specific domain-specific language from scratch using the much broader programming language ecosystem. When starting the development, we chose the JavaScript/TypeScript ecosystem (https://www.typescriptlang.org/) with the technologies Node.js (https://nodejs.org/) and React (https://reactjs.org/) to develop the tool, as we valued the flexibility for our concrete case higher than the productivity gain of modeling technology frameworks. For example, this flexibility allowed us to easily visualize results from the solving inside the concrete graphical syntax as seen in Figures 7 and 8, support collaboration features in Iris (naturally embedded into the architecture of Flux Pietron et al. (2021)), as well as easily integrate external systems from our industrial partners via REST-APIs.

Nevertheless, we use the standard ingredients of modeling language engineering: meta-models, models, model transformations, and a concrete graphical syntax. Our meta-model from Section 4.1 is defined using TypeScript interfaces. An important aspect here is that all attributes are read-only as our models (as JavaScript objects conforming to the interfaces) are immutable and form persistent data-structures. Hence, all user changes result in “new” immutable models while applying substructure sharing to minimize the memory impact. The immutability ensures that all changes are done centrally in change actions and not in arbitrary places throughout the code.
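The following hypothetical sketch illustrates such an immutable change action with substructure sharing; the interfaces are simplified stand-ins, not the actual TypeScript meta-model of Iris.

// Hypothetical sketch of an immutable change action with substructure sharing.
interface Property { readonly name: string; readonly expression: string }
interface BlockNode { readonly name: string; readonly properties: readonly Property[] }
interface RoadmapModel { readonly blocks: readonly BlockNode[] }

// Returns a new model; all untouched blocks and properties are shared by reference.
function setPropertyExpression(model: RoadmapModel, blockName: string,
                               propName: string, expression: string): RoadmapModel {
  return {
    blocks: model.blocks.map(b =>
      b.name !== blockName ? b : {
        ...b,
        properties: b.properties.map(p =>
          p.name !== propName ? p : { ...p, expression }),
      }),
  };
}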

Our symbolic transformations are similar to in-place transformations, i.e., endogenous model transformations that check a pre-condition on a specific part of the model and return an updated model part. Compared to our own graph transformation framework Henshin Strüber et al. (2017), it is more difficult to match complex graphs in the pre-condition; however, mathematical calculations (which many of our symbolic transformations deal with) are much easier to express.

Finally, our React-based frontend follows the projectional-editing approach based on the Flux pattern Gackenheimer (2015) which blends well with the aforementioned immutable models. The parsing of mathematical expressions is realized by a recursive descent parser with subsequent stages for name resolving and type inference. The resulting syntax trees are represented with immutable algebraic data types, which simplifies the implementation of symbolic transformations and improves testability.

In summary, our experiences with the chosen technologies for our specific context have been generally good and we enjoy the gained flexibility and the power of the ecosystem, both in terms of development infrastructure as well as libraries.

7 Evaluation

The purpose of our conducted evaluation is to examine the applicability of Iris to a real-world innovation case by involving domain experts. This involves assessing the suitability of Iris for the realization of tasks regarding the modeling and analysis of innovations as well as identifying general potential for improvement. Thus, we ultimately examine Iris with respect to an inherent characteristic of second generation technology roadmaps, namely that, according to Letaba et al. Letaba et al. (2015), they enable the comparison between current technologies and potential innovations. Therefore, we first investigate whether the concepts of Iris are fundamentally suitable to model aspects of the development of an innovation. Second, we investigate how domain experts perceive the usefulness of our modeling approach. To achieve these goals, this section describes our research questions and case study design, the materials used, the execution, threats to validity, and the results of the evaluation.

7.1 Research Questions and Study Design

In order to clarify the objectives of our evaluation, we pose the following research questions:

  1. (RQ1) Does the Domain-Specific Language (DSL) of Iris support modeling and analyzing relevant aspects in the development of an innovation?

  2. (RQ2) How do domain experts perceive the support of Iris in modeling and assessing potential innovations?

With research question RQ1, we investigate to what extent Iris offers the possibility to capture and analyze the information necessary for the development and evaluation of an innovation. This includes, for example, the structural design of involved components, their dependencies, requirements, and properties. The aim is to identify weaknesses with regard to the maturity of the meta-model in Iris as well as its analysis capabilities. Research question RQ2 focuses on exploring the opinion of domain experts regarding the suitability of Iris for developing innovations. This includes an adequate and comprehensible representation of relevant information for domain experts. For this, qualitative feedback from domain experts plays a crucial role, as they are best able to evaluate how Iris compares to previously used roadmapping tools. From this, it can be deduced whether they consider our approach for technology roadmaps to be useful, which would improve its chances for adoption in industrial practice.

Our methodological approach to answering these research questions is reflected in our design and execution of the evaluation. We decided to implement the evaluation as a case study, as this is very well suited for the industrial evaluation of software engineering methods and tools Runeson et al. (2012). It enables an interactive, iterative and flexible approach, which we implemented in the form of a guided expert evaluation. For this reason, we formed focus groups Singer et al. (2008) from different levels along the automotive value chain, i.e., OEM, Tier 1, and semiconductor manufacturers (SCMs), for the purpose of data collection. Employees from three different companies participated in these focus groups. The task suitability of the Iris tool and missing concepts were discussed during these focus groups with the participating domain experts using a real-world example from industry. For this purpose, we used historical data from the prototypical development of a smart sensing fuse (cf. running example described in Section 3), from which we extracted the information about the innovation and the different considered alternative solutions as well as the used innovation process. These two aspects are described in more detail in the following subsection.

In our conducted guided expert evaluation, the following four roles were involved: The operation is taken over by a (1) tool operator who was involved in the development of the tool. We consider this to be appropriate because, on the one hand, the evaluation of the usability of the Iris tool was not the main focus of the study and, on the other hand, tool training would have meant additional effort for the domain experts from industry, for which the various participants could not provide resources to the same extent. In addition, due to the fact that the tool is operated directly by the tool developer, any usability difficulties that may arise in the operation of the tools do not cause irritation among the domain experts or distort the results of the actual measurement objectives – the task suitability of the tool, the assessment of the implemented concepts as well as the identification of missing concepts.

In addition to the role of the tool operator, our study design also includes the role of (2) moderator and (3) minute taker. Hence, the task of the moderator is to guide the evaluation and ask purposeful questions that are conducive to achieving the evaluation goal. For this purpose, the moderator has intensively studied the demonstrator beforehand in order to stimulate the thought process about the demonstrator among the domain experts during the workshops. While conducting the study, participants are asked to express their thoughts and comments regarding positively or negatively perceived aspects regarding the Iris tool. The minute taker has the task of recording the insights gained during the evaluation and the statements made by the domain experts. Furthermore, notes on technical aspects are taken, e.g., how well a specific aspect could be handled by Iris.

And finally, the participants from the automotive industry attending the focus group meetings took on the role of (4) domain experts.

7.2 Study Material

Since it is our goal to evaluate the suitability of our tool and the realized concepts in terms of supporting the identification of the need for innovation or not yet existing technical solutions, we chose the smart sensing fuse as our object of study (cf. Section 3). To set up a realistic evaluation setting and utilize a suitable modeling artifact, we collected information about the past technical innovation process of the smart sensing fuse with a focus on required innovations, involved stakeholders along the automotive value chain, and challenges that arose during the development process. To this end, we conducted multiple workshops with our partners from the automotive industry before the actual evaluation. As a result, we were able to identify the involved value chain stakeholders as well as a set of evaluation scenarios (cf. Figure 10) that reflect the historical innovation development process and thus form the basis of our case study. Furthermore, we composed a collection of materials containing context and technical details that each study participant received before the study was conducted.

Figure 10: Derived process of the past smart sensing fuse development

Involved Value Chain Stakeholders. The idea for the smart sensing fuse was developed jointly by an OEM and a Tier 1 and was mainly driven by the latter. Since a smart sensing fuse relies on a semiconductor, SCMs also contributed during the technical innovation process. In consequence, we required representatives from each type of manufacturer.

Evaluation Scenarios. Based on the collected information about the technical innovation process, we derived six evaluation scenarios (ESs) shown in Figure 10. An ES should simulate an excerpt of the whole technical innovation process addressing specific goals. For each ES we identified and listed the value chain partners involved, technical as well as content objectives, concrete tasks that should be completed within the ES, required artifacts (e.g., models created in a previous ES) which should serve as input, and the expected output of an ES, see Table 1. While content objectives describe which part of the technical innovation process should be done, technical objectives focus on features of Iris that should be used especially.

Table 1 summarizes, for each evaluation scenario, the involved value chain stakeholders (OEM, Tier 1, SCM) as well as the content and technical objectives.

ES1. Content objectives: Based on future trends such as reducing a vehicle's CO2 emissions or saving installation space, the OEM should derive the need for the innovation "smart sensing fuse". Technical objectives: The OEM should systematically and in a structured way capture the need for an innovation, describe the system context, and derive requirements to the Tier 1.

ES2. Content objectives: The Tier 1 develops different solution concepts for a smart sensing fuse. At that point of time, available semiconductors are not able to handle currents which are required by that specific car application. Technical objectives: The Tier 1 should import the initial requirements by the OEM into Iris, define an initial solution space, identify the need for an innovation, and derive requirements to the SCM.

ES3. Content objectives: The SCM constructs different approaches that could satisfy the requirements of the Tier 1. Technical objectives: The SCM should import the initial requirements by the Tier 1 into Iris, define an initial solution space, and develop a first idea how to address the need for innovation. The SCM's broad idea should be exported and sent back to the Tier 1.

ES4. Content objectives: Now, the Tier 1 is able to develop three concrete solutions (a discrete, a partially integrated, and a fully integrated circuit). Furthermore, possible added values of the smart sensing fuse compared to a blade fuse should be part of the KPI benchmark. Technical objectives: Based on the idea for an innovation by the SCM, the Tier 1 should evolve its solution space. Solution options should be refined and benchmarked on the basis of internal KPIs.

ES5 a/b. Content objectives: By consulting with the OEM and the SCM, the Tier 1 is able to develop application-specific solutions. Possible problems and challenges should be identified as early as possible. Technical objectives: The Tier 1 discusses, refines, and evolves the developed innovation together with the OEM (5a) and the SCM (5b). The functionality of Iris to analyze a model should support "what-if" discussions or discussions of alternatives in general.

Table 1: Summary of involved stakeholders, content, as well as technical objectives for each evaluation scenario

We intend that from ES1 to ES5 the model of the smart sensing fuse is iteratively evolved. For this reason, at the end of each ES, the resulting model should be exported and serve as an input for the subsequent ES. Figure 10 illustrates the derived ESs and their order of execution. Each blue box represents and briefly summarizes the activity of an ES. An arrow between two ESs represents a communication and information flow. ES1 to ES3 mainly focus on the initial modeling of a solution space and requirements per manufacturer. The information flow can be described as top-down. In each ES mentioned before, only a single manufacturer is involved. In contrast, in ES4 and ES5a/b the communication flow is cyclic. Now, the OEM and the Tier 1 (ES5a) or the Tier 1 and SCM (ES5b) discuss and evolve the model together within the same session. Especially in ES5a/b, the features of Iris to analyze a model and support "what-if" analyses should be used.

7.3 Execution

The execution of our study was based on the identified ESs for the smart sensing fuse (cf. Figure 10). We were able to recruit five domain experts from the automotive industry: 1) From the OEM, three domain experts participated who have relevant knowledge but were not involved in the past technical innovation process. 2) From the Tier 1, the main engineer of the smart sensing fuse participated. 3) From the SCM, an engineer participated who is familiar with the domain and also works cross-sectionally as a development process methodologist.

Each ES was addressed in a separate session, which was held virtually via a video conference system. Every session was attended by the moderator, the minute taker, the tool operator for Iris, and the domain experts from one or two companies according to Table 1.

The study was conducted over a period of two weeks in ten sessions in August 2020. Each session lasted between 1 and 2.5 hours. Before every session, we handed out the collection of material describing the smart sensing fuse, a short video describing the features of Iris from a user’s perspective, and the textual description of the evaluation scenario (cf. Table 1) to the participants.

7.4 Threats to Validity

Our study is subject to some threats to validity, which we explain below. A threat regarding construct validity is the different level of knowledge of the respective participants (i.e., the domain experts) concerning the considered smart sensing fuse. We tried to reduce this risk by providing all participants with information about the smart sensing fuse and its historical innovation process before the respective sessions. Nevertheless, insufficient preparation of the participants, which takes place individually before the respective sessions, cannot be completely excluded. Moreover, there is a threat that the case under consideration does not correspond to real practice. We countered this by selecting the smart sensing fuse as a representative and real example from the automotive industry and reconstructing its innovation process together with experts in the field to conduct our study. Furthermore, we avoided a biased view or evaluation of our tool by recruiting different domain experts along the value chain for our study. The lack of prior knowledge of the tool by the domain experts may also have a negative impact on the study. In order to counter this threat, we opted to utilize expert tool users, such that the domain experts could focus on the modeling itself.

The internal validity is limited by the predefined phases of the innovation process, which were based on historical data. In this respect, the participants are influenced by the provided material as well as by the guided focus groups, since it is conceivable that participants relied too much on these guidelines and were thus possibly inhibited in creating new situations. In contrast, the preparation of the historical innovation process offered the opportunity to discuss and reflect on real problems that had arisen. In addition, we cannot exclude the possibility that holding the focus group meetings in the form of video conferences may pose a threat. While in a face-to-face meeting it might be more obvious to the moderator whether a participant wants to comment on a certain aspect, non-verbal communication is sometimes difficult to recognize via video conference. Moreover, influences on the results during the analysis were reduced by having two independent researchers evaluate the collected data and combine them into problem classes, which were then discussed.

Case studies generally have a low external validity due to their focus on individual cases. Hence, there exist some threats in this regard. With five domain experts, we have only a small number of participants, which is not necessarily representative. Nonetheless, despite the difficult availability of domain experts from industry, we still managed to get domain experts from three different companies, representing all roles along the value chain, to participate in our study. In this respect, it should be noted that the majority of the participants did not develop the smart sensing fuse themselves and were therefore required to make ad-hoc considerations about its requirements during the sessions. The advantage of these participants is that they are not influenced by the historical events and are therefore not biased in their thinking.

A threat to the reliability of our study is posed by the questions asked by the moderator during the sessions. Since these questions were not prepared, but arose from the respective situation, and the sessions themselves could not be recorded due to confidentiality restrictions on the part of the industry participants, it would not be possible to reproduce the sessions identically. The minutes only contain the identified comments of the experts, without any judgement of the minute taker. However, the reproducibility of the extracted results is guaranteed, since the minutes were used as their basis.

7.5 Results

Based on the minutes recorded during the sessions by the minute taker, two independent researchers analyzed the data and discussed their results. Thereafter, the researchers structured the identified problems and potential improvements mentioned by the domain experts. In a joint meeting, the collected results were consolidated. In this section, we present the results of the evaluation as follows: 1) We describe the result artifacts, i.e., the developed models (cf. Section 7.5.1). 2) We present identified technical findings that form the basis for answering research question RQ1 (cf. Section 7.5.2). 3) Furthermore, we present the qualitative feedback from the domain experts about the usefulness of our approach w.r.t. research question RQ2 (cf. Section 7.5.3).

7.5.1 Iris Models

Within the evaluation scenarios 1 to 5 (see Section 7.2, Figure 10), different models were created specifying relevant information and representing the solution space exploration at the OEM, Tier 1 and SCM levels. In total, the models consist of about 349 elements, where 43 are components, 211 are properties, 52 are requirements, 31 are notes, and the remaining elements are distributed among the other element types.

On the one hand, the models created during the evaluation represent the OEM's requirements for a new type of smart sensing fuse for various electrical consumers, such as starter, rear headlamp, gear oil pump, sound system, front loader, etc. On the other hand, these requirements for the EFuse are forwarded to the Tier 1, so that the latter can build up two solution spaces: an implementation by means of an integrated highside switch and one with a MOSFET plus ASIC. For the implementation of a switching element with protection function and reverse protection, the Tier 1 set up a benchmarking with regard to three concrete solutions available on the semiconductor market. This benchmarking by the Tier 1 was based on solution spaces that the SCM communicated to the Tier 1, including two smart power switches and one integrated gate driver. These alternative solutions were compared based on a total of 19 properties (e.g., over temperature protection, surface, rated current, switching time, time availability) and four basic requirements (reverse protection, cold crank pulse duration, big capacity load, intrinsic protection), of which only one solution adequately met all needs.

7.5.2 Technical and Conceptual Findings Regarding the Implemented Metamodel (RQ1)

Overall, the Iris tool proved to be robust during the entire evaluation, so that no system crashes occurred. The created models could be saved and reloaded without loss of information. Nevertheless, during the evaluation as well as during a preceding intensive test phase, a total of 82 technical findings were uncovered, most of which were resolved before the evaluation began. These included, for example, deficiencies with regard to the interaction with the user interface, such as problems when moving blocks, scrolling, entering spaces, or zooming, as well as display errors and problems with key combinations.

Basically, during the evaluation sessions, the various pieces of information of the historical EFuse demonstrator could be captured and presented at different levels of the value chain using Iris. This mainly includes the definition of components, their interrelationships as well as the abstract textual specification of requirements from the OEM side, such as "can reconnect loads" or "can measure current flow". We would like to emphasize at this point that Iris does not claim to provide the full functionality of requirements management tools, as this is not decisive with regard to feasibility analyses in the early phases of an innovation's development. Furthermore, especially on the side of the Tier 1, properties could be defined that further specify the requirements, such as lower bounds on the capacity load ("big capacity load >= ...") or upper bounds on the cold crank pulse duration ("cold crank pulse duration <= ..."). Iris also enabled calculations to be performed, such as the surface area and the required cable cross sections. Hence, requirements and properties could be defined and their fulfillment determined using the Iris-internal solver. In addition, Iris made it possible to standardize redundant information through the use of inheritance relationships and thus reduce the scope of the model.

In addition, the participation of industry partners during the evaluation also resulted in the identification of some complementary conceptual findings. This includes, for example, a previously missing possibility to specify conditional existences of blocks or implications. For example, a discrete MOSFET requires temperature monitoring as opposed to an integrated solution. This implies a backward communication of requirements from the SCM to Tier 1 regarding the need to ensure temperature monitoring. This means that the existence of certain components implies the existence of other components, as in this example of a temperature sensor. Another example in the context of the EFuse is the material of the EFuse’s housing. For better heat dissipation, the Tier 1 may install a housing made of aluminum. However, this implies that the OEM must also ensure that elements that realize heat exchange are included in the design of the system. This requires the possibility of bidirectional communication of requirements and conditions of existence between Tier 1 and OEM.

Moreover, change management mechanisms are currently missing in Iris. This means that exporting and updating along the value chain is currently technically possible, but there are no mechanisms to make changes traceable for the various partners involved and to inform them of a change that has been made. We observed this during the evaluation because it was difficult for different partners along the value chain to figure out which elements were changed, added, or removed after re-importing a model. This reveals that change detection and difference visualization mechanisms are needed and also desired by domain experts.

Furthermore, in the context of solution space evaluation, the participants in the evaluation considered it interesting to be able to assign a confidence level to the values and value ranges communicated by a partner. It should be possible to actively communicate how sure the respective modeler or domain expert is about the specified value: is it a hard fact or a rough estimate? Moreover, the domain experts considered a priority level for key performance indicators (KPIs) to be useful. Currently, Iris lacks a possibility to weight internal KPIs so that the solver can automatically select the best solution evaluated according to individual KPIs. Regarding this, the importance of KPIs must also be taken into account, because not all expected properties are equally important. Therefore, a possibility to classify KPIs, such as "Must be fulfilled", "Possibly fulfilled", "Nice to have", is missing in the context of the solution space evaluation.

Based on the findings presented above, we can now derive an answer to research question RQ1. In the evaluation, we were able to map relevant aspects of the development of an innovation, and therefore we believe that Iris fundamentally supports the modeling of innovations. This means that the concepts provided in Iris, such as components, their interrelationships, inheritance hierarchies, properties, and requirements, provide a good basis for modeling innovations. Therefore, the Iris metamodel was sufficient for modeling these aspects, as it was possible to model the thoughts of the domain experts together with the tool operator using the DSL of Iris during the evaluation sessions. Nevertheless, there is potential for extending the concepts already implemented (e.g., by adding change management mechanisms or confidence levels). It should also be noted, however, that it would be necessary for domain experts to undergo training in the usage of Iris beforehand, so that they would be able to create such models on their own.

7.5.3 Qualitative Feedback From the Domain Experts (RQ2)

The qualitative feedback on Iris from the domain experts is generally positive. Compared to MS Excel or MS PowerPoint, which are usually used for representing roadmaps in an industrial context, as reported by the participating domain experts, Iris offers the domain experts "the advantage of being able to represent relationships more quickly, since there are predefined elements such as ready to use blocks with a well-defined semantic". Iris also "enables the presentation of complex facts and relationships, which provide information that can also be subjected to respective benchmarking". Furthermore, the domain experts appreciate that the application of the Iris tool opens up the possibility of considering several solution spaces in parallel, which is not the case in current practice. As the domain experts told us, the solution spaces are too large or there are too many solutions to consider in detail with the techniques available today. In contrast, the participating domain experts assume that solution spaces can be constructed "more easily and quickly" with Iris. Thus, Iris offers the possibility to evaluate even complex systems and to consider edge solutions, which in turn can lead to insights that would not have been considered in today's context. For the domain experts, it was possible to perform analyses based on the data received from other partners and thus to benchmark alternative solutions as well (in particular benchmarking on the part of the Tier 1). This made it possible to compare them clearly.

Moreover, collaboration between the industry partners and the exchange of information between these partners along the value chain was possible, at least to a limited extent. However, the participants noted that in a real-world scenario exporting a whole model is not compliant and might reveal intellectual property. Thus, a way to export only certain aspects of a model or export some kind of a black box needs to be developed. This feedback from domain experts is related to the addressed need for implementing change management mechanisms, as already described in the previous section (see Section 7.5.2).

We think that these statements of the domain experts represent an answer to research question RQ2. The domain experts give Iris positive feedback compared to the tools used so far, especially regarding the analysis of solution spaces. The basis for this is also the good comprehensibility of the models and the elements they contain. The domain experts also positively emphasized the good comprehensibility of the composition of calculations, whose highlighting (cf. Figure 4) is based on the familiar behavior of Excel. This enabled the domain experts to quickly find their way around.

In summary, the evaluation showed that Iris enabled the various contributions of the partners along the value chain to be captured separately and in a structured manner. For this purpose, the modeling language implemented in Iris and derived from the meta-model turned out to be suitable for capturing the thought processes of the domain experts in a structured manner. The system structure, which is relevant for the development of a smart sensing fuse, could be modeled with all innovation-relevant components, requirements, and properties according to the underlying historical demonstrator using Iris. It was possible to use measurement units and formulas and to define solution spaces at the various levels along the value chain, which is also reflected in the positive feedback from the domain experts.

8 Conclusion and Future Work

In this paper, we presented a model-based approach based on a domain-specific modeling language for the creation of technology roadmaps as well as a corresponding interactive user interface. The modeling language supports time-dependent properties and the time-dependent availability of structural elements and thus allows the description of a range of various valid models over time. In addition, the language supports interval arithmetic and ternary logic to model uncertainty, such as uncertainties of the domain experts which are expressed in inaccurate values, as they usually occur in the course of an innovation process. The solver integrated into Iris solves the global system of equations spanned by properties, requirements, and KPIs using repeated symbolic transformations enabling the use of traceability information in the visualization. In addition, the realized visualization and interaction concepts in Iris support the evaluation of time dependencies within properties and structural components of a technology roadmap. This allows roadmap engineers to easily substantiate technology decisions and apply updates to the technology roadmap.

Furthermore, we have shown in the evaluation that the modeling language and the visualizations in the Iris tool are suitable for capturing the thought processes of industrial domain experts during an innovation process in a structured and understandable manner. It turned out that the visualizations and interaction concepts implemented in Iris are intuitively understandable and applicable for roadmap engineers, who are domain experts but not modeling experts.

However, the modeling itself has not yet been performed by the domain experts themselves, which necessitates additional empirical work. In the future, we will therefore examine the suitability of our embedded expression language with respect to the mapping of relevant concepts from a roadmap engineer's point of view. To enable this, we are currently working on a modeling methodology for Iris that supports domain experts in deciding how to use the presented domain-specific modeling language and our tool. Project partners have recently published a modeling methodology in Fakih et al. (2021) that is independent of the specifics of our domain-specific language. We are currently collaborating with them to identify how well their modeling methodology aligns with our language. A particularly interesting aspect is how detailed each company in a value chain models its relevant aspects, as well as when and how often models are exchanged between partners in the value chain.

Finally, we will take a closer look at the visualization and exploration of the model-wide solution space.

Acknowledgements

This work has been developed in the project GENIAL! (reference number: 16ES0875). GENIAL! is partly funded by the German Federal Ministry of Education and Research (BMBF) within the research programme ICT 2020.

Further, this work was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 453895475.

We would like to thank our academic and industrial project partners for extensive discussions and valuable insights.

In the course of this article, several screenshots of Iris are shown. The screenshots include icons by the following authors:

Appendix A Generated Constraint System for Figure 1

AutonomousDriving.?requirement1(T) = PowerSupply.?availability(T)
AutonomousDriving.?requirement2(T) = ErrorDetection.?availability(T)
AutonomousDriving.?availability(T) = AutonomousDriving.?requirement1(T) & AutonomousDriving.?requirement2(T)
AutonomousDriving.?replacement(T) = -1
Vehicle.TotalCurrent(T) = Headlights.Current(T) + ProcessingUnits.Current(T)
Vehicle.?availability(T) = Headlights.?availability(T) & DetectionSoftware.?availability(T) & Autopilot.?availability(T) & ProcessingUnits.?availability(T) & Fuse.?availability(T)
Vehicle.?replacement(T) = -1
Headlights.Current(T) = 5A
Headlights.?availability(T) = true
Headlights.?replacement(T) = -1
DetectionSoftware.TFLOPS(T) = linear(T, Jan2021, 100, Jan2035, 200)
DetectionSoftware.?availability(T) = true
DetectionSoftware.?replacement(T) = -1
Autopilot.TFLOPS(T) = 50
Autopilot.?availability(T) = true
Autopilot.?replacement(T) = -1
ProcessingUnits.PowerConsumption(T) = ((DetectionSoftware.TFLOPS(T) + Autopilot.TFLOPS(T))/80) * 200W
ProcessingUnits.Current(T) = (ProcessingUnits.PowerConsumption(T) / 12V)[A]
ProcessingUnits.?availability(T) = true
ProcessingUnits.?replacement(T) = -1
Fuse.?kpi1(BlFuse,T) = num(BlFuse.Watchdog(T))
Fuse.?kpi1(EFuse,T) = num(EFuse.Watchdog(T))
Fuse.MaxLoadCurrent(T) =
         if Fuse.?replacement(T) = 1 then BlFuse.MaxLoadCurrent(T)
    else if Fuse.?replacement(T) = 2 then EFuse.MaxLoadCurrent(T)
    else [-inf..inf]*1A
Fuse.Watchdog(T) =
         if Fuse.?replacement(T) = 1 then BlFuse.Watchdog(T)
    else if Fuse.?replacement(T) = 2 then EFuse.Watchdog(T)
    else maybe
Fuse.?requirement1(T) = (Fuse.MaxLoadCurrent(T) >= Vehicle.TotalCurrent(T))
Fuse.BatteryVoltage(T) =
         if Fuse.?replacement(T) = 1 then BlFuse.BatteryVoltage(T)
    else if Fuse.?replacement(T) = 2 then EFuse.BatteryVoltage(T)
    else 48V
Fuse.?availability(T) = Fuse.?requirement1(T)
       & if Fuse.?replacement(T) = 1 then BlFuse.?availability(T)
    else if Fuse.?replacement(T) = 2 then EFuse.?availability(T)
    else true
Fuse.?replacement(T) = index_of_max(
   if BlFuse.?availability(T) then Fuse.?kpi1(BlFuse,T) else -inf,
   if EFuse.?availability(T) then Fuse.?kpi1(EFuse,T) else -inf
)
PowerSupply.?requirement1(T) = (Fuse.MaxLoadCurrent(T) >= Vehicle.TotalCurrent(T))
PowerSupply.?availability(T) = PowerSupply.?requirement1(T)
PowerSupply.?replacement(T) = -1
ErrorDetection.?requirement1(T) = Fuse.Watchdog(T)
ErrorDetection.?availability(T) = ErrorDetection.?requirement1(T)
ErrorDetection.?replacement(T) = -1
BlFuse.?requirement1(T) = (BlFuse.MaxLoadCurrent(T) >= Vehicle.TotalCurrent(T))
BlFuse.BatteryVoltage(T) = 48V
BlFuse.MaxLoadCurrent(T) = 50A
BlFuse.Watchdog(T) = false
BlFuse.?availability(T) = BlFuse.?requirement1(T)
BlFuse.?replacement(T) = -1
EFuse.?requirement1(T) = (EFuse.MaxLoadCurrent(T) >= Vehicle.TotalCurrent(T))
EFuse.BatteryVoltage(T) = 48V
EFuse.MaxLoadCurrent(T) = 45A
EFuse.Watchdog(T) = true
EFuse.?requirement2(T) = (T >= Jan2022 + [months(12) .. months(36)])
EFuse.?availability(T) = EFuse.?requirement1(T) & EFuse.?requirement2(T)
EFuse.?replacement(T) = -1
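
To give an impression of how such a constraint system behaves when evaluated at a concrete point in time, the following minimal Python sketch replays a simplified subset of the constraints above. It is not the Iris solver: intervals, ternary logic, units, and the symbolic transformations from Appendix B are omitted, and the EFuse availability date used here is only a hypothetical point inside the interval Jan2022 + [12..36] months given in the listing.

from datetime import date

def linear(t, t1, y1, t2, y2):
    # Simplified two-point variant of the DSL's linear() function; only evaluated inside [t1, t2] here.
    return y1 + (y2 - y1) * (t - t1).days / (t2 - t1).days

def evaluate(t):
    # Time-dependent properties (cf. the DetectionSoftware and ProcessingUnits constraints).
    detection_tflops = linear(t, date(2021, 1, 1), 100, date(2035, 1, 1), 200)
    autopilot_tflops = 50
    power_consumption = (detection_tflops + autopilot_tflops) / 80 * 200   # W
    total_current = 5 + power_consumption / 12                             # A (headlights + processing units)

    # Candidate fuses: (name, MaxLoadCurrent in A, Watchdog, additional availability condition).
    efuse_available = t >= date(2024, 1, 1)   # hypothetical date within Jan2022 + [12..36] months
    candidates = [("BlFuse", 50, False, True), ("EFuse", 45, True, efuse_available)]

    # ?kpi1 = num(Watchdog); ?replacement = index_of_max over the KPIs of the available candidates.
    best_name, best_kpi = None, float("-inf")
    for name, max_current, watchdog, extra_ok in candidates:
        available = extra_ok and max_current >= total_current
        kpi = (1.0 if watchdog else 0.0) if available else float("-inf")
        if kpi > best_kpi:
            best_name, best_kpi = name, kpi
    return round(total_current, 1), best_name

print(evaluate(date(2025, 1, 1)))   # (42.2, 'EFuse'): both fuses satisfy the current requirement, the EFuse wins on the watchdog KPI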

Appendix B Symbolic Transformations

  • Constant Folding. Subexpressions involving only constant values are reduced by evaluating them as far as possible.

  • if max(2, 3) >= 2.5 then 1 else 2  ⟹  1

  • Propagation. References are replaced by their inferred values.

  • x = [1..5] & y = x  ⟹  x = [1..5] & y = x & y = [1..5]. Note that in the case of uncertain intervals the original subexpression y = x remains, in case stronger bounds for x are inferred later on.

  • Neutral Element Removal. Neutral parts of expressions are removed.

  • x = (y & true)  ⟹  x = y

  • Reordering. Commutative operations are reordered to achieve a normal form where possible and allow for subsequent simplifications.

  • x + 2 = -x + 3  ⟹  x + x = 3 - 2

  • Raising/Lowering. Subexpressions are propagated up and down the syntax tree to allow for subsequent simplifications.

  • 2 * (if x then 3 else 4)  ⟹  if x then 2 * 3 else 2 * 4

  • Merging. Identical subexpressions are merged where possible.

  • y + y  ⟹  2 * y

  • if z then x else x  ⟹  x

  • (x < y) & (if x < y then z1 else z2)  ⟹  (x < y) & z1

  • Special Case Detection. Various properties of functions are used to simplify commonly occurring cases.

  • linear(x, x1, 0, x2, 1) >= …  ⟹  true
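
The following small Python sketch illustrates the flavor of two of these transformations, constant folding and neutral element removal, on a toy expression representation. It is only an illustration of the rewriting idea, not the transformation engine used in Iris.

# Expressions are either constants (numbers, booleans), reference names (strings),
# or nested tuples of the form ("op", arg1, ...).
def simplify(expr):
    if not isinstance(expr, tuple):
        return expr
    op, *args = (expr[0], *map(simplify, expr[1:]))
    constants = all(isinstance(a, (int, float, bool)) for a in args)
    if constants:                          # constant folding
        if op == "max": return max(args)
        if op == ">=":  return args[0] >= args[1]
        if op == "&":   return args[0] and args[1]
        if op == "if":  return args[1] if args[0] else args[2]
    if op == "&":                          # neutral element removal: x & true -> x
        if args[0] is True: return args[1]
        if args[1] is True: return args[0]
    return (op, *args)

# if max(2, 3) >= 2.5 then 1 else 2  ==>  1
print(simplify(("if", (">=", ("max", 2, 3), 2.5), 1, 2)))
# x & true  ==>  x   (x stands for an unresolved reference)
print(simplify(("&", "x", True)))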

Appendix C More Details on the Domain-Specific Language

C.1 General Syntactic Elements of the Expression Language

Our textual expression language provides the following general language elements. Sections 4.3.1 and 4.4 extend this set of elements with dates, durations, and uncertainty.

  • Boolean constants, i.e., true and false

  • Numeric constants, optionally suffixed by an SI unit, e.g., 70.5, 12V, 400mA

  • The constant inf, denoting positive infinity

  • Interval expressions, i.e., [lower..upper], specifying a closed interval of values that will be propagated through all arithmetic operations. We use intervals to represent uncertain values that are known to not exceed a certain interval, e.g., [1.5..2.5]. Section 4.4 contains a more detailed description of arithmetic operations performed on intervals.

  • Identifiers which reference the properties and blocks specified in the model, e.g., TotalCurrent, or Vehicle.Fuse.Watchdog. If an identifier references a block, the resulting value is the boolean availability of the block. If an identifier references a property, the resulting value is the solver result of the property formula.

  • Arithmetic operators {+, -, *, /, ^}, e.g., 2 * 12V

  • Relational operators {<, <=, >, >=, ==, !=}, e.g., x > y

  • Boolean operators {&, |, !}, e.g., a & !b

  • Parentheses, e.g., a * (b + c)

  • Conditional expressions if c then t else e, which evaluate to t if the condition c evaluates to true, and to e otherwise.

  • Aggregations { SUM, PRODUCT, AND, OR, MIN, MAX, UNION}. An aggregation, like SUM(expr), evaluates expr for all direct children of the block containing the expression, and computes a function, like the sum, over their values. Aggregations are used in roadmap models to construct parent blocks that combine properties of their children in a uniform way, like computing the total current of all subsystems.

  • Utility functions { sin, cos, exp, ln, log, sqrt, min, max, num}. The syntax for function calls is f(arg1, ..., argn). The set of functions was defined based on needs during our tests and evaluations, and can be extended easily.

  • An interpolation function

    linear(x, x1, y1, ..., xn, yn), which computes the linearly interpolated value y at position x in the sequence of (xi, yi)-values. The x-values are usually instantiated with time-based values, which will be introduced in the next section. This linear function in particular could also be defined using nested conditional expressions, but we include it as a concise and intuitive way of specifying values that vary over time. A small illustrative sketch of this interpolation follows this list.
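
As a rough illustration of the linear() semantics described above, the following Python sketch interpolates over an arbitrary sequence of breakpoints. The behavior outside the outermost breakpoints is not specified above; clamping to the first and last value is an assumption made here.

def linear(x, *points):
    # points = (x1, y1, x2, y2, ..., xn, yn) with strictly increasing x-values.
    xs, ys = points[0::2], points[1::2]
    if x <= xs[0]:
        return ys[0]                      # assumption: clamp below the first breakpoint
    if x >= xs[-1]:
        return ys[-1]                     # assumption: clamp above the last breakpoint
    for x1, y1, x2, y2 in zip(xs, ys, xs[1:], ys[1:]):
        if x1 <= x <= x2:
            return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

# A time-dependent property similar to DetectionSoftware.TFLOPS in Appendix A,
# with plain year numbers as a stand-in for the dates used in the DSL:
print(linear(2028, 2021, 100, 2035, 200))   # 150.0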

C.2 Arithmetic and Relational Operations on Dates and Durations

In our language, we use the following arithmetic and relational operations and typing rules (where ⋄ represents one of the relational operators <, <=, >, >=, ==, !=):

Note that date - date computes the duration between the two dates, whereas other combinations, like date + date, are explicitly disallowed as they would also depend on the neutral element of the underlying calendar system (i.e., year 0). duration + date, however, is allowed to ensure commutativity of the + operator.
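
These typing rules mirror the behavior of date and duration arithmetic in many programming languages. As an illustration (not the Iris implementation), Python's datetime module enforces analogous rules:

from datetime import date, timedelta

d1, d2 = date(2022, 1, 1), date(2025, 1, 1)
dur = timedelta(days=365)

print(d2 - d1)      # date - date yields a duration
print(d1 + dur)     # date + duration yields a date
print(dur + d1)     # duration + date is allowed as well (commutativity of +)
print(d1 < d2)      # dates can be compared with relational operators

try:
    d1 + d2         # date + date is rejected, analogous to the rule above
except TypeError as err:
    print("not allowed:", err)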

C.3 More on Interval Operations

This subsection provides a more detailed look at the behavior of interval operations in the domain-specific language. The behavior should be intuitive to those who are already familiar with interval operations.

Let a, b, c, and d be numeric constants, as defined in Section 4.2, and let ∘ represent any of the basic arithmetic operators +, -, *, and /:

[a..b] ∘ c = [a..b] ∘ [c..c]
[a..b] + [c..d] = [a+c .. b+d]
[a..b] - [c..d] = [a-d .. b-c]
[a..b] * [c..d] = [min(ac, ad, bc, bd) .. max(ac, ad, bc, bd)]
[a..b] / [c..d] = [a..b] * [1/d .. 1/c]   (if 0 is not contained in [c..d])

After each operation the invariant a <= b is ensured by swapping the resulting bounds a and b if necessary. The split interval resulting from a division by an interval that contains zero will be joined in order to create a single consecutive interval: [-inf..inf]. Therefore a division like [..]/[..] yields the result (..), because it leads to the following multiplication