
Using DevOps Toolchains in Agile Model-Driven Engineering

11/23/2021
by Jörn Güy Suß, et al.

For Model-Driven Engineering (MDE) to become Agile, it has to be usable with and for DevOps as the technical basis of Agility. We describe our experiences in implementing and applying the BB8 architecture, which provides a means to reuse Eclipse-based MDE toolkits within an integrating workflow system. We describe the use of the architecture within the Gitlab DevOps system and discuss future research fields that emerge as a result of our implementation.


1 Introduction

MDE has a strong representation in teaching and research contexts, while in industry it has mainly found success in areas with firmly established domain languages (DBLP:conf/models/WhittleHRBH13). For MDE to be more widely accepted into Agile teams, more modern development practices like DevOps must be considered.

To facilitate further adoption, we must address the current barriers to adoption and enable Agile MDE using DevOps. Modern DevOps practices combine tools into pipelines to improve quality and increase release-cycle speed. This article provides an experience report and architecture description reflecting our company's journey to make MDE Agile, using DevOps and toolchains as a vehicle.

When we began this project, we found that industrial practice and requirements in our company were quite different to those for teaching and research: Artifacts need to be rigorously built, tested, integrated and deployed in an efficient and fast manner. These requirements are also at the heart of the Agile approach to software production that we practice. It is worth pointing out that Agile approaches have always been strongly connected to automation or mechanisation to support a fast turn-around, and provide feedback on quality in small increments (fowler05integration).

Figure 1: MDE Grand Challenges from DBLP:journals/sosym/BucchiaroneCPP20

In the model-driven community, discussions about the causes of the lack of general industrial adoption have been ongoing for a long time (DBLP:conf/models/StraetenMB08; DBLP:journals/sosym/BucchiaroneCPP20). However, while the community seems to look at the gains and successes of Agile approaches with some envy, it pays less attention to the technical foundations that enable them: Figure 1 shows a taxonomy of what community consensus perceived as the Grand Challenges to the adoption of MDE in 2020. Continuous Integration (CI) receives a marginal mention; Continuous Deployment (CD) and DevOps, the practices and operations that drive Agile processes at large these days, are not present at all. Especially DevOps - the programming of the process that produces and deploys software quickly, efficiently and with timely, pertinent feedback to stakeholders (DBLP:journals/usenix-login/Debois11) - is pivotal to a contemporary software developer's workflow.

From our perspective as a software development company, current MDE tooling does not meet the technical requirements to deliver Agile operations in industrial practice. This article posits that if the MDE community wants to have practical industrial relevance, and operate in an Agile manner, it must consider DevOps and CI/CD a first-class concern. It must also provide for horizontal reuse, allowing all MDE tooling to be reused within programmed workflows.

This prompts the question of why it is challenging to horizontally integrate and reuse existing MDE tooling, and how one would meet that challenge. In the remainder of this section, we investigate the impediments we have experienced in detail. In section 2 we describe how our architecture is designed to meet the challenges using a combination of proven off-the-shelf components and minimal additions.

1.1 Challenges of Using Eclipse

The common foundation for most MDE offerings is the Eclipse Modelling Framework (EMF); EMF is a light-weight Java-based descendant of the Meta-Object Facility (MOF), the original meta-modelling service embedded in the complex Common Object Request Broker Architecture (CORBA). The dependency on EMF implies that most tool-sets are also expressed within the Eclipse Rich Client Platform (RCP) or have some dependency on it.

RCP is a challenging platform to write reliable software for. It is composed of many components, has evolved over 20 years, and relies substantially on frameworks and declarative mechanisms. Compounded by thin documentation and challenging access to source code, Eclipse is a major hurdle in the adoption of EMF-based software.

At the same time, RCP is feature-rich and its User Interface (UI) facilities are directly linked with generated artifacts that EMF produces. This more often than not leads to close coupling of the model-driven tools to the Eclipse UI. For research demonstrators or teaching use, this does not cause a problem and even provides an advantage to productivity due to tight and direct reuse.

If we want to retain access to functionality that is currently implemented, we need to accept Eclipse RCP as a platform. Most MDE toolkits offer some automation. But how do we co-ordinate the execution of all MDE toolkit functionality within a single platform? If we could show this to be practical, we could commercially combine high-quality existing offerings to create exciting new solutions. Showing this functionality might eventually entice MDE tool developers to invest in separating their user interfaces from their service functions and models, ultimately leading to a pool of Agile, DevOps-capable MDE toolchain components.

1.2 Ant in Eclipse: A Minimal Vehicle for MDE CI/CD

Historically, Eclipse has an interactive, declarative build system. It consists of a workspace file system that detects changes and Builders that respond to them, updating derivative files and views in various ways. This mode of development was also initially used for the development of Eclipse components themselves. However, it quickly became clear that this approach did not scale.

Consequently Eclipse’s Plugin Development Environment (PDE) was extended to be controlled by the Ant build system. At that time, Ant was the standard build facility for Java-associated projects. Since PDE made use of Eclipse internals, an Eclipse host had to be running so the Ant build tasks for Eclipse build automation could use the underlying framework-based facilities. This resulted in the design of a headless Eclipse host application known as the AntRunner that thinly wraps around the Ant execution engine.

Since the following discussion heavily depends on an understanding of Ant, we will give a limited characterisation here. Ant is a build system that exposes Java-written functionalities in an XML document using elements called Tasks. Tasks are implemented as Java classes that are dynamically loaded at runtime. Task instances are grouped into sequences of functionality called Targets that represent desirable outcomes to the end-user. Targets may depend on each other. This makes Ant build specifications directed multi-graphs. The user selects a Target and the engine traverses and executes prerequisite Targets depth-first. The Ant runtime has a map-like storage for global state and a facility intended for asserting facts in so-called Properties. Task instances can interact with the global state and make assertions, affecting execution. Tasks may dynamically invoke targets. This property, combined with the global mutable state, means that Ant is effectively Turing-complete.

To run an Ant build, a user minimally has to provide an Ant build file, any initial properties, the list of Targets to build, and implementations for all Tasks that will be invoked in the build execution. Further, Ant has an input-handling mechanism that can synchronously obtain information from a user at runtime. Listing 1 shows an example build file from the Ant online documentation that exemplifies these concepts.

<project name="MyProject" default="dist" basedir=".">

  <description>
    simple example build file
  </description>

  <!-- set global properties for this build -->
  <property name="src" location="src"/>
  <property name="build" location="build"/>
  <property name="dist" location="dist"/>

  <target name="init">
    <!-- Create the time stamp -->
    <tstamp/>
    <!-- Create the build directory structure used by compile -->
    <mkdir dir="${build}"/>
  </target>

  <target name="compile" depends="init"
          description="compile the source">
    <!-- Compile the Java code from ${src} into ${build} -->
    <javac srcdir="${src}" destdir="${build}"/>
  </target>

  <target name="dist" depends="compile"
          description="generate the distribution">
    <!-- Create the distribution directory -->
    <mkdir dir="${dist}/lib"/>
    <!-- Put everything in ${build} into the MyProject-${DSTAMP}.jar file -->
    <jar jarfile="${dist}/lib/MyProject-${DSTAMP}.jar"
         basedir="${build}"/>
  </target>

  <target name="clean"
          description="clean up">
    <!-- Delete the ${build} and ${dist} directory trees -->
    <delete dir="${build}"/>
    <delete dir="${dist}"/>
  </target>
</project>
Listing 1: An example of an Ant build file.

Most MDE tools delivered as Eclipse features include tasks for the AntRunner. They generally use the interactive Ant execution facility built into the IDE itself to provide a simple build-process scripting facility scoped to the tool at hand. Consequently, AntRunner is a minimal point of integration for a large number of EMF/MDE tool-sets that would otherwise require custom coordination written in Java. Examples of components that expose their functionality through Ant include Epsilon, Model Workflow Engine 2 (Xtext), the Atlas Transformation Language, and the Eclipse Modelling Framework core. In addition, Eclipse Launch Configurations can be started from Ant, which enables the use of tooling provided by MoDisco's Workflow component. Finally, there is a fair supply of other software-engineering Ant tasks, ranging from specific system control to general-purpose language facilities.
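To give a flavour of this style of integration, the following fragment sketches how such a workflow might look. It follows the Ant tasks documented for Epsilon's workflow component; the model, file names and script are hypothetical placeholders.

<project name="mde-workflow" default="validate">
  <target name="validate">
    <!-- Load an EMF model under the name "Library"; all file names
         here are hypothetical -->
    <epsilon.emf.loadModel name="Library"
        modelfile="models/library.xmi"
        metamodelfile="models/library.ecore"
        read="true" store="false"/>
    <!-- Run an EOL script against the loaded model -->
    <epsilon.eol src="scripts/report.eol">
      <model ref="Library"/>
    </epsilon.eol>
  </target>
</project>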

1.3 Reusing Ant Automation

However, components integrated via the AntRunner suffer from four main issues that make use of MDE in a CI/CD environment challenging: The components frequently depend on Eclipse UI features, perform slowly due to interpreted and reflective languages, have issues with parallelisation, and have classpath issues due to Ant’s limited isolation. The following paragraphs explain these issues in detail.

As stated, most EMF/MDE tools are written for interactive use in RCP and hence reference the UI facilities and the Eclipse project-management file system, known as the workspace. CI/CD and DevOps inherently use console-based interfaces, and build servers in the cloud do not have graphics cards. Consequently, this mismatch causes issues whenever these Eclipse facilities are required in a headless run but are not available, because they only exist when the UI is active. For example, Epsilon, one of the premier integration kits for MDE, has numerous dependencies from its workflow to the UI. It is possible to compensate for this issue by using an X11 virtual framebuffer, as long as no interactions are required from the graphical interface, but ultimately this is an unsafe procedure and can lead to unexpected locking behaviour whose causes are very hard to diagnose.

MDE toolsets often rely on weakly typed languages executed by reflective interpreters to achieve the flexibility that gives MDE tooling its advantage. Compared to compiled languages, safety and speed are generally poor, so tool performance and code maintenance also scale poorly. Epsilon, for example, uses untyped languages that are reflectively interpreted over an abstracted feature-based metamodel. This occasionally leads to unexpected type errors when typed expressions are used, in turn invalidating the use of types as a general practice.

MDE toolsets are often written with shared state in mind. This is another side-effect of writing for the Eclipse IDE’s UI. It assumes that a single user operates sequentially on a single set of data. Static and global registrations and unsynchronized accesses are common. For CI workflows this has the implication that it is generally difficult to accelerate a build execution by running parts of the work in parallel. Experimenting with Epsilon, we found it usually causes a ConcurrentModificationException when used in an Ant parallel block. This compounds the performance issue, as parallelization would usually be a strategy to compensate for low interpretation speed.
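A minimal sketch of the kind of experiment that surfaced this; the target names are hypothetical, and with shared-state tooling the two branches race on global registries:

<target name="generate-all">
  <!-- Each antcall runs on its own thread; with MDE tasks that share
       global state this typically ends in a ConcurrentModificationException -->
  <parallel>
    <antcall target="generate-frontend"/>
    <antcall target="generate-backend"/>
  </parallel>
</target>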

One of the weaknesses of Ant itself is that it does not manage Tasks' dependencies as components, which can lead to classpath clashes. The Eclipse AntRunner, however, manages dependencies using the Eclipse dependency-management system, which allows Ant to be used in a reliable fashion. Hence this weakness mostly affects Tasks that were written without the Eclipse container in mind; packaging them in Eclipse resolves the problem for such components.
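For Tasks outside the Eclipse container, classpath management stays manual, as in the standard taskdef idiom below; the Ant-Contrib library is used purely as a familiar example.

<!-- Each taskdef can carry its own classpath, but all loaded classes
     still share Ant's runtime, so version clashes remain possible -->
<taskdef resource="net/sf/antcontrib/antlib.xml">
  <classpath>
    <pathelement location="lib/ant-contrib-1.0b3.jar"/>
  </classpath>
</taskdef>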

As a company, we require the full scope and breadth of tools available in the MDE community to create exciting and novel solutions for our customers. Hence, enabling reuse is our overriding architectural goal. To us, this goal is at this time more important than model-specific aspects like those raised in DBLP:conf/models/0001KP20. We aim to compensate for the shortcomings named above through engagement with the community. We hope this will help to shape the discussion. This article is intended as a contribution to this goal.

The following section describes the BB8 architecture we have developed to allow this reuse. (On the name: since our company is based on making coding support systems known as Bots available to the public, and we need to build these internally, we dubbed the system architecture BotBuilder. As it offers twice the variability, speed and access to tools compared to its predecessors, its version number is 8. And we admit a slight nod to Star Wars as well.)

2 BB8: An Architecture for MDE DevOps

The following sections describe the architecture of our system from the inside out, giving a developer's view (sections 2.1-2.3) and an operations view (sections 2.4-2.7).

The developer's view starts with our reuse of Eclipse's Ant support (section 2.1) to run workflows in CI. Section 2.2 describes how workflows can be tested using JUnit. Section 2.3 shows how Ant users can test workflows without requiring knowledge of Java or Eclipse.

The operations view begins in section 2.4 with an explanation of how components of the workflow can be installed and removed. It addresses how a user can obtain the desired MDE toolsets and other tools for running a specialized workflow. Section 2.5 describes how the workflow runtime can be used everywhere, without installing anything locally and with faithful reproduction of results. Section 2.6 describes how we deploy that runtime in the CI system provided by Gitlab. Finally, section 2.7 describes how execution is coordinated within cloud resources.

Figure 2: A model-driven task runs as the result of a commit to Gitlab CI/CD.

2.1 Executing MDE Workflows using AntRunner

As described in subsection 1.2, Eclipse AntRunner is a headless Eclipse application, which is able to execute the Ant build system, while providing access to the Eclipse plugin system and the underlying framework runtime. As noted this allows the reuse of model-driven tooling provided by other projects within the Eclipse scope. It also allows access to all of the EMF facilities.

Also, the PDE/Build system used for creating Eclipse components and applications is available in the AntRunner. While PDE/Build is not a reliable means to build large-scale Eclipse applications, it is capable of building small applications reasonably well. This means that an on-board solution for compiling and packaging model-driven components is in principle available to every workflow, without a need to learn additional build facilities like Tycho or BND.

Finally, the Ant build language can express includes and imports of remote build files, and define extension points, macros and scripts. Used with care, these features allow for the design of compositional builds.
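As a sketch of such composition, the fragment below combines an import with a macro definition; the file and target names are hypothetical, and the imported file is assumed to define the mde.generate target.

<project name="composite" default="build">
  <!-- Reuse a shared fragment of targets and macros -->
  <import file="common/mde-targets.xml"/>

  <!-- A reusable macro for a recurring step -->
  <macrodef name="generate-into">
    <attribute name="dir"/>
    <sequential>
      <mkdir dir="@{dir}"/>
      <echo message="generating into @{dir}"/>
    </sequential>
  </macrodef>

  <target name="build" depends="mde.generate">
    <generate-into dir="out"/>
  </target>
</project>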

2.2 Eclipse-level Testing using AntHarness

If we are using Ant to integrate workflows, we will frequently create Ant Tasks to provide secondary functionality and hence we will be developing and debugging a lot of them. Ant Task classes themselves can be unit-tested with relative ease, but interactions with Eclipse’s facilities can be complex.

To ensure we can verify the behaviour of components in the AntRunner, we have developed a JUnit 5 Extension called AntHarness that allows developers to create declarative tests wrapping the AntRunner, using configuration in Java annotations. Test classes with AntHarness annotations run Ant build scripts inside the AntRunner and make the configuration and runner available after execution has finished; an Eclipse developer can then use JUnit's assertion facilities to verify the outcomes. Since AntHarness is designed to run in the scope of Eclipse Plugin Tests, all of Eclipse's debugging, logging and tracing support can be used with it, including attaching the debugger to the code of the Task under test or tracking an Eclipse Progress Monitor. Figure 3 shows the AntRunner facilities the harness exposes in a context diagram.

Figure 3: A context diagram for the AntHarness.

2.3 User-level Testing using Ant Unit

Java developers will be familiar with JUnit 5, and for them, using the AntHarness is effective. But our real target audience is users of MDE toolkits and workflows, not of the Java language. For them, testing must be easy and should require only a minimum of technological know-how. As we are designing this tooling for model-driven software developers, we think it is fair to assume that the users will have experience with some xUnit-style framework that uses naming conventions to find setups and tests and provides assertions to verify results.

Based on this, we integrated a solution that uses Ant itself to perform integration tests. This allows build executions to be tested using Eclipse's built-in Ant editor and debugger facilities, rather than the Java and Plugin Development environments.

The facility we have chosen is provided by the AntUnit framework. AntUnit transfers the ideas of JUnit to Ant: A regular Ant build script with conventionally named targets is executed dynamically in such a way that it prepares an environment, executes one or more test-case scripts, and validates the results. AntUnit can be combined with powerful model-driven testing facilities, such as Epsilon’s EUnit. This allows for combinatorial testing over models in the context of Ant-driven build flows.

<au:antunit>
  <fileset dir="." includes="touch.xml"/>
  <au:plainlistener/>
</au:antunit>
Listing 2: Invoking an AntUnit-based test from a build file.
<project xmlns:au="antlib:org.apache.ant.antunit">

  <!-- is called prior to the test -->
  <target name="setUp">
    <property name="foo" value="foo"/>
  </target>

  <!-- is called after the test, even if that caused an error -->
  <target name="tearDown">
    <delete file="${foo}" quiet="true"/>
  </target>

  <!-- the actual test case -->
  <target name="testTouchCreatesFile">
    <au:assertFileDoesntExist file="${foo}"/>
    <touch file="${foo}"/>
    <au:assertFileExists file="${foo}"/>
  </target>
</project>
Listing 3: An xUnit-like test expressed as an Ant build file using AntUnit tasks.

2.4 Installing Ant Workflow Features using P2

The previous sections have covered how to execute workflows that use model-driven components of several stacks, but have not touched on how the various features required to produce a combined execution system are installed. In our approach, this task is performed by Eclipse's own installation system: the Provisioning System 2 (P2). Inside the Eclipse workbench this tool is exposed as a set of user-interface widgets that let the user install new software from Eclipse installation websites. Eclipse features do not necessarily have to be related to the graphical UI; in our case they are simply packaging for Ant Tasks, Types and other facilities.
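At its core, such a feature is just a small manifest grouping plugins; a hypothetical minimal sketch, with all ids and versions invented for illustration:

<!-- feature.xml: a feature grouping a plugin that contributes Ant tasks -->
<feature id="com.example.mde.ant.tasks"
         label="Example MDE Ant Tasks"
         version="1.0.0">
  <plugin id="com.example.mde.ant.tasks.core"
          version="0.0.0"
          download-size="0"
          install-size="0"
          unpack="false"/>
</feature>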

The regular installation procedure for Ant tasks in features makes them immediately available in an Eclipse RCP instance. In any Eclipse workbench a developer can install tasks and then start experimenting with model-driven-builds interactively.

There is a headless application, known as the director, whose functionality is equivalent to the graphical interactive installation in RCP and which, provided with adequate parameters, performs the same installation. It behaves much like a package manager in a Unix operating system: it considers the installed base and the features the user intends to install, solves the dependency equation, and produces an installation plan. It then creates a checkpoint and installs the new solution.

Listing 4 shows the command-line invocation that installs the set of plugins for C++ into an Eclipse instance in ~/eclipse/, downloading from the current release of the platform.

eclipse \
  -application org.eclipse.equinox.p2.director \
  -repository http://download.eclipse.org/releases/latest/ \
  -installIU org.eclipse.cdt.feature.group \
  -destination ~/eclipse/ \
  -profile SDKProfile
Listing 4: Adding C++ Developer Tools to an Eclipse installation using the director.

The P2 director can also update existing instances, so all that is required for a workflow with new requirements is the basic system and the locations of the update sites of the additional components.

The P2 director is not able to set any installation state, though. The Eclipse Oomph Installer provides such functionality, but unfortunately, it is yet another tool integrated with the Eclipse UI and cannot be used in a headless environment at this time.

2.5 Creating portable Runtimes using Docker Containers

Installing Eclipse by hand on every system that needs to run workflows is a tedious process, even with the scripting provided by the director. It is also a source of inconsistencies and errors, as reproducing the underlying environment is not easy, and differences in native libraries and Java runtimes can have an effect.

For this reason, we use Docker container images as lightweight encapsulations of existing installations. The build files that construct the container images - the Dockerfiles - also allow us to control and change versions rather easily. As with all other software artifacts, we use CI to build these container images in a controlled fashion.

The storage of these container images in repositories allows us to forward the installations of prepared workflow runtimes to other users almost instantly. Most importantly, keeping the workflows in container images allows us to use them effectively for CI.

Using pre-packaged container images does not limit the individualized use of the containers. Any user can obtain a pre-prepared container image, create a container and enter it, call the director inside and change the setup in any way they care. They can also leave the container, commit the changes and forward the new variant to someone else for use. Containers thus greatly increase the flexibility of the build process.

2.6 Continuous Integration and DevOps with Gitlab

Model-driven workflows are often the first phase in a software construction process. They take in highly abstract data, and produce more language-specific and detailed software engineering artefacts. The toolchain that we have designed to drive our model-driven processes is specific to this purpose. It is not made to compile C++, work with SAP/ABAP or MuleSoft RAML, or to render websites as HTML. While these things could be done using Ant as a general-purpose tool, this approach would constitute a Golden Hammer fallacy. Ready-made and highly adapted toolchains for all these technical environments are available in off-the-shelf container images which are maintained centrally and professionally.

Our obligation is to ensure that the toolchain activates the corresponding build container images after the model-driven process has finished. We need a coordination layer to perform this and any following build steps. For this purpose we have chosen the Gitlab CI system.

2.6.1 Programming CI using Multi-Technology Pipelines

Gitlab expresses build processes in a declarative syntax, in a file of shell commands that are valid in the containerized environment that the respective build phase runs in. At the start of each build pipeline, source code is retrieved based on a git commit hash; additional derived software artefacts are subsequently passed through the various build phases of the pipeline, alongside the source code, using a transparent built-in file-exchange mechanism.

Every time a commit is made, a pipeline is started, tests are executed and the developer receives feedback. Figure 4 shows a complex multi-project pipeline visualisation. Execution time and computation cost are often significant, driving the design of build programs as first class artifacts and the segmentation of software into reusable artifacts. In this way, CI and DevOps drive software quality in a concrete, measurable fashion.

Figure 4: A detail visualisation of a complex Gitlab pipeline showing phases and jobs.

For our model-driven toolchains we invoke the AntRunner in a container installed with basic Eclipse and any model-driven tooling we require. The AntRunner executes the build file that is part of the source code checked out from the version control system. As a result, new source code files are created. As part of the Gitlab CI build file we specify where these files are to be found. Gitlab will then take these files and move them into the next container.

Let us assume that we have generated some Java files and a Maven-specific Project Object Model (POM) build descriptor file. We now add a build phase in the Gitlab build descriptor that requires a container that is installed with the Maven build system. We start the Maven build system with a shell command and it reads the POM descriptor file. As a result, a Java application is now being built in a conventional way. Any Java developer - without any knowledge of model-driven processes - can understand what is happening here. As a result, we now have a ready-to-run Java application. We can now use the workflow to store the output in a binary repository. We can also add another phase with a container that holds tools used for deploying to a cloud provider. This can be proprietary, if desired, or based on a standard like Kubernetes or OpenShift.

This example showed how we performed CI of a model-based artefact and followed it up with an immediate deployment.

2.6.2 Reviews, A/B Tests, Beta Channels, Incidents

The Gitlab CI system also provides implicit parameters and variables based on the build context that can be used to configure the CI program in a declarative manner. These variables include the commit hash, commit ref, committer id and the locations of binary repositories associated with the build.

These implicit variables make it easy to integrate binary repositories into the process. Gitlab provides package repositories and registries for various technologies, and the build can upload binary artefacts to them.

Based on such parameters, it is now easy to deploy a version of the whole application for review: The pre-built binary that was produced and stored in the package registry in a previous phase can be retrieved from the registry and deployed. Developers can now review the application live and inspect the new feature.

With this setup, many more DevOps practices that seem advanced and theoretical become practically attainable: for example, variants of an application can be provided as part of an A/B test, beta versions can be deployed for early adopters, and an application's behaviour can be monitored and feedback attached in the form of tickets.

The processes described in the previous paragraphs seem to imply that model-driven processes always have to be forward-engineering based. However, we could also consider the case where an existing application is read into model artefacts using a reverse-engineering system like MoDisco, to produce documentation of how the system can be understood. A project can serve as a log for discovery and experimentation. It is an effective vehicle for research, because it captures the source state and all circumstances of a build, producing precise evidence.

2.6.3 Managing Technological Spaces with Sub-Projects

MDE processes excel at producing large, detailed code-bases. However, these days it is rare for projects of any significant size to use a single implementation language and technological platform. It is much more likely that specialised languages for the various aspects of a platform will be used in conjunction. The MDE process will usually produce the software artefacts based on one set of consistent models in one directory, with derived artifacts for specific technologies residing in sub-directories.

However, in order to make use of the stereotypical builds for the various technology spaces, it is useful to have projects whose directory structure conforms to the expectations of the language or technological space. Both developers and build processes in a technological space expect a git repository that contains the layout and commits for its technological concern, and nothing else. For example, the React front-end developer is not interested in the fact that the model-driven system changed the underlying business-model and persistence layers if the server API stayed the same. From her point of view, the git commit announcing that change is distracting noise in her git log.

For this reason, we need to design the CI system in such a way that it keeps the sub-projects for different technological spaces coordinated in a safe way, while at the same time keeping commits to them separated. In our environment we have achieved this by writing a set of specific Tasks that handle the creation of git sub-modules which map to individual Gitlab projects; a sketch of their effect follows below. The top-level project that references the other modules describes changes that happen at the top level. The individual projects are used by the individual developers of the technologies and behave like regular source-code-based projects.
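Our Tasks themselves are internal, but their effect can be sketched with stock Ant; the repository URL and paths are hypothetical.

<target name="add-frontend-subproject">
  <!-- Rough equivalent of our custom Task, using plain git via exec -->
  <exec executable="git" failonerror="true">
    <arg line="submodule add git@gitlab.example.com:acme/frontend.git frontend"/>
  </exec>
  <exec executable="git" failonerror="true">
    <arg line="commit -m 'Add frontend sub-project as submodule'"/>
  </exec>
</target>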

Our company’s existing solutions build full-stack applications based on several different technologies. Here, the sub-module management has direct application. In prior iterations, the build of these applications was complex and hence brittle. With the change to sub-modules this is much less complex.

As stated before, commits trigger builds in CI, so commits to the various sub-projects trigger corresponding builds in these sub-projects. This leads to the perception that sub-projects perform individual steps, when actually they are coordinated in a global way. Gitlab supports control of this by tracking and visualizing the relationship between builds.

This approach needs and leads to the stringent use of versioning. The various sub-projects generally do not use the latest version of another sub-project, but the latest verified version. This helps with stability and exposes API as a separate factor. It also helps to speed up selective builds of the sub-projects.

The management of versioning and tagging orchestrated by the sub-module tooling enables the use of binaries instead of source code all the way down. With this approach, the results of a JavaScript build are stored in an NPM repository associated with the front-end project; Java project components are deposited into a Maven repository associated with the middleware project; and SQL packages are stored within a general-purpose repository associated with the backend project. The Docker container images for these projects reside in one or more Docker registries, and the Kubernetes deployment descriptor that references them lives in the helm project's registry.

With this setup, all an operations engineer needs from the development group to deploy the complete modelled server application is the address of the helm chart that describes the overall system. The knowledge required for deployment has been encoded by the developers as a helm chart program. In fact, in an advanced process the operations engineer probably does not exist: if a new helm chart is committed, CD occurs.

2.6.4 Getting Interactive Input for the Workflow

In the previous sections we have focused on the use of our architecture to perform builds based exclusively on git-committed file input. However, in MDE there are many cases where processes can benefit from input from a user. This is reflected in the fact that most model-driven kits, like Epsilon, have synchronous input facilities.

After all, domain expertise is one of the most valuable inputs into model-driven processes. Forcing a domain expert to learn the technicalities of a CI/CD system only to interact with and provide information to a model-driven toolchain would imply a very poor appreciation of the expert's value. Engagement with the technology would likely be low.

Even current development approaches do not require software engineers to learn the details of interacting with the CI/CD servers and services. Instead, today we expect build processes to behave conversationally with those involved, in what has become known as Chat-Ops. We expect that in our day-to-day chat interfaces the build process behaves like a participant. For example, we expect the build server to talk to us on a messaging channel (such as Mattermost or Slack) and inform us that a build has started, a new build phase has been entered, a bug has been found, or an artefact has been produced. The messages would depend on the needs of the communication participants. Likewise, we would expect the build system to be able to receive feedback from the participants in the communication channel. For example, we would expect the system to take note of a new development issue to be pursued by the team, responding to a team member typing something like "/issue Enlarge the font on the home screen." by asking the questions pertinent to completing the capture of the issue.

In our company we have taken this idea of Chat-Ops one step further and created Model-Ops. In this case an input facility provided by Ant is connected to an arbitrary textual chat system. The system asks questions formatted in markdown syntax with references to domain items, and the build continues when it has captured a response from the team. The questions can be quite rich, containing full-format text, cross-references, tables, images and anything else you would expect in a post.

For example a reverse-engineering toolchain could ask something like: "Is HybridAdapterExtension.java a Java Bean? [y/n]", while hyperlinking the file to the git repository browser. The answer fills the knowledge gap and produces the correct data. The requesting process can now capture this information for later use.
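In plain Ant, the underlying facility is the built-in input task; in BB8, Ant's InputHandler is replaced by one that posts to a chat channel. A minimal sketch using the question above, with a hypothetical property name:

<target name="classify-java-class">
  <!-- Ant's input task blocks until an answer arrives; our chat-backed
       InputHandler supplies it from the channel -->
  <input message="Is HybridAdapterExtension.java a Java Bean? [y/n]"
         validargs="y,n"
         addproperty="hybridAdapterExtension.isBean"/>
  <echo message="Captured: ${hybridAdapterExtension.isBean}"/>
</target>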

Given that this interface can be attached to RCS/SMS text messages, Facebook Messenger, Slack, Mattermost, IRC or email, it has a lot of reach for very little implementation effort. The approach can also be used in an asynchronous fashion, where a facility asks a question in advance and returns to a particular part of the work once an answer becomes available.

2.7 Using Kubernetes for Scaling and Online Service

In our specific setup Gitlab and all associated container deployments are already hosted in Kubernetes on the cloud in Azure, and this covers the regular DevOps requirements in terms of using MDE workflows in a pipeline. However, Kubernetes also offers separate opportunities to expand and enhance the functionality of model-driven toolchains. As part of the following section we describe other prospects that Kubernetes can offer to MDE.

3 Outlook and Future Work

While our current work is geared towards establishing a basis for MDE workflows to harness model-driven processes as software engineering tools, the real interest lies in exploring how we can offer model-driven facilities to others. There are three areas that interest us: how online applications offering flexible model-driven services to users can be constructed (subsection 3.1), how we can support the construction of tools that use models to simplify software implementation and maintenance (subsection 3.2), and how modelling can be used to support DevOps itself (subsection 3.3).

3.1 Applying Kubernetes for Cloud-based MDE

Instead of deploying a separate service execution for every run of a model-driven workflow, the workflow could be embedded into a persistently running server as part of an online service. Such services could then back interactive IDEs hosted in the cloud. We have noted previously that model-driven technologies often employ reflective and interpreted languages and are consequently quite slow. For an interactive application this would be an impediment to adoption.

The usual response to slow interpretation is to devise a compiler that creates program variations which perform the same task, but faster. Given that we are reusing existing MDE components in our workflow that we do not maintain ourselves, this is not an option. The only other avenue to increasing performance is parallelisation. Parallelisation in our approach can be achieved at different levels: at the level of the workflow, by defining sections that can run in parallel, or at the level of whole build executions. The first option is again hampered by the design of most Task implementations that we will be reusing: they are generally not thread-safe.

This leaves us with the option of performing parallelisation at the level of whole build-process executions. For this purpose, we aim to combine a variant of the Ant and Subant Tasks with the Kubernetes Java Client. This will allow a build to spawn other executions of the build as Kubernetes Jobs that run in parallel, and to balance the load where this is desired.
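In stock Ant, the fan-out point looks like the fragment below; subant runs the module builds sequentially in-process, and the planned variant would submit each iteration as a Kubernetes Job instead. The module layout is hypothetical.

<target name="fan-out">
  <!-- One build execution per module build file; in the Kubernetes
       variant each of these would become a separate Job -->
  <subant target="generate" failonerror="true">
    <fileset dir="modules" includes="*/build.xml"/>
  </subant>
</target>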

As we distribute the workloads, access to resources becomes an issue. In this scenario, we do not have Gitlab’s implicit file provision any more. The spawned jobs may require access to certain resources that live in the file system of the container image that spawned the new job. While there are good off-the-shelf solutions available for synchronising files and databases, sharing access to models in an organised manner is more challenging. We would assume that we would eventually implement tasks for our workflows that provide use of Connected Data Objects (CDO) for versioned and branched target data, or Hawk for file-based persistency, graph data or other clients.

Viewed from a greater distance, such application assemblies that operate on models effectively move modelling from our local environments into the cloud. They constitute real ’Cloud Modelling’. We intend to focus some of our research in this area to see what patterns are required to reuse existing components to assemble model-driven cloud-based applications.

3.2 Teaching Bots to Code

Evolved assistance in the creation of source code is gaining traction in practice. The Copilot and Tabnine services foreshadow that context-aware coding support will be the norm in the future. Given that much of the creation of source-code-generating MDE templates is fairly repetitive, it should be feasible to create solutions that perform the bulk of the abstraction work in providing coding support. Combined with the infrastructure for DevOps and Model-Ops that BB8 offers, and the prospect of parallelising work described in the previous section, an online service that supports a modelling engineer could be devised. We are currently focusing our research on this area.

3.3 Using MDE to Support DevOps

DevOps is the practice of encoding processes for software development and infrastructure. The flexibility that is required implies that the programming of these toolchains offers control of operational details. However, as these DevOps programs evolve, they encounter the same issues as any other software. Gitlab for example began its CI language as a simple YAML specification file. Today a dedicated parser manages inclusions, extensions, Around-advice, dependency relationships, parallel matrix operations and many other features. The resulting complexity is so substantial that an interactive debugging shell is now part of the system. With complexity and scale, models are becoming interesting to DevOps engineers. Our company is looking at how the abstracting power of models can be used to support DevOps.

4 Conclusion

Building an MDE system that supports Agile practices and allows implementation of DevOps is possible and provides a sound functional base for industrial application. Our implementation includes means for testing, installation, flexible deployment and workflow coordination, including limited interactivity, while considering the roles of stakeholders in the process. While our design is limited by the implications of EMF as an Eclipse component, even the basic implementation provides a surprising amount of latitude.

We found that by aligning with the Ant build system we can combine many of the current MDE tools found in the Eclipse ecosystem. Even though at times we had to work around tools because they were not designed with DevOps in mind, we achieved positive results with this approach. We encourage the MDE community to consider DevOps toolchains one of the Grand Challenges of MDE adoption in industry.

The experience we have gained also raises new questions with regard to increasing speed and performance of flexible synchronous MDE services, automation of aspects of the construction of MDE tooling and the application of MDE approaches to DevOps languages as a subject.

Acronyms

MDE: Model-Driven Engineering
DevOps: Development and Operations
CI: Continuous Integration
CD: Continuous Deployment
EMF: Eclipse Modelling Framework
MOF: Meta-Object Facility
CORBA: Common Object Request Broker Architecture
RCP: Rich Client Platform
UI: User Interface
PDE: Plugin Development Environment
P2: Provisioning System 2
