Learning to Centralize Dual-Arm Assembly

10/08/2021 ∙ by Marvin Alles, et al. ∙ Technische Universität München

Even though industrial manipulators are widely used in modern manufacturing processes, deployment in unstructured environments remains an open problem. To deal with the variety, complexity, and uncertainty of real-world manipulation tasks, a general framework is essential. In this work we focus on assembly with humanoid robots by providing a framework for dual-arm peg-in-hole manipulation. As we aim to contribute towards an approach which is not limited to dual-arm peg-in-hole, but applicable to dual-arm manipulation in general, we keep modeling effort at a minimum. While reinforcement learning has shown great results for single-arm robotic manipulation in recent years, research focusing on dual-arm manipulation is still rare. Solving such tasks often involves complex modeling of the interaction between two manipulators and their coupling at a control level. In this paper, we explore the applicability of model-free reinforcement learning to dual-arm manipulation based on a modular approach with two decentralized single-arm controllers and a single centralized policy. We reduce modeling effort to a minimum by using sparse rewards only. We demonstrate the effectiveness of the framework on dual-arm peg-in-hole and analyze sample efficiency and success rates for different action spaces. Moreover, we compare results on different clearances and showcase disturbance recovery and robustness when dealing with position uncertainties. Finally, we zero-shot transfer policies trained in simulation to the real world and evaluate their performance.


I Introduction

In recent years, robotic manipulation has been an active field of research; nevertheless, work focusing on dual-arm manipulation is still rare. A second robotic arm enhances dexterity but also introduces new challenges and additional modeling effort, as further degrees of freedom and the interaction between manipulators need to be considered. Thus, it is common to use a complex task-specific control structure with multiple control loops. However, as our goal is to contribute towards a general framework for dual-arm manipulation, the method needs to be task-independent and modeling efforts should be restricted to a minimum.

Fig. 1: Simulation-to-real transfer: The policy is trained in simulation to perform dual-arm peg-in-hole, and transferred to the real-world without additional training.

To this end, it is possible to utilize deep reinforcement learning (RL), whereby manipulation tasks can be learned from scratch through interaction with the environment. However, deep RL alone would require a large number of training samples, which are expensive to collect in a real-world setup. Instead, it is preferable to introduce inductive biases into our architecture to facilitate the learning process. Namely, we train a policy network only to generate high-level trajectories and use well-established control techniques to track those trajectories. Such a modular architecture also allows for zero-shot sim-to-real transfer, which enables us to do all the training in simulation. In general, the distinction between decentralization and centralization has to be made at both the control level and the policy level. At the control level, centralized control requires large modeling effort and is not task-agnostic; our work therefore considers a decentralized approach. With that in mind, two general paradigms are conceivable: the first involves two separate, decoupled RL agents that can be trained in a multi-agent RL setting, while the second utilizes a single policy controlling both arms. The latter is more feasible, as it couples the control of both manipulators through a policy network, resulting in an overall centralized method, and thus increases precision and efficiency. Our method is based on the latter approach and attempts to learn a single policy using off-policy RL. Intuitively, such an approach can be thought of as a way to centralize decentralized control based on RL.

This paper aims at exploring the applicability of deep RL to dual-arm manipulation tasks. Hence, we propose a framework to learn such tasks based on a combination of recent advances in RL and well-established control techniques. To reduce the need for task-specific knowledge and to avoid introducing additional bias into the learning process, we test our method solely with sparse rewards. Nevertheless, only receiving a reward after successfully solving the task provides less guidance to the agent, as no intermediate feedback is given. Thus, the already challenging task of dual-arm manipulation becomes more complicated and sample-inefficient. That is why we rely on simulation to train our policy and transfer the results to the real world (fig. 1). Moreover, we design our framework with the goal of avoiding elaborate sim-to-real transfer procedures.

To demonstrate the effectiveness of our approach we evaluate our method on a dual-arm peg-in-hole task, as it requires high dexterity to manipulate two objects with small clearances under consideration of contact forces. We first use PyBullet [8] to create a realistic simulation environment and analyze the proposed approach with a focus on sample efficiency, performance and robustness. We then test the learned behavior in a real-world setup with two Franka Emika Panda [9] robots and demonstrate the feasibility of our method under minimized sim-to-real transfer efforts. Our contributions can be summarized as follows:

  • We explore and formulate a new paradigm for learning dual-arm manipulation tasks.

  • We compare the effect of different action spaces and controllers on the success and robustness of the learned policies.

  • We show the feasibility of zero-shot transferring policies trained in simulation to the real-world.

  • To our knowledge, our work is the first to explore the applicability of model-free RL to contact-rich dual-arm manipulation tasks.

Fig. 2: System overview: Both diagrams show the interaction between policy, controller, robots and the environment. As both robotic arms need to interact with each other, their overall control system needs to be coupled. (a) The common way of having a centralized control strategy which couples both robots. (b) The proposed framework, which uses two decentralized controllers and takes care of the coupling at the policy level.

II Related Work

Dual-arm manipulation is a challenging area of research, which can be divided into decentralized and centralized approaches. The former utilizes independent controllers for each robot with explicit [25] or implicit [33, 30] communication channels and is often combined with leader/follower behaviour [28, 33]. While it offers improved scalability and variability, decentralized control hardly reaches the efficiency and precision of centralized control, which integrates the control of both manipulators in a central unit.

Among feasible manipulation objectives, peg-in-hole insertion can be seen as a benchmark, since it requires accurate positioning, grasping, and handling of objects in contact-rich situations. Therefore, we select the task of dual-arm peg-in-hole to evaluate the performance of our approach.

Single-arm peg-in-hole. As research focusing on dual-arm peg-in-hole assembly is rare and mostly limited to extensive modeling [28, 22, 34], research on classical peg-in-hole assembly with a single robotic arm provides a perspective on model-free approaches based on reinforcement learning. [32] and [27] show that sparse rewards are sufficient to successfully learn a policy for an insertion task if combined with learning from demonstrations. In [35], a similar setup is combined with the concept of Hindsight Experience Replay (HER) [2] for the robotic manipulation tasks push and pick-and-place. Their results indicate that HER is sufficient to enhance performance if only sparse rewards are available. The work in [27] utilizes residual reinforcement learning to leverage classical control, which performs well given sparse rewards only and hints that the choice of action space can be crucial. An evaluation of action spaces on the task of single-arm peg-in-hole with a given clearance and a shaped reward function is presented in [31], where cartesian impedance control performs best. Moreover, [3] applies position-force control with model-free reinforcement learning for peg-in-hole with a focus on transfer learning and domain randomization.

Decentralized Dual-Arm Manipulation. In the work by [28], a decentralized approach for the dual-arm peg-in-hole task is proposed. The method is based on a leader/follower architecture; hence, no explicit coupling between both manipulators is required. The leader performs the insertion, while the follower holds its position and remains compliant to the applied forces. Similar to the previously mentioned work, [34] utilizes a decentralized approach, where the hole keeps the desired position with low compliance and the peg is steered in a spiral-screw motion towards insertion with high compliance. But despite reducing the necessity to model their interaction, both approaches lack dexterity, i.e., only one robot is actively acting in the environment. In a general pipeline, there should be enough flexibility for both arms to actively contribute towards the objective. Furthermore, [22] presents a method based on decomposing the task into phases and utilizing a sophisticated control flow for the whole assembly process. Despite reducing the effort of modeling the interaction explicitly, the control flow is engineered for one specific task only and lacks dexterity, as movements are bound to the pre-programmed procedure.

Centralized Dual-Arm Manipulation. Pairet et al. propose a centralized approach for dual-arm manipulation, which formulates an accurate model of the objective [21]. A set of primitive behaviors is demonstrated to the robot by a human; the robot then combines those behaviors and tries to solve a given task. Finally, an evaluator measures its performance and decides if further demonstrations are necessary. The approach has promising potential towards more robust and less task-specific dual-arm manipulation. However, besides the required modeling effort, it is limited by the human teaching process, which introduces a different set of assumptions on top of the previous ones, limiting its applicability to semi-structured environments. Beyond that, work on centralized methods mostly focuses on cooperative object manipulation [5, 6, 11, 14, 26, 4] and relies heavily on accurate and complex modeling of the underlying system dynamics.

Sim-to-Real Transfer. Sample inefficiency is one of the main challenges of deep RL algorithms. The problem is even worse for robotic tasks, which involve high-dimensional states and actions as well as complex dynamics. This motivates the use of simulation for data collection and training. However, due to inaccuracies in the physics modeling and image rendering of simulators, policies trained in simulation tend to fail in the real world. This is usually referred to as the "reality gap". The most popular paradigm to approach this problem is domain randomization (DR) [29]. The main goal of DR is to subject the agent to samples based on diverse simulation parameters concerning the object [29] and the dynamics properties [24]. By doing so, the learned policy should be able to generalize to the different physical properties of real-world tasks. Recent work has explored active parameter-sampling strategies to dedicate more training time to troublesome parameter settings [19]. Another approach for sim-to-real transfer is system modularity [7]. Here, a policy is split into different modules responsible for different objectives such as pose detection, online motion planning and control. Only components that will not suffer from the reality gap are trained in simulation, and the rest is adapted or tailor-made for the real-world setup. This is in contrast to the more common end-to-end training in deep RL [16]. In our work, we use a modular architecture to enable zero-shot sim-to-real transfer. Namely, we parameterize the controllers differently in simulation compared to the real world to allow using the same high-level policy network.
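As a rough illustration of this modularity, the sketch below keeps the low-level controller parameters in separate profiles for simulation and the real robots while the high-level policy network stays unchanged; the class, field names, and all numeric values are illustrative assumptions, not the parameters used in this work.

```python
from dataclasses import dataclass

# Illustrative only: gains, limits and rates are placeholders, not the paper's values.
@dataclass
class ControllerConfig:
    kp: float            # proportional gain
    kd: float            # damping gain
    torque_limit: float  # Nm, safety clamp
    control_hz: int      # low-level control frequency

PROFILES = {
    "simulation": ControllerConfig(kp=200.0, kd=28.0, torque_limit=87.0, control_hz=240),
    "real_world": ControllerConfig(kp=120.0, kd=22.0, torque_limit=40.0, control_hz=1000),
}
```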

Despite various contributions towards a general framework for dual-arm manipulation, we do not know of any work that successfully fulfills all requirements. Therefore, this work aims at proposing a unified pipeline for dual-arm manipulation based on a minimal set of assumptions. To the best of our knowledge, no publication exists which solves the challenge of contact-rich dual-arm peg-in-hole with model-free reinforcement learning, let alone with sparse rewards only.

III Learning to Centralize

In this section, we introduce an approach to learn a centralized policy for dual-arm manipulation. Moreover, we intend to reduce the required modeling effort to a minimum as the proposed method is based on model-free reinforcement learning with sparse rewards. The approach does not require a specific dual-arm controller, since control is only coupled at a policy level.

III-A Controller

The common approach in centralized dual-arm control strategies is to model the manipulators' and the object's dynamics explicitly and to achieve coupling of both robotic arms by using multiple control loops. The system dynamics of the i-th manipulator can be described as follows:

M_i(q_i)\ddot{q}_i + C_i(q_i,\dot{q}_i)\dot{q}_i + g_i(q_i) = \tau_i + \tau_{\mathrm{ext},i}   (1)

\tau_{\mathrm{ext},i} = J_i^\top(q_i)\,F_{\mathrm{ext},i}   (2)

where q_i is the vector of joint positions, M_i(q_i) is the inertia matrix, C_i(q_i,\dot{q}_i) is the Coriolis/centrifugal matrix, g_i(q_i) is the gravity vector, \tau_i is the vector of joint torques and \tau_{\mathrm{ext},i} represents the external torques, which can be further decomposed into the external wrench F_{\mathrm{ext},i} via the Jacobian J_i(q_i) as in (2).

As both robots interact with each other directly or through the manipulation of objects, the respective external wrenches have to include all forces and moments which are applied to the robot. Hence, to create a valid dynamic chain, (1) is commonly also used to define the object dynamics, with the external torques being those exerted by the manipulators or other objects on the respective object. Thereby, the dynamic equations need to be adapted to the specific manipulation task and are not universally valid. Furthermore, the end-effector movements are restricted by object geometries, which need to be known in advance. Based on the dynamics model, a hierarchical control strategy with multiple loops is commonly applied, where the outer loop realizes the main objective, such as desired object movements, and the inner loop generates a firm grasp and bounded internal forces [5, 6, 11, 14, 26, 4]. The particular control loops can utilize any control strategy. Nevertheless, impedance control is a common choice to achieve compliant behaviour [5, 6, 14, 26], as contact forces are limited by coupling the control torque with position and velocity (3). The control torque is calculated by multiplying the gains K_p and K_d with the difference between desired and actual position and velocity, respectively. The principle can be applied in joint space or task space; \alpha and \beta are placeholders to account for the variety of control laws and their additions (e.g. (5)):

\tau_c = K_p\,(x_d - x) + K_d\,(\dot{x}_d - \dot{x}) + \alpha + \beta   (3)

Besides the need to define an explicit set of dynamic equations and constraints for each task, the control loops themselves are also adapted to a specific use case. Hence, a general framework for dual-arm robotic manipulation is not feasible with such an approach.

An alternative way to approach the problem is to utilize a policy network as high-level control and thereby dispose of the need to design a coupled control method. Figure 2 highlights the differences between these paradigms for dual-arm manipulation: the classical way of having a unified control scheme, which can include multiple control loops, to couple both robot arms (fig. 2a), and the proposed method, which uses two standard controllers with a high-level policy instead (fig. 2b). Thereby, the controllers can be designed in a straightforward way without the need for purpose-built dual-arm control algorithms, allowing the use of any single-arm action space. In all cases, the framework uses a policy trained by RL to provide an action input to the controllers. The policy learns to inherently compensate for the constraints resulting from dual-arm interactions. Furthermore, coupling at the policy level is convenient to implement, as the policy's action space can simply be extended to include a second controller.
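A minimal sketch of one such control cycle is given below to illustrate the coupling at the policy level; the policy, controller objects, robot interfaces, and their method names are hypothetical placeholders, not the implementation used in this work.

```python
import numpy as np

def centralized_policy_step(policy, controllers, robots):
    """One control cycle: a single policy outputs actions for both arms,
    each tracked by its own decentralized single-arm controller.
    `policy`, `controllers` and `robots` are placeholder objects."""
    # Concatenated observation of both arms (joint and end-effector states).
    obs = np.concatenate([robots[0].get_state(), robots[1].get_state()])
    # The policy couples both arms: one forward pass, one joint action vector.
    action = policy(obs)
    # Split the action and hand each half to an independent single-arm controller.
    a_left, a_right = np.split(action, 2)
    tau_left = controllers[0].compute_torque(robots[0], a_left)
    tau_right = controllers[1].compute_torque(robots[1], a_right)
    robots[0].apply_torque(tau_left)
    robots[1].apply_torque(tau_right)
```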

III-B Action Space

Classical control approaches for manipulation are often based on impedance control, as it comes with the previously mentioned advantages. However, since our method tries to compensate for interaction forces at a trajectory level, we explore different control laws as action spaces and study their effect on the task success.
Joint position control. First of all, we use joint position control (4) to compute a torque command: both gains, K_p and K_d, are set to constant values, the joint positions q and velocities \dot{q} are evaluated at run-time, and the desired joint positions q_d are inferred by the policy.

\tau_c = K_p\,(q_d - q) - K_d\,\dot{q}   (4)

Cartesian impedance control. Second, we implement cartesian impedance control [20]: this action space moves control from joint space to cartesian space and includes model information such as the cartesian inertia matrix \Lambda(q) and the Jacobian matrix J(q), as well as the gravity compensation term g(q). As the number of joints exceeds the task-space degrees of freedom, nullspace compensation \tau_{\mathrm{null}} is added. Instead of q_d, the desired end-effector pose x_d is directly passed in as the action input.

\tau_c = J^\top(q)\,\Lambda(q)\bigl(K_p\,(x_d - x) - K_d\,\dot{x}\bigr) + \tau_{\mathrm{null}} + g(q)   (5)

Variable cartesian impedance control. Variable cartesian impedance control [18] is based on classical cartesian impedance control, but additionally adds the stiffness gains K_p to the action space, making control more variable so that the robot can react with higher or lower stiffness where needed. We use anisotropic gains and couple the velocity gains via K_d = 2\sqrt{K_p} to achieve critical damping.
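To make the three action spaces concrete, the sketch below implements the generic forms of (4) and (5) and the critical-damping coupling; argument names, shapes, and the exact composition of the nullspace term are simplifying assumptions rather than the controllers deployed on the robots.

```python
import numpy as np

def joint_position_torque(q_des, q, dq, kp, kd):
    """PD-style joint position control (cf. eq. 4); q_des comes from the policy."""
    return kp * (q_des - q) - kd * dq

def cartesian_impedance_torque(x_des, x, dx, J, Lam, g, tau_null, kp, kd):
    """Cartesian impedance control (cf. eq. 5): a task-space PD wrench, weighted by
    the cartesian inertia, mapped to joint torques via the Jacobian transpose,
    plus nullspace and gravity compensation."""
    f = Lam @ (kp * (x_des - x) - kd * dx)
    return J.T @ f + tau_null + g

def variable_impedance_torque(x_des, x, dx, J, Lam, g, tau_null, kp):
    """Variable cartesian impedance: the policy additionally outputs the stiffness
    kp; damping is coupled for critical damping, kd = 2 * sqrt(kp)."""
    kd = 2.0 * np.sqrt(kp)
    return cartesian_impedance_torque(x_des, x, dx, J, Lam, g, tau_null, kp, kd)
```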

III-C Reinforcement Learning

In our method, the policy is responsible for generating high-level trajectories, which are later tracked by the chosen controller. We train the policy network using model-free RL. The policy receives the robot states to infer a control signal (action) for the above-mentioned control laws (action spaces). We combine the joint positions q, joint velocities \dot{q} and joint torques \tau of both robotic arms, as well as the cartesian positions and orientations of the end-effectors, as the state input s. Nevertheless, the state might need to be adjusted if the framework is applied to a different task, which for instance includes additional objects.
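A minimal sketch of how such a state vector could be assembled is shown below; the robot-interface accessors are hypothetical and the dimensions assume a 7-DoF arm.

```python
import numpy as np

def build_state(robot_a, robot_b):
    """Concatenate both arms' proprioception into the policy input s.
    Each robot interface is assumed to expose joint positions, velocities,
    torques, and the end-effector pose (position + orientation)."""
    parts = []
    for robot in (robot_a, robot_b):
        parts += [
            robot.joint_positions(),   # q,   shape (7,)
            robot.joint_velocities(),  # dq,  shape (7,)
            robot.joint_torques(),     # tau, shape (7,)
            robot.ee_position(),       # x,   shape (3,)
            robot.ee_orientation(),    # quaternion, shape (4,)
        ]
    return np.concatenate(parts)
```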

The proposed method is not restricted to a specific model-free RL algorithm, though an off-policy algorithm is desirable to facilitate high sample efficiency and allow the use of experience replay. Thus, we use Soft Actor-Critic (SAC) [13], as the algorithm presents state-of-the-art performance and sample efficiency, but it could potentially be replaced by others such as Deep Deterministic Policy Gradients (DDPG) [17] or Twin Delayed Deep Deterministic Policy Gradients (TD3) [12].

To enhance sample efficiency, the environments should be implemented in a goal-based way. Thereby, the achieved goal g_a is returned alongside the environment state and can easily be compared to the desired goal g_d to compute a reward r. Reward engineering is not necessary as we use a sparse reward (6):

r(g_a, g_d) = \begin{cases} 0 & \text{if } \lVert g_a - g_d \rVert \le \epsilon \\ -1 & \text{otherwise} \end{cases}   (6)
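A goal-based environment can then expose this reward as a simple comparison of achieved and desired goal; the sketch below is illustrative, the tolerance value is a placeholder, and the 0/−1 convention follows the HER-style formulation assumed in (6).

```python
import numpy as np

def compute_reward(achieved_goal, desired_goal, eps=0.01):
    """Sparse reward (cf. eq. 6): 0 on success, -1 otherwise.
    eps is a placeholder tolerance on the peg-hole distance."""
    success = np.linalg.norm(achieved_goal - desired_goal) <= eps
    return 0.0 if success else -1.0
```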

To counteract the more challenging training task when using sparse compared to dense rewards, we use HER to augment past experiences. By replaying experiences with goals that have been or will be achieved along a trajectory, the agent learns to generalize a goal-reaching behaviour. Hence, unsuccessful experiences still help to guide the agent, as a sparse reward otherwise does not provide feedback on the closeness to the desired goal.
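As an illustration of how such an off-policy agent with HER could be set up, the sketch below uses Stable-Baselines3 rather than our own implementation; it assumes `env` is a goal-based Gym environment with a dictionary observation space, and all hyper-parameters are placeholders.

```python
from stable_baselines3 import SAC, HerReplayBuffer

# `env` is assumed to expose "observation", "achieved_goal" and "desired_goal" keys.
model = SAC(
    "MultiInputPolicy",
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,                  # additional HER goals per transition
        goal_selection_strategy="future",  # replay goals achieved later in the episode
    ),
    learning_rate=3e-4,
    batch_size=256,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```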

Fig. 3: Training results in simulation using different action spaces (joint position control, cartesian impedance control and variable cartesian impedance control) on the task of dual-arm peg-in-hole assembly separated for different clearances.

III-D Implementation

We implement the general training and evaluation procedure in the following way: during each epoch, one full episode is gathered by interacting with the environment, followed by optimization steps. Moreover, we calculate the success rate every fifth epoch by averaging over test cycles. We use Adam [15] as optimizer with a learning rate of and a batch size of . The experience replay memory size is set to and training starts after storing samples. The Q-network and policy network consist of and linear layers, respectively, with a hidden dimension of and ReLU [1] activation functions. To update the target networks, an updating factor of is chosen. HER is set to use the strategy "future" with sampling of additional experiences [2]. All hyper-parameters are tuned manually and kept fixed for all experiments.
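The procedure can be summarized by the sketch below; the agent and buffer objects, their method names, and all default constants are placeholders standing in for the values left unspecified above.

```python
def train(agent, env, replay_buffer,
          n_epochs=2000, n_opt_steps=64, n_test_cycles=10):
    """Training/evaluation loop as described above; all arguments and
    default constants are placeholders, not the paper's exact values."""
    success_rates = {}
    for epoch in range(n_epochs):
        episode = agent.collect_episode(env)   # one full episode per epoch
        replay_buffer.store(episode)           # transitions relabeled via HER
        for _ in range(n_opt_steps):           # fixed number of SAC updates
            batch = replay_buffer.sample()
            agent.update(batch)
        if epoch % 5 == 0:                     # success rate every fifth epoch
            success_rates[epoch] = agent.evaluate(env, n_episodes=n_test_cycles)
    return success_rates
```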

IV Experimental Setup

We design experiments to answer the following questions: (i) can a central policy successfully learn dual-arm manipulation skills based on a decentralized control architecture? (ii) what action space leads to the highest success rate and the most robust policies? (iii) is our method robust against disturbances and position uncertainty? (iv) is a modular design enough to zero-shot transfer policies from simulation to the real world?

To answer these questions, we evaluate the proposed method on the task of peg-in-hole assembly with two Franka Emika Panda manipulators [9], both in simulation (fig. 1) and in the real world (fig. 1). The simulation environment is created using PyBullet [8]. We design it to match the real-world setup as closely as possible. Both setups are similar except for the environment frequency, which is 240 Hz in simulation and 1 kHz in the real world. The policy operates at 60 Hz. Nevertheless, the robots only obtain the last state for control. To exclude the process of gripping and restrict movements of peg and hole, both are fixed to the end-effector of the respective robot arm. In the real-world experiments, this corresponds to the peg and hole being attached to the gripper of each manipulator. Furthermore, to enable an evaluation with increasing difficulty, pairs of peg and hole have been created with a clearance of and . Moreover, we define the goal state as the relative distance between the positions of peg and hole. Both robots start with a significant distance and varying orientation, with an offset around the initial joint position of . We restrict robot movements by their joint limits [10], whereas the workspace is not bounded. Furthermore, the robot bases are positioned on a table with a distance of and both oriented to the same side. We use PyTorch [23] to implement the models and train them on an NVIDIA GeForce RTX 2080 GPU.
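A minimal PyBullet sketch of such a dual-Panda scene is shown below; the base poses, the use of the bundled Panda model, and the step ratio are illustrative assumptions, not the exact setup.

```python
import pybullet as p
import pybullet_data

# Minimal sketch of a dual-Panda PyBullet scene; poses are illustrative.
p.connect(p.DIRECT)  # or p.GUI for visualization
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.setTimeStep(1.0 / 240.0)  # simulation runs at 240 Hz

left = p.loadURDF("franka_panda/panda.urdf", basePosition=[0.0, 0.4, 0.0],
                  useFixedBase=True)
right = p.loadURDF("franka_panda/panda.urdf", basePosition=[0.0, -0.4, 0.0],
                   useFixedBase=True)

# The policy acts at 60 Hz, i.e. every 4th simulation step.
for _ in range(240 // 60):
    p.stepSimulation()
```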

V Simulation Results

We use the simulation to first train the policy and also to perform ablation studies and robustness tests. As can be seen in the supplementary video, the policy can be trained in simulation to learn a successful peg-in-hole insertion strategy. Both manipulators move equally towards each other without any major diversion. Furthermore, both are evenly involved in the process of aligning the end-effectors and pushing the peg inside. Hence, the approach does not lead to any leader/follower behaviour where one end-effector merely keeps its position, similar to a single-arm solution.

We design the experiments to start with an offset of from the initial joint positions and average all results over 4 seeds. All successful runs up to a clearance of converge to a success rate above in-between and epochs (fig. 3). As we use sparse rewards, no intermediate information about the task success is available. Hence, the success rates of a single run tend to either converge to one or stay at zero, which can easily be seen for cartesian impedance control and a clearance of , where only 1 out of 4 runs converges in the given time frame. Overall, variance and training duration increase with smaller clearances, which confirms that the task becomes more challenging as clearances decrease.

Fig. 4: Robustness when using different action spaces (joint position control, cartesian impedance control and variable cartesian impedance control) on the task of dual-arm peg-in-hole assembly in simulation, separated for different clearances. (a) Average episode durations when applying a random disturbance to the peg during a fixed time frame at the beginning of each episode. (b) Average success rates when applying a random offset to the peg.

V-A Ablation Studies

We compare the following control variants to study the effect of different action spaces: joint position control, cartesian impedance control and variable cartesian impedance control, as introduced in section III-B. Figure 3 shows the results when using a clearance of , where policies trained in all three action spaces converge in a comparable manner. Moreover, to analyze the effect of smaller clearances, we conduct the same experiments using a clearance between peg and hole (fig. 3). However, only joint position control converges in the respective time frame. Overall, the results differ from [31], where cartesian impedance control converges faster than joint position control for single-arm peg-in-hole, and from [18], where variable cartesian impedance control performs best in contact-rich manipulation tasks.

As peg-in-hole manipulation requires stiffness adaptation, variable impedance control should perform best among the evaluated action spaces. Since it does not, a couple of improvements are conceivable: the state representation and hyper-parameters could be optimized, as they have been kept fixed during all experiments. Moreover, the increased size of the action space makes the learning task more challenging, but it could be reduced by introducing isotropic gains. In the case of cartesian impedance control, the first two possible improvements apply as well. Furthermore, the stiffness values are not tuned and leave considerable room for improvement.

V-B Robustness

First, we showcase robustness by evaluating disturbance recovery; second, we demonstrate robustness against positioning errors.

Disturbance recovery. To investigate the robustness to unforeseen events, such as collisions or active manipulation by humans, we evaluate the success rate after the system is disturbed from time step 10 until 40, resulting in an end-effector offset. Figure 4 shows the results. Each episode duration, with a maximum of 400 steps, is averaged over 60 random disturbances, 3 test cycles and all seeds. Afterwards, we compare their trajectories to a reference and calculate the disturbance as the difference between end-effector positions. Comparing all action spaces, it turns out that in our framework all variants can recover from external disturbances. Joint position control yields faster success, but episode durations increase proportionately more with higher external disturbances. Overall, the ability to recover depends mostly on the available time frame; hence, increasing the maximum number of time steps could allow handling larger disturbance offsets. Figure 5 visualizes trajectories of desired joint positions given by the policy network when using joint position control. Two time steps after the external disturbance is applied, the policy starts to adapt to the external influence, depending on the disturbance magnitude. These exemplary results show that the policy network is actively reacting to the disturbance and not purely relying on the controller.

Fig. 5: Desired joint positions as inferred by the policy network (joint position control) after applying a random external disturbance.
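In simulation, the disturbance itself can be realized, for instance, by injecting a random external force on the peg-holding end-effector during the fixed time window; the function below is an illustrative sketch, and the force magnitude and reference frame are assumptions.

```python
import numpy as np
import pybullet as p

def apply_random_disturbance(body_id, ee_link, t, max_force=20.0):
    """Apply a random external force to the end-effector link between
    time steps 10 and 40, mimicking the disturbance-recovery evaluation.
    max_force (N) is a placeholder value."""
    if 10 <= t < 40:
        force = np.random.uniform(-max_force, max_force, size=3)
        p.applyExternalForce(body_id, ee_link, forceObj=force.tolist(),
                             posObj=[0, 0, 0], flags=p.LINK_FRAME)
```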

Position uncertainties. Furthermore, to evaluate the performance under position uncertainties, for instance caused by grasping the peg in an earlier phase, we apply a random offset to the location of the peg relative to the end-effector. Figure 4 shows the success rates for three offsets. We average each result over 50 different offsets and all seeds. In general, the success rates decrease with higher offsets and smaller clearances. Cartesian impedance control turns out to be less robust than joint position control, and variable cartesian impedance control ends up last, which is consistent with the previous robustness evaluation of disturbance recovery. Nevertheless, joint position control and cartesian impedance control are capable of handling large offsets up to with high success rates, which should already be sufficient for most applications and is significant considering that no randomization has been applied during training. The evaluation under positional uncertainties shows that the policy does not simply learn the underlying kinematics, even though peg and hole positions are fixed during training, but infers a peg-insertion behaviour from the underlying state information.

V-C Real-World Results

To evaluate the approach in the real world (fig. 1), we transfer the policies trained in simulation without taking further measures to improve transferability, such as domain randomization or domain adaptation. We explicitly evaluate the performance without any additional steps targeting sim-to-real transfer, to precisely investigate whether the policy is robust enough to be applied in reality and whether both decentralized controllers can compensate to bridge the gap between simulation and reality.
To enable zero-shot transfer of the approach, the simulation environment has been designed to match the real world as closely as possible. However, to ensure safety in real-world experiments, additional torque, velocity and stiffness limitations need to be applied to guarantee observability and non-critical behaviour, thereby further increasing the sim-to-reality gap. For all experiments we use a peg and hole with a clearance of , since all action spaces have been successfully trained in simulation when using that clearance.

Fig. 6: Success rates when transferring a policy from simulation to reality with a random deviation of from the initial joint positions .

Figure 6 shows the success rates for each action space when starting with a random deviation of for each joint from the initial joint positions . Using cartesian impedance control leads by far to the highest success rate, as the action space helps to compensate for the sim-to-reality gap and is robust to the parameter changes applied for safe operation. Variable impedance control confirms the results of the previous robustness evaluations, as this variant cannot reach a high success rate in real-world evaluations. One reason for the performance decrease might be that the variable stiffness led to overfitting to the system dynamics in simulation instead of learning a generalized stiffness adaptation. Joint position control, which performed best in simulation, is not able to keep up in reality at all. The action space is not robust to joint torque and joint velocity limitations and would thus require additional retraining under the applied limitations. Overall, the results show that a well-chosen action space can help to enhance robustness and transfer the approach from simulation to reality without applying further sim-to-real transfer methods. Moreover, the modular design helped to incorporate adaptations after training the policy, which would not have been possible in an end-to-end approach. Nevertheless, the proposed method leaves room for improvements: among them, the impact of domain randomization and domain adaptation should be explored in the future, as well as fine-tuning in the real world to adapt to the additionally applied safety constraints.

VI Conclusion

We introduce a framework for dual-arm assembly with the goal of avoiding the constraint and interaction modeling of traditional centralized control. The approach explores the applicability of reinforcement learning by utilizing a policy network to couple decentralized control of both robotic arms without any explicit modeling of their interaction. The policy is trained through model-free reinforcement learning and can be combined with various well-established single-arm controllers. As we aim to explore a framework with a minimal set of task-specific assumptions, we only use sparse rewards. Moreover, sample efficiency needs to be enhanced for higher-precision tasks such as peg-in-hole with smaller clearances; therefore, we plan to further optimize exploration, experience replay and action spaces in the future. We evaluate the approach in simulation on the task of dual-arm peg-in-hole and show that joint position control provides good results up to the investigated clearance of . Furthermore, we point out that in simulation the approach can recover from external disturbances, and we show that the method learns a general peg-insertion behaviour by evaluating position uncertainties. Lastly, we zero-shot transfer the policy trained in simulation to the real world and show that a well-chosen action space can help to compensate for the sim-to-reality gap. The framework can be seen as a first step towards reducing modeling efforts for centralized dual-arm manipulation and leaves ample room for further research, including the investigation of methods to improve sim-to-reality transferability and the evaluation of further dual-arm manipulation tasks.

Acknowledgment

We gratefully acknowledge the funding of this work by Microsoft Germany, the Alfried Krupp von Bohlen und Halbach Foundation and project KoBo34 (project number V5ARA202) by the BMBF (grant no. 16SV7985).
We would like to thank Carlos Magno C. O. Valle, Luis F.C. Figueredo, Konstantin Ritt, Maximilian Ulmer and Sami Haddadin for their general support and comments on this work.

References