
Cyber-Physical Mobility Lab: An Open-Source Platform for Networked and Autonomous Vehicles

by Maximilian Kloock et al.

We introduce the Cyber-Physical Mobility Lab (CPM Lab), a development environment for networked and autonomous vehicles. It consists of 20 model-scale vehicles for experiments and a simulation environment. We show our four-layered architecture that enables the seamless use of the same software in simulations and in experiments without any adaptations. A middleware based on the Data Distribution Service (DDS) allows adapting the number of vehicles during experiments in a seamless manner. Experiments with the 20 vehicles can be extended by an unlimited number of additional simulated vehicles. Another layer is responsible for synchronizing all entities following a logical execution time approach. We pursue an open policy in the CPM Lab and will publish the entire code as well as construction plans online. Additionally, we will offer remote access to the CPM Lab using a web interface, which will be publicly available. The CPM Lab allows researchers as well as students from different disciplines to see their ideas develop into reality.





Supplementary material

A demonstration video of the CPM Lab is available at

The code, bill of materials and a construction tutorial will be published online by the end of 2020. The link will be provided in the final submission.

I Introduction

Testing algorithms for networked and autonomous vehicles is time-consuming and expensive. Full-scale tests of, e.g., decision-making methods require a test track. Tests on public roads may not be permitted. Nowadays, a safety driver has to be in each vehicle to monitor its movement and intervene if required. In addition, one vehicle is not enough to test and evaluate algorithms for networked vehicles. Therefore, multiple vehicles have to be acquired, which increases the cost and the logistic overhead. Additionally, the vehicles' software has to be compatible with each other and with the infrastructure, e.g., traffic light communications. As a result, many research institutes have one full-scale test vehicle, but only a few have multiple vehicles for tests of networked algorithms.

Because of the shortcomings of full-scale experiments, simulations are the most common way to evaluate algorithms for networked vehicles. Simulations enable concepts like rapid functional prototyping, since changes in the algorithms can be applied quickly and the results observed immediately. However, simulations abstract from real-world behavior and may not include some of its aspects. This results in a large gap between simulations and real-world experiments. In order to mitigate this gap, we developed the CPM Lab, a testing platform for networked and autonomous vehicles. In the CPM Lab, we perform model-scale experiments for, e.g., networked decision-making algorithms. We simulate inaccuracies that are absent at model scale, e.g., positioning system inaccuracies, synchronization errors, or communication problems. Hence, the CPM Lab reduces the gap between simulations and real-world full-scale experiments. Figure 1 illustrates the position of the CPM Lab in the development and testing process of networked and autonomous vehicles.

Fig. 1: An overview of the development process of simulations (left), CPM Lab experiments (middle) and real world experiments (right).

Many testbeds for model-scale autonomous vehicles exist at research institutes. They differ in many aspects, e.g., vehicle hardware, scale, cost, positioning system, or communications. An overview of robots developed in the last decade that cost less than $300 is given in [23]. The robots in this overview either move by slip-stick forward motion, e.g., [28, 29], or are differential wheeled robots, e.g., [27, 12, 6, 26, 18, 33, 23, 31]. Labs that include vehicles with Ackermann steering geometry are presented in, e.g., [17, 8, 11, 20, 10]. Larger model-scale vehicles typically carry more onboard sensors, e.g., lidar sensors and cameras, and more computation power, but are more costly and need more space to operate, e.g., [25]. Communications between the vehicles include Bluetooth and WLAN.

In order to provide a testing platform that suits rapid functional prototyping approaches, we also provide a simulator of the CPM Lab and all its components using the same interfaces. This enables the seamless use of the same software in simulations and in experiments without any adaptations. The CPM Lab can test the networked system in a model-, software-, processor-, or hardware-in-the-loop scheme, referred to as X-in-the-Loop (XiL).

The remainder of the paper is structured as follows. Firstly, Section II gives a system overview of the CPM Lab containing all important modules. Section III shows the architecture and describes the interaction between all modules of the CPM Lab. Section IV introduces the operation of the CPM Lab as testing platform and Section V demonstrates the effectiveness of our logical execution time approach in a case study. Finally, Section VI concludes the paper.

II System Overview

Fig. 2: An overview of the CPM Lab.

Figure 2 shows a schematic overview of the CPM Lab. It consists of

  1. 20 model-scale vehicles,

  2. a camera for the indoor positioning system,

  3. external computation devices,

  4. a main computer to control and monitor experiments,

  5. a map containing the road structure, and

  6. a router for the communications.

The 1:18 scale vehicles have a length of 220 mm, a width of 107 mm and a height of 70 mm. The maximum speed is 3.7 m/s. Figure 3 shows one of the vehicles. The basis for the vehicles is the XRAY M18 Pro LiPo platform [34].

Fig. 3: A picture of a vehicle.

Figure 4 depicts the hardware architecture of a vehicle. We developed a Printed Circuit Board (PCB) that integrates the electronic components, see Figure 5. It consists of an ATmega 2560 microcontroller, a Raspberry Pi Zero W, an odometer, an IMU, and a motor driver. The ATmega 2560 and the Raspberry Pi are used for the computations described in the architecture in Section III. The odometer and the IMU measure the speed, acceleration, and yaw rate. The odometer is composed of three hall-effect sensors that measure the rotation of a diametrically polarized magnet attached to the motor shaft. The motor driver controls the motor voltage through pulse width modulation. We use a servo to steer the vehicle. The 3500 mAh battery allows for a five-hour runtime. A battery protection circuit prevents the battery from discharging below a threshold, which would damage it. The vehicle hardware is described in detail in [30].
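As a sketch of how such a hall-effect odometer reading can be turned into a speed estimate, the function below converts a tick count over a sampling interval into a vehicle speed. The ticks per motor revolution, gear ratio, and wheel circumference are illustrative placeholders, not the vehicle's actual calibration values.

```python
def speed_from_odometer(tick_delta, dt,
                        ticks_per_motor_rev=6,      # assumption: 3 hall sensors, 2 edges each
                        gear_ratio=5.5,             # assumption: motor-to-wheel reduction
                        wheel_circumference=0.15):  # assumption: meters per wheel revolution
    """Convert odometer ticks counted over dt seconds into speed in m/s."""
    motor_revs = tick_delta / ticks_per_motor_rev
    wheel_revs = motor_revs / gear_ratio
    return wheel_revs * wheel_circumference / dt
```

With the placeholder values, 33 ticks in 0.1 s correspond to exactly one wheel revolution, i.e., 1.5 m/s.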

Fig. 4: Hardware architecture of a vehicle.
Fig. 5: The PCB of a vehicle.

We developed a vision-based Indoor Positioning System (IPS) that computes the poses of the vehicles. In order to keep the costs and computation requirements of the vehicles low, the poses are computed externally on the main computer. The IPS consists of a Basler acA2040 grayscale camera module that is mounted 3.3 m above the track and LEDs attached to each vehicle. Each vehicle is equipped with four LEDs, see Figure 3. Three LEDs are used to determine the pose of the vehicle. In order to map poses to vehicles, the IPS also identifies the vehicles using the fourth LED, which flashes at a frequency unique to each vehicle. The IPS identifies the LEDs in a 50 Hz image stream from the camera. We set a low exposure time to achieve a high contrast of the LEDs against the ambient light, so the LED spots are clearly identified through image processing. The three outer LEDs form a non-equilateral triangle, see Figure 3, so the poses can be computed unambiguously. The poses are computed for the center of mass of the vehicles. Our IPS is described in more detail in [16].
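The frequency-based identification can be illustrated with a small sketch: given a boolean on/off detection sequence for one LED, sampled at the 50 Hz camera rate, the flash frequency follows from the number of on/off transitions, and the vehicle is the one whose known ID frequency is closest. The function names and the counting approach are illustrative; the actual IPS is described in [16].

```python
def estimate_flash_frequency(detections, fps=50.0):
    """Estimate an LED's flash frequency (Hz) from a boolean on/off
    sequence sampled at the camera frame rate. One full flash cycle
    contains two transitions (on->off and off->on)."""
    transitions = sum(a != b for a, b in zip(detections, detections[1:]))
    duration = (len(detections) - 1) / fps
    return transitions / 2.0 / duration

def identify_vehicle(detections, id_frequencies, fps=50.0):
    """Map an estimated flash frequency to the closest known vehicle ID.

    id_frequencies: dict of vehicle ID -> assigned flash frequency (Hz)."""
    f = estimate_flash_frequency(detections, fps)
    return min(id_frequencies, key=lambda vid: abs(id_frequencies[vid] - f))
```

For example, an LED flashing at 5 Hz appears as five frames on, five frames off at 50 fps, and is matched to the vehicle assigned that frequency.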

In order to enable rapid functional prototyping, we provide external computation entities to each vehicle. These external computation entities are Intel NUCs, equipped with i5 processors and 16 GB of RAM each.

We constructed the map to fit the vehicles' dynamics and our space requirements. Since the steering angle changes continuously, the roadway should be twice continuously differentiable [19]. The maximum steering angle of the vehicles limits the maximum curvature of the road. For space reasons, the map is limited to 4 m x 4.5 m. To keep the space requirements low, the lanes are narrow but fit the width of the vehicles. The roads are for visualization only and are not detected by any mechanism of the CPM Lab. The digital representation of the map, however, is used, e.g., for decision-making.
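The curvature limit imposed by the steering geometry follows from the kinematic bicycle model, where the curvature is kappa = tan(delta) / L for steering angle delta and wheelbase L. The numeric values below are hypothetical, not the vehicles' specification.

```python
import math

def max_curvature(max_steering_angle_deg, wheelbase_m):
    """Maximum curvature (1/m) a vehicle with Ackermann steering can
    follow, from the kinematic bicycle model: kappa = tan(delta) / L."""
    return math.tan(math.radians(max_steering_angle_deg)) / wheelbase_m

# Hypothetical values for a small model-scale vehicle (not from the paper):
kappa_max = max_curvature(30.0, 0.15)   # 1/m
min_turn_radius = 1.0 / kappa_max       # m
```

The minimum turning radius of the vehicles then bounds how tight the curves on the map may be.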

Figure 6 shows the framework architecture of the CPM Lab. It follows the Sense, Plan, Act scheme, including infrastructure functionalities.


The infrastructure provides a database of scenarios, called the scenario server. Scenarios include mission plans and the simulation of non-automated traffic participants. The scenario data are stored in the map. The map is used as a database at runtime and includes static data like the road network, dynamic data like the positions of traffic participants, and preview data of planning. In order to simulate real environments, the environment model can be affected by artificial errors and noise in different intensities, e.g., to simulate positioning errors or communication delays.


Each vehicle is equipped with an Inertial Measurement Unit (IMU) and an odometer. The camera-based IPS externally computes the poses, i.e., the positions and orientations of all vehicles, and communicates them to all vehicles.


Planning consists of the modules coordination, decision-making, and verification. The coordination module determines the coupling of the vehicles for the decision-making. The decision-making consists of the submodules routing, behavior, trajectory, and control. The routing submodule plans the route from a start position to an end position. The behavior submodule plans the behavior of the vehicle and the trajectory submodule computes trajectories. Before the trajectories are applied on the vehicle, they are verified to ensure safety aspects, e.g., collision-freedom. Our work in [32] is an example of verification, while the work in [2] is an example of decision-making. The CPM Lab is able to execute the decision-making of multiple vehicles in a centralized manner, or distributed in a parallel, sequential, or hybrid manner.


Act consists of the decision-making submodule control and the physical actuators. The control submodule uses the planned trajectory as input and computes corresponding control inputs, i.e., motor voltage and steering angle. The resulting commands are executed by the motor driver and servo.

Fig. 6: Framework concept. Colors illustrate the logical affiliation. Grey, blue, green and yellow denote infrastructure, Sense, Plan and Act, respectively.

III Architecture

This section introduces the four-layered architecture of the CPM Lab. The CPM Lab is divided into High-Level Controller (HLC), Mid-Level Controller (MLC), Low-Level Controller (LLC), and middleware similar to the experimental setup of [3]. Figure 7 illustrates the architecture and the data exchanged between the different layers. The layers are as follows:

Fig. 7: Illustration of the mapping of the hardware architecture to the logical architecture.

III-A High-Level Controller (HLC)

The HLCs are executed on the Intel NUCs. We provide one HLC for each vehicle. The HLCs, however, are not placed on the vehicles due to space and weight restrictions. The HLCs are responsible for the modules coordination, decision-making, and verification, see Figure 6. The HLCs send trajectories to the MLCs and receive the fused poses of the vehicles from the MLCs. Depending on the vehicles' couplings, the HLCs exchange data for cooperation.

III-B Mid-Level Controller (MLC)

The MLCs are executed on the Raspberry Pis which are mounted on the vehicles. The MLCs provide two modes of operation: direct control and trajectory following. In direct control, the MLCs receive commands of torque and steering angle from the HLC. In trajectory following, the MLC receives trajectory nodes of the form (t_i, p_x,i, p_y,i, v_x,i, v_y,i), where t_i represents the time at which the vehicle should be at position (p_x,i, p_y,i) with velocity (v_x,i, v_y,i) in x and y direction, respectively. The continuous reference trajectory is constructed using cubic Hermite spline interpolation between the trajectory nodes. The use of Hermite interpolation allows the addition of trajectory nodes in real time without affecting the interpolation between previous nodes.

The MLCs implement trajectory-following controllers based on Model Predictive Control (MPC). The MLCs perform sensor fusion of the on-board odometer and IMU measurements with the poses received from the IPS via wireless communications. The computed torques and steering angles are communicated to the LLCs.
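A minimal sketch of the interpolation between two consecutive trajectory nodes, using the standard cubic Hermite basis functions; the node layout (t, px, py, vx, vy) follows the description above, and the function name is illustrative.

```python
def hermite_interpolate(node0, node1, t):
    """Cubic Hermite interpolation between two trajectory nodes.

    Each node is (t, px, py, vx, vy). Returns (px, py, vx, vy) at time t,
    matching positions AND velocities at both nodes, so appending a new
    node never changes the segment between previous nodes."""
    t0, x0, y0, vx0, vy0 = node0
    t1, x1, y1, vx1, vy1 = node1
    dt = t1 - t0
    s = (t - t0) / dt  # normalized time in [0, 1]
    # Hermite basis functions
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    px = h00*x0 + h10*dt*vx0 + h01*x1 + h11*dt*vx1
    py = h00*y0 + h10*dt*vy0 + h01*y1 + h11*dt*vy1
    # derivatives of the basis functions w.r.t. s; divide by dt for d/dt
    d00 = 6*s**2 - 6*s
    d10 = 3*s**2 - 4*s + 1
    d01 = -6*s**2 + 6*s
    d11 = 3*s**2 - 2*s
    vx = (d00*x0 + d10*dt*vx0 + d01*x1 + d11*dt*vx1) / dt
    vy = (d00*y0 + d10*dt*vy0 + d01*y1 + d11*dt*vy1) / dt
    return px, py, vx, vy
```

For two nodes one second apart with matching unit velocity along x, the interpolant reduces to constant-velocity motion, as expected.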

III-C Low-Level Controller (LLC)

The LLCs are executed on the ATmega 2560 microcontrollers on the vehicles. They act as a hardware abstraction layer. The LLCs sample the on-board sensors, convert the sensor signals into data compatible with the MLCs, and send the sensor data to the MLCs. The LLCs apply the torque and steering angle given by the MLCs to the actuators, converting the control inputs into signals compatible with the vehicles' hardware.
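As an illustration of this hardware abstraction, the function below maps a commanded steering angle to a servo PWM pulse width. The calibration constants (center pulse, span, mechanical limit) are placeholders, not the vehicles' actual values.

```python
def steering_to_servo_pulse(angle_deg, max_angle_deg=30.0,
                            center_us=1500, span_us=500):
    """Convert a commanded steering angle into a servo PWM pulse width
    in microseconds. Clamps to the mechanical limits, then maps
    linearly: -max -> center-span, 0 -> center, +max -> center+span.
    All calibration values are illustrative assumptions."""
    angle = max(-max_angle_deg, min(max_angle_deg, angle_deg))
    return center_us + span_us * angle / max_angle_deg
```

Clamping before the linear map ensures that an out-of-range command from the MLC can never drive the servo past its mechanical stop.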

III-D Middleware

The middleware runs on the NUCs and on the Raspberry Pis on the vehicles and synchronizes the HLCs and MLCs. The middleware performs the communications between the HLCs and MLCs. To implement a logical execution time approach [9], we use two time stamps. The first time stamp represents the time of creation of the data. The second time stamp, the valid-after time stamp, defines the time after which the data are valid, i.e., the time for which the computations were executed. The middleware communicates the newest valid data between MLCs and HLCs.
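The two-timestamp scheme can be sketched as follows; the class and function names are illustrative, not the middleware's actual API.

```python
from dataclasses import dataclass

@dataclass
class StampedMessage:
    created: int      # ms, time of creation of the data
    valid_after: int  # ms, time after which the data may be used
    payload: object

def newest_valid(messages, now_ms):
    """Return the newest message that is already valid at now_ms,
    or None if no message is valid yet."""
    valid = [m for m in messages if m.valid_after <= now_ms]
    return max(valid, key=lambda m: m.created) if valid else None
```

A message created early but stamped valid-after a later time is ignored until that time is reached, which is what makes all receivers switch to the new data simultaneously.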

Fig. 8: Middleware synchronization algorithm flowchart. The blue tasks are done by the middleware and the yellow task is done by the HLC.

Figure 8 illustrates the middleware procedure in a flowchart. The blue steps are performed by the middleware and the yellow step is performed by the HLCs. Each HLC receives an initial start time of the experiment and is initialized with a period length, i.e., the maximum computation time of the HLCs, and an offset used for sequential planning. The deadline for each computation of the HLCs is the start of the next period. When the real-time clock of the NUCs exceeds the trigger time, the middleware checks whether the HLC inputs, i.e., the data of the vehicles' poses, are new. If the HLC inputs are outdated, an error is logged. The HLCs are then triggered to start the decision-making module. After the HLCs finish their computations, the middleware checks whether the deadline was missed and sends the results to the MLCs and the other HLCs. The code of the HLCs can therefore be kept lightweight, without having to handle deadlines and synchronization; these tasks are done by the middleware. After publishing the computation results of the HLCs, the middleware sets the new deadline. If the computation of an HLC took too long to meet the deadline, i.e., if the clock time is past the next start time, an error is logged and the deadline is adjusted to fit the next full period.
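The deadline adjustment at the end of the flowchart amounts to rounding the current clock time up to the next full period; a sketch with illustrative names, including the offset used for sequential planning:

```python
def next_deadline(t_start, period, now, offset=0):
    """Next HLC trigger time after `now` (all times in ms): the first
    point t_start + k*period + offset that is not in the past. When a
    deadline was missed, this skips ahead to the next full period."""
    elapsed = now - t_start - offset
    k = max(0, -(-elapsed // period))  # ceiling division
    return t_start + k * period + offset
```

With a 340 ms period starting at 0, a computation finishing at 350 ms has missed the 340 ms deadline, and the schedule resumes at 680 ms rather than drifting.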

The middleware is based on DDS, a standardized protocol for decentralized communications in distributed systems based on the publish-subscribe pattern [22]. Besides being used in safety-critical systems, such as medical devices and air traffic control [21], DDS is also entering the automotive domain as part of the upcoming AUTOSAR Adaptive platform [7]. The protocol offers a variety of configurable Quality-of-Service (QoS) parameters, including dependable or best-effort communications. In contrast to the widespread Robot Operating System (ROS) [24], DDS does not rely on a designated entity for service discovery or binding, which makes the resulting architecture more robust. At its core, DDS uses the User Datagram Protocol (UDP), which leads to lower communication latencies than middlewares based on the Transmission Control Protocol (TCP), such as ROS or Message Queuing Telemetry Transport (MQTT) [1]. Through the use of DDS, a variable number of vehicles can be part of experiments without having to adapt the underlying communications architecture. Additionally, the CPM Lab architecture becomes more adaptable for extensions through the dynamic coupling of components in the communications architecture. Various commercial and open-source implementations of DDS are available, e.g., by eProsima (http://www.eprosima.org); we use the RTI Connext DDS implementation.

III-E Layer Composition

Figure 9 summarizes the composition of the four layers in an example timing chart of the communications between the HLC, middleware, MLC, and LLC of one vehicle. The middleware triggers the HLC periodically, which is the starting signal for the HLC. When the HLC finishes its computations, the middleware sends the resulting trajectory to the MLC. When the trajectory becomes valid, i.e., when the valid-after time is exceeded, the MLC computes the torque and steering depending on the vehicle's current pose and the trajectory computed by the HLC. The LLC periodically receives torque and steering control signals from the MLC and sets the inputs of the actuators accordingly. In each period, the LLC reads the sensor data and sends them to the MLC, where the sensor data are used for state estimation of the pose. The current state is then communicated from the MLC to the middleware. Figure 10 illustrates the communications structure.

Fig. 9: An example of a timing chart of the messages between HLC, middleware, MLC and LLC of one vehicle.
Fig. 10: Illustration of the communications flow. The yellow arrows denote the externally computed vehicle poses. The blue arrows denote the timing signals and distribution of initial parameters by the LCC. The green arrows denote the vehicle states, containing position, orientation, speed and yaw angle. The red and black arrows denote vehicle commands. Red arrows follow the trajectory scheme, while the black arrows exist only in case of direct command mode.

IV Lab Control Center (LCC)

The LCC is the user interface of the CPM Lab. It deploys the software under test with all relevant parameters to the HLCs and performs startup and shutdown routines to set up the network and the CPM Lab. Figure 11 shows a screenshot of the LCC and its visualization of the map with all vehicles and their important information, e.g., battery charge. The visualization can display both simulated and real vehicles at the same time.

Fig. 11: A screenshot of the LCC. The CPM Lab uses real-world vehicles or simulated vehicles depending on the selection on the right. Real-world and simulated vehicles can be used in the same experiment.

Besides observing the Lab's state, the LCC enables interaction with the vehicles in three different modes. First, the user can execute the software under test for an experiment. Second, a drag-and-drop feature in the visualization allows moving single vehicles along the dragged path with constant velocity. Third, the user can manually control a vehicle with a joystick. The LCC can deploy the software to the remote hardware, i.e., the NUCs, or execute it locally on a single computer. After each experiment, the measured data are aggregated in the LCC for automatic evaluation and to support debugging. In the following, we present the automation of experiments in the CPM Lab and how to use the CPM Lab as a test platform.

IV-1 Automation of Experiments

For convenient experimentation, the CPM Lab automates the experiment setup and evaluation. The user selects the scenario, the decision-maker that should be tested, and optional parameters using the LCC. The LCC deploys the scenario, the software under test, and all parameters to the NUCs and the vehicles. Then, the user starts the experiment. Each NUC and each vehicle collects experiment data, e.g., poses and computation times, and sends the data to the LCC after the experiment. The LCC aggregates, evaluates, and visualizes the data. The data as well as evaluation plots can be exported after the experiment.

We developed a method for vehicles driving automatically towards predefined poses in [13]. We will integrate this method into the CPM Lab to simplify experiments, i.e., to automatically drive the vehicles to their starting poses of experiments.

IV-2 Test Platform

The CPM Lab can be used as a test and experiment platform to extend simulations. We are currently developing remote access to the CPM Lab in order to enable experiments without personal presence. We will also provide the code and construction plans online to enable rebuilding the CPM Lab. In order to be able to test decision-making algorithms without the physical CPM Lab, we provide a simulation environment that simulates all parts of the CPM Lab, so the same code can be used for simulations and experiments. The testing scenarios will be compatible with the CommonRoad [4] format, an extension of Lanelets [5] to define common evaluation scenarios. Due to the open access of the CPM Lab and the use of CommonRoad benchmark scenarios, the CPM Lab will be a convenient experimental platform to benchmark decision-making software in experiments.

V Case Study

This section demonstrates a case study with four vehicles driving on the map shown in Figure 11. The map contains a highway, on- and off-ramps, and a four-way intersection. At the intersection, the vehicles choose a random lane as route and plan their trajectories accordingly. Collisions are avoided by speed adaptation, while the steering angle is set to stay in lane. The trajectories are planned in a distributed manner on the NUCs and the vehicles receive trajectory commands. Our methods presented in [15, 14] are examples of distributed applications.

Trajectory source | computation finished | valid-after time stamp | middleware received | vehicle received | vehicle applied
HLC 1 | 278 | 340 | 292 | 329 | 340
HLC 2 | 277 | 340 | 292 | 328 | 340
HLC 3 | 277 | 340 | 292 | 325 | 340
HLC 4 | 276 | 340 | 292 | 326 | 340
HLC 1 | 611 | 680 | 643 | 667 | 680
HLC 2 | 611 | 680 | 642 | 659 | 680
HLC 3 | 611 | 680 | 643 | 667 | 680
HLC 4 | 611 | 680 | 642 | 651 | 680
TABLE I: Time stamps (in ms) of two consecutive planning steps of the case study experiment. The timings are related to Figure 9.

Table I shows the time stamps of two consecutive planning phases recorded in the case study experiment to show the effectiveness of the layered architecture consisting of HLC, MLC, LLC, and middleware, see Figure 9. The planning period has a length of 340 ms and time starts at 0 ms. The columns show the following information:

  • Column 1: The HLC that plans the trajectory

  • Column 2: The time the HLC finished the computation of the trajectory

  • Column 3: The valid-after time stamp of the trajectory

  • Column 4: The time the middleware received the trajectory

  • Column 5: The time the MLC received the trajectory

  • Column 6: The time at which the LLC receives the trajectory, i.e., when the trajectory is applied in the vehicle

Each row shows the timings for one step from trajectory planning to the application of the control inputs in the vehicles. Each trajectory is shown in a separate row. Rows 2 to 5 show the first planning phase and rows 6 to 9 show the second planning phase.

The times at which HLC 1 to HLC 4 finish their trajectory computations in planning phase one differ by 2 ms. All trajectories have the same valid-after time stamp, i.e., 340 ms. The middleware received all trajectories at 292 ms and the receive times of the vehicles differ by 4 ms. The vehicles have an on-board cycle time of 20 ms. At their next cycle after 340 ms, i.e., when the trajectory becomes valid, they synchronously apply the new trajectory data. In the second planning phase, the HLCs finish their computations at 611 ms. The valid-after time of all trajectories is 680 ms. The times at which the middleware receives the trajectories differ by 1 ms and the vehicles receive the trajectories between 651 ms and 667 ms. If the valid-after time were not used, vehicles two and four would apply the new trajectories at 660 ms, i.e., one cycle time before vehicles one and three apply the trajectories at 680 ms. With a higher variance of the computation times of the HLCs, this difference in application cycles may grow. As this may lead to unexpected behavior, we mitigate this phenomenon with the common valid-after time, so all vehicles apply the new trajectory synchronously at the same point in time. This leads to deterministic and reproducible experiments.
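The synchronization effect can be reproduced from Table I with a short calculation: a vehicle applies a trajectory at its first 20 ms on-board cycle at or after both the receive time and the valid-after time. The helper below is an illustrative sketch, not the lab's code.

```python
def apply_time(receive_ms, valid_after_ms, cycle_ms=20):
    """Time at which a vehicle applies a trajectory: the first on-board
    cycle at or after both the receive time and the valid-after time."""
    earliest = max(receive_ms, valid_after_ms)
    return -(-earliest // cycle_ms) * cycle_ms  # round up to next cycle

# Second planning phase of Table I, receive times of vehicles 1..4:
receives = (667, 659, 667, 651)
with_valid_after = [apply_time(r, 680) for r in receives]  # -> [680, 680, 680, 680]
without_valid_after = [apply_time(r, 0) for r in receives]  # -> [680, 660, 680, 660]
```

Without the common valid-after time, vehicles two and four would switch one 20 ms cycle earlier than vehicles one and three; with it, all four switch at 680 ms.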

VI Conclusion

This paper presented the CPM Lab, a seamless development environment for networked and autonomous vehicles. We presented our four-layered architecture that enables the use of the same software in simulations and experiments without any adaptations. Our middleware allows adapting the number of vehicles during experiments and simulations. The CPM Lab can extend experiments with the 20 model-scale vehicles by an unlimited number of additional simulated vehicles. We developed an architecture of HLC, MLC, LLC, and middleware to apply new trajectories in the vehicles deterministically and synchronously, following a logical execution time approach. Furthermore, we developed the vehicles based on a model-scale RC platform and an IPS that computes the poses of the vehicles on the map. Due to its ability to simulate the vehicles and to test the decision-making software locally on a computer or distributed on several computation units, the CPM Lab provides several ways of XiL testing. Different error and noise intensities allow testing networked algorithms under different conditions and evaluating their robustness. We used the CPM Lab in two practical courses in different study programs with 30 students each.

VI-A Outlook

We are developing remote access to the CPM Lab. This will allow researchers and students to use the CPM Lab without personal presence. Furthermore, we will provide the code and construction plans online to enable rebuilding the CPM Lab.

For more convenient experiments, we will implement our method from [13] in the CPM Lab to automatically drive the vehicles to their starting poses for experiments.


This research is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the Priority Program SPP 1835 “Cooperative Interacting Automobiles” and the Post Graduate Program GRK 1856 “Integrated Energy Supply Modules for Roadbound E-Mobility”.


  • [1] IBM and Eurotech (2010) MQTT v3.1 protocol specification. Cited by: §III-D.
  • [2] B. Alrifaee, F. Heßeler, and D. Abel (2016) Coordinated non-cooperative distributed model predictive control for decoupled systems using graphs. IFAC-PapersOnLine 49 (22), pp. 216–221. Cited by: §II.
  • [3] B. Alrifaee (2017) Networked model predictive control for vehicle collision avoidance: vernetzte modellbasierte prädiktive regelung zur kollisionsvermeidung von fahrzeugen. Ph.D. Thesis, RWTH Aachen University. Cited by: §III.
  • [4] M. Althoff, M. Koschi, and S. Manzinger (2017) CommonRoad: Composable benchmarks for motion planning on roads. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 719–726. Cited by: §IV-2.
  • [5] P. Bender, J. Ziegler, and C. Stiller (2014) Lanelets: Efficient map representation for autonomous driving. In 2014 IEEE Intelligent Vehicles Symposium Proceedings, pp. 420–425. Cited by: §IV-2.
  • [6] R. K. Cole (2012) STEM outreach with the boe-bot. In Robots in K-12 Education: A New Technology for Learning, pp. 245–265. Cited by: §I.
  • [7] S. Fürst and M. Bechter (2016) AUTOSAR for connected and autonomous vehicles: the autosar adaptive platform. In 2016 46th annual IEEE/IFIP international conference on Dependable Systems and Networks Workshop (DSN-W), pp. 215–217. Cited by: §III-D.
  • [8] J. Gonzales, F. Zhang, K. Li, and F. Borrelli (2016) Autonomous drifting with onboard sensors. In Proceedings of the 13th International Symposium on Advanced Vehicle Control (AVEC), Cited by: §I.
  • [9] T. A. Henzinger, B. Horowitz, and C. M. Kirsch (2001) Giotto: A time-triggered language for embedded programming. In International Workshop on Embedded Software, pp. 166–184. Cited by: §III-D.
  • [10] N. Hyldmar, Y. He, and A. Prorok (2019-02) A Fleet of Miniature Cars for Experiments in Cooperative Driving. arXiv e-prints, pp. arXiv:1902.06133. External Links: 1902.06133 Cited by: §I.
  • [11] S. Karaman, A. Anders, M. Boulet, J. Connor, K. Gregson, W. Guerra, O. Guldner, M. Mohamoud, B. Plancher, R. Shin, et al. (2017) Project-based, collaborative, algorithmic robotics for high school students: programming self-driving race cars at mit. In Integrated STEM Education Conference (ISEC), Cited by: §I.
  • [12] S. Kernbach (2011) Swarmrobot.org – open-hardware microrobotic project for large-scale artificial swarms. arXiv preprint arXiv:1110.5762. Cited by: §I.
  • [13] M. Kloock, L. Kragl, J. Maczijewski, B. Alrifaee, and S. Kowalewski (2019) Distributed model predictive pose control of multiple nonholonomic vehicles. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 1620–1625. Cited by: §IV-1, §VI-A.
  • [14] M. Kloock, P. Scheffe, L. Botz, J. Maczijewski, B. Alrifaee, and S. Kowalewski (2019) Networked model predictive vehicle race control. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 1552–1557. Cited by: §V.
  • [15] M. Kloock, P. Scheffe, S. Marquardt, J. Maczijewski, B. Alrifaee, and S. Kowalewski (2019) Distributed model predictive intersection control of multiple vehicles. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 1735–1740. Cited by: §V.
  • [16] M. Kloock, P. Scheffe, I. Tülleners, J. Maczijewski, B. Alrifaee, and S. Kowalewski (2020) Vision-based real-time indoor positioning system for multiple vehicles. arXiv preprint arXiv:2002.05755. Cited by: §II.
  • [17] A. Liniger, A. Domahidi, and M. Morari (2014-07) Optimization-based autonomous racing of 1:43 scale RC cars. Optimal Control Applications and Methods. External Links: Document Cited by: §I.
  • [18] J. McLurkin, A. McMullen, N. Robbins, G. Habibi, A. Becker, A. Chou, H. Li, M. John, N. Okeke, J. Rykowski, et al. (2014) A robot system design for low-cost multi-robot manipulation. In 2014 IEEE/RSJ international conference on intelligent robots and systems, pp. 912–918. Cited by: §I.
  • [19] D. Meek and D. Walton (1992) Clothoid spline transition spirals. Mathematics of computation 59 (199), pp. 117–133. Cited by: §II.
  • [20] M. O’Kelly, V. Sukhil, H. Abbas, J. Harkins, C. Kao, Y. Vardhan Pant, R. Mangharam, D. Agarwal, M. Behl, P. Burgio, and M. Bertogna (2019-01) F1/10: An Open-Source Autonomous Cyber-Physical Platform. arXiv e-prints, pp. arXiv:1901.08567. External Links: 1901.08567 Cited by: §I.
  • [21] Object Management Group (2019) Who's Using DDS?. Note: [Online] Cited by: §III-D.
  • [22] G. Pardo-Castellote (2003) OMG data-distribution service: Architectural overview. In 23rd International Conference on Distributed Computing Systems Workshops, 2003. Proceedings., pp. 200–206. Cited by: §III-D.
  • [23] L. Paull, J. Tani, H. Ahn, J. Alonso-Mora, L. Carlone, M. Cap, Y. F. Chen, C. Choi, J. Dusek, Y. Fang, et al. (2017) Duckietown: an open, inexpensive and flexible platform for autonomy education and research. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1497–1504. Cited by: §I.
  • [24] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng (2009) ROS: an open-source Robot Operating System. In ICRA workshop on open source software, Vol. 3, pp. 5. Cited by: §III-D.
  • [25] M. Reiter, M. Wehr, F. Sehr, A. Trzuskowsky, R. Taborsky, and D. Abel (2017-07) The IRT-buggy – vehicle platform for research and education. IFAC-PapersOnLine. External Links: Document Cited by: §I.
  • [26] F. Riedo, M. Chevalier, S. Magnenat, and F. Mondada (2013) Thymio ii, a robot that grows wiser with children. In 2013 IEEE Workshop on Advanced Robotics and its Social Impacts, pp. 187–193. Cited by: §I.
  • [27] P. Robinette, R. Meuth, R. Dolan, and D. Wunsch (2009) LabRat: miniature robot for students, researchers, and hobbyists. In The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 11-15, 2009, St. Louis, USA. Cited by: §I.
  • [28] M. Rubenstein, C. Ahler, and R. Nagpal (2012) Kilobot: a low cost scalable robot system for collective behaviors. In 2012 IEEE International Conference on Robotics and Automation, pp. 3293–3298. Cited by: §I.
  • [29] M. Rubenstein, B. Cimino, R. Nagpal, and J. Werfel (2015) AERobot: an affordable one-robot-per-student system for early robotics education. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 6107–6113. Cited by: §I.
  • [30] P. Scheffe, M. Kloock, A. Derks, J. Maczijewski, B. Alrifaee, and S. Kowalewski (2020) Networked and autonomous model-scale vehicles for experiments in research and education. IFAC. Note: IFAC World Congress, accepted Cited by: §II.
  • [31] A. Stager, L. Bhan, A. Malikopoulos, and L. Zhao (2017) A scaled smart city for experimental validation of connected and automated vehicles. arXiv preprint arXiv:1710.11408. Cited by: §I.
  • [32] M. Völker, M. Kloock, L. Rabanus, B. Alrifaee, and S. Kowalewski (2019) Verification of cooperative vehicle behavior using temporal logic. IFAC-PapersOnLine 52 (8), pp. 99–104. Cited by: §II.
  • [33] S. Wilson, R. Gameros, M. Sheely, M. Lin, K. Dover, R. Gevorkyan, M. Haberland, A. Bertozzi, and S. Berman (2016) Pheeno, a versatile swarm robotic research and education platform. IEEE Robotics and Automation Letters 1 (2), pp. 884–891. Cited by: §I.
  • [34] XRAY (2010-12) M18 PRO LiPo. Note: Website, viewed 31 October 2019. External Links: Link Cited by: §II.