Networked and Autonomous Model-scale Vehicles for Experiments in Research and Education

04/17/2020, by Patrick Scheffe et al.

This paper presents the μCar, a 1:18 model-scale vehicle with Ackermann steering geometry developed for experiments in networked and autonomous driving in research and education. The vehicle is open source, moderately priced and highly flexible, which allows for many applications. It is equipped with an inertial measurement unit and an odometer and obtains its pose via WLAN from an indoor positioning system. The two supported operating modes for controlling the vehicle are (1) computing control inputs on external hardware, transmitting them via WLAN and applying the received inputs to the actuators, and (2) transmitting a reference trajectory via WLAN, which is then followed by a controller running on the onboard Raspberry Pi Zero W. The design allows many identical vehicles to be used at the same time in order to conduct experiments with a large number of networked agents.

Supplementary material

A demonstration video of this work is available at https://youtu.be/aH1Q8AKXmUs.

The vehicle software, the bill of materials and a production tutorial are referenced from our website http://cpm.embedded.rwth-aachen.de.

1 Introduction

Research on networked and autonomous vehicles has been ongoing for multiple decades. When new methods are developed, the necessity of testing them arises. This can be done with little effort in simulation, as in naumann2018. The significance of simulation results is limited, however, as only those aspects of reality that are modeled are considered. Experiments at true scale are more meaningful, but they require high effort and are expensive, especially when testing methods for networked vehicles, as multiple test platforms are required. Midway between those options, methods can be tested on scaled testbeds. In scaled experiments, many challenges of the true-scale problem are apparent, e.g. communication delays and losses, synchronization problems or actuator dynamics. Another benefit compared to true-scale experiments is that setting up an experiment is simpler and quicker, which allows for rapid development cycles.

The curriculum at a university should prepare students for research on networked and autonomous vehicles. This includes, for example, the design of algorithms for embedded hardware, the design of controllers for nonlinear systems, or the coupling of networked agents for collision avoidance. Seeing an algorithm they developed themselves run in an experiment fills students with enthusiasm for learning control concepts by applying them to the CPM system. The modified model-scale vehicle proposed in this paper enables such experiments.

This paper is structured as follows. Section 2 compares model-scale vehicles with Ackermann steering geometry from the literature. Section 3 describes how we transform a model-scale race car into a networked and autonomous vehicle with off-the-shelf components and a custom printed circuit board. The lab environment in which the vehicles operate is sketched in section 4. In section 5, examples are given of how the vehicles can be used in control education.

2 Existing platforms

In the last decade, a number of model-scale testbeds have been developed. In paull2017, 15 platforms for education and research with a cost lower than $] are compared. They differ from the model-scale vehicle we present, as they are wheeled differential-drive platforms or platforms with slip-stick forward motion.

In table 1, an overview of recently developed model-scale vehicles with Ackermann steering geometry is given. With scales of 1:43 and 1:24, respectively, the ORCA Racer liniger2014 and the Cambridge Minicar hyldmar2019 are smaller than the vehicle presented in this work. The ORCA Racer is based on the Kyosho dNaNo RC race car, but substitutes its original board with a custom PCB. This board features an ARM Cortex-M4 processor, Bluetooth communication and an IMU. The vehicles are designed to receive externally computed control inputs via Bluetooth and apply these inputs with an onboard LLC. The Cambridge Minicar is based on the CMJ RC Cars Range Rover Sport and is controlled by a Raspberry Pi Zero W. These vehicles receive externally computed control inputs via broadband radio.
The BARC from gonzales2016, the MIT Racecar from karaman2017 and the F1/10 from okelly2019 share the scale of 1:10. The mechanical base of all three vehicles is a Traxxas rally car. At this size, the vehicles are capable of carrying more computational power and more sensors in addition to an IMU. In the BARC, four rotary encoders are installed for speed measurement and a camera is mounted; optionally, a lidar and a GNSS receiver can be installed. The HLC and main computing unit is an ODROID-XU4, while the LLC tasks, i.e. sensor reading and actuator control, are performed by an Arduino Nano. The setup of the MIT Racecar and the F1/10 is similar. The speed is provided by a VESC electronic speed controller, and optional sensors include a 3D stereo camera and a lidar. The main computing element is the Nvidia Jetson Tegra X1. The greater computing power and additional sensors allow for onboard autonomy. This is also a reason why these setups cost around $]. At the scale of 1:10, a lot of space is required for indoor experiments on cooperative driving with multiple vehicles. Due to the cost and the size of the platforms, indoor experiments with a large number of vehicles are difficult.
At the largest scale of 1:5, the GATech AutoRally from williams2016 and the IRT buggy from reiter2014 and reiter2017 are designed for outdoor experiments. The AutoRally is equipped with two forward-facing cameras, a Lord Microstrain 3DM-GX4-25 IMU, a GNSS receiver, and wheel speed sensors. The computational power is provided by an Intel quad-core i7 processor, 16 GB of RAM, and an Nvidia GTX 750 Ti graphics card. With this elaborate hardware setup, the AutoRally is used for aggressive driving. The IRT buggy is designed for versatile use. Like the BARC, it separates the HLC and the LLC onto two hardware components. Sensors include a GNSS receiver, an IMU, and two rotary encoders at the rear wheels. Its modular setup allows for additional sensors such as a lidar. Similar to the ORCA Racer, this platform is not open source.

Vehicle name Scale
ETHZ ORCA Racer 1:43
Cambridge Minicar 1:24
µCar 1:18
F1/10 1:10
BARC 1:10
MIT Racecar 1:10
GATech AutoRally 1:5
IRT buggy 1:5
Table 1: Recent model-scale Ackermann-steering platforms

The larger model-scale vehicles are equipped with sensors and computing power that allow for onboard autonomy. The µCar, like the ORCA Racer and the Cambridge Minicar, relies on interaction with a lab environment. This lab environment provides the positioning of the vehicles and therefore substitutes the GNSS of a real-world experiment. In the case of the Cambridge Minicar, this is done with an OptiTrack motion capture system that requires multiple cameras, while the lab environment of the ORCA Racer uses only one camera, similar to our CPM lab. In contrast to those two labs, our vehicles offer, in addition to the option of sending control inputs to the vehicle, a trajectory-following mode in which an onboard controller determines the control inputs necessary to follow a given trajectory.

3 Vehicle setup

Figure 1: The µCar, a 1:18 model-scale vehicle

The model-scale vehicle presented here is shown in fig. 1. It is an Ackermann-steered, non-holonomic mobile robot at a scale of 1:18 compared to a typical passenger vehicle. Its length is , its width , its height , its wheelbase , and its weight . The vehicle has a maximum speed of . The power consumption in standby (without steering or acceleration) is . In experiments, the battery powers the car for about five hours. table 2 lists the components used in the model-scale vehicle. The cost calculation refers to an order of 20 vehicles: a single PCB would cost , but ordering a panel cluster with 20 PCBs on one board reduces the unit price to . Assembling a vehicle takes one person around six hours.

Item Application Cost [€]
XRAY M18 Pro Mechanical platform
Gens ace 3500mAh LiPo Battery
NF113LG-011 Motor
Hitec D89MW Servo
PCB Board
Raspberry Pi Zero W MLC
8GB SD Card Memory
ATmega2560 LLC
Pololu VNH5019 Motor Driver
DeboSens BNO055 IMU
Electronic parts
SUM
Table 2: Components used in the µCar; cost rounded to the next integer

Using an off-the-shelf mechanical platform allows for a quick start in building a networked and autonomous model-scale vehicle. We use the mechanical components of the XRAY M18 Pro LiPo, a 1:18 micro car that is designed to hold a battery, a servo motor for steering and a motor for propulsion. The motor drives all four wheels, as its shaft is connected to each of them through differentials. The minimum turning radius given by the mechanical design is approximately .

The vehicle’s hardware architecture is illustrated in fig. 2. A Raspberry Pi Zero W takes the role of the MLC on the vehicle. It is responsible for the communication with the HLC via WLAN, as described in section 4, and for clock synchronization using NTP. Additionally, the MLC fuses the sensor data to obtain an accurate localization. The MLC also supplies the LLC with control inputs. This is realized either by forwarding control inputs received via WLAN or by running a controller for trajectory following, as described below. The tasks on the Raspberry Pi are repeated at a frequency of , i.e. with a time interval of .
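
To illustrate the cyclic structure of these MLC tasks, the following minimal Python sketch shows a fixed-period loop. All function names, stub bodies and the period value are placeholders of our own; the actual MLC software of the vehicle is written in C++ and is not reproduced here.

```python
# Illustrative sketch of the MLC's periodic task cycle; names and values are placeholders.
import time

PERIOD_S = 0.02  # placeholder cycle time, not the value used on the vehicle

def receive_wlan():
    """Stub: would return a reference trajectory or direct control inputs received via WLAN."""
    return {"mode": "trajectory", "points": []}

def spi_exchange(inputs):
    """Stub: would exchange control inputs for sensor readings with the LLC via SPI."""
    return {"odometer_ticks": 0, "imu_yaw_rate": 0.0}

def fuse_sensors(pose, sensors, message):
    """Stub: would fuse the IPS pose (received via WLAN) with IMU and odometer data."""
    return pose

def compute_or_forward_inputs(message, pose):
    """Stub: trajectory-following controller, or pass-through of received control inputs."""
    return {"motor": 0.0, "steering": 0.0}

def run():
    pose = (0.0, 0.0, 0.0)                       # x, y, yaw estimate
    inputs = {"motor": 0.0, "steering": 0.0}
    while True:
        start = time.monotonic()
        message = receive_wlan()
        sensors = spi_exchange(inputs)           # full-duplex exchange with the LLC
        pose = fuse_sensors(pose, sensors, message)
        inputs = compute_or_forward_inputs(message, pose)
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))
```
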
To keep the vehicle hardware flexible and adaptable, we designed a custom PCB connecting the components. This PCB serves as an interface between the actuators, sensors and control electronics and is shown with its components in fig. 3. It embeds an ATmega2560 microcontroller with a clock rate of . This microcontroller represents the LLC, reading the sensor data and applying the control inputs to the actuators. The hardware separation into MLC and LLC introduces a hierarchical architecture, which creates a hardware abstraction layer: even if the MLC is changed, the interface to the hardware stays the same.
At a frequency of , the MLC and the LLC exchange information via SPI. The MLC provides the control inputs, while the LLC returns the sensor readings. A TXB0104 bidirectional voltage-level translator is installed for level adaptation of the SPI bus: the 3.3 V SPI level of the Raspberry Pi is converted into a 5 V SPI signal for the ATmega.
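
As an illustration of such an exchange, the sketch below uses the Python spidev library on a Raspberry Pi. The frame layout (two 16-bit commands out; odometer ticks, yaw rate and motor current back) is an assumption of ours, not the documented protocol of the µCar.

```python
# Hypothetical sketch of the MLC-LLC SPI exchange; the frame layout is assumed.
import struct
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip select 0
spi.max_speed_hz = 1_000_000
spi.mode = 0

def exchange(motor_cmd: int, servo_cmd: int):
    """Send control inputs to the LLC and receive sensor readings in one full-duplex transfer."""
    tx = list(struct.pack("<hh", motor_cmd, servo_cmd)) + [0] * 8   # pad to the assumed frame size
    rx = bytes(spi.xfer2(tx))
    ticks, yaw_rate, current = struct.unpack_from("<ihh", rx, 4)
    return ticks, yaw_rate, current
```
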
The IMU is a DeboSens BNO055, a 9-DOF sensor that provides the required inertial data. The ATmega microcontroller retrieves this data via the two-wire I2C bus.
The motor driver board VNH5019 drives the vehicle's single brushed DC motor via an integrated H-bridge. The ATmega controls the motor driver via a PWM signal with a frequency of up to . A current sensing output provides the ATmega with a signal that is proportional to the current applied to the motor. The power source is a LiPo battery which provides a voltage of . This voltage is fed directly to the motor driver unit. Since the Raspberry Pi and all other components (except the motor driver unit) are specified for or , respectively, the voltage is reduced by an NCP1117 LDO voltage regulator. To protect the LiPo battery as well as the electronic components, a battery protection circuit was added.
Three Hall-effect sensors mounted on a separate odometer board measure the motor shaft rotation. A diametrically polarized magnet is attached to the motor shaft in order to make the rotational motion of the shaft electrically visible. With this setup, it is possible to distinguish six different motor angles per rotation. The digital signals of the Hall sensors are transmitted directly to three I/Os of the ATmega, which translates the signals into rotation ticks.
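
For illustration, the sketch below converts a tick-count difference into a speed estimate; the gear ratio and wheel radius are placeholder values, not the vehicle's actual parameters.

```python
# Sketch: converting odometer ticks into a speed estimate. Six ticks correspond
# to one motor-shaft revolution (three Hall sensors, diametrically polarized magnet).
import math

TICKS_PER_MOTOR_REV = 6
GEAR_RATIO = 10.0        # motor revolutions per wheel revolution (assumed)
WHEEL_RADIUS_M = 0.025   # assumed wheel radius in metres

def speed_from_ticks(delta_ticks: int, delta_t: float) -> float:
    """Estimate the vehicle speed from the tick difference observed over delta_t seconds."""
    wheel_rev = delta_ticks / TICKS_PER_MOTOR_REV / GEAR_RATIO
    return wheel_rev * 2.0 * math.pi * WHEEL_RADIUS_M / delta_t
```
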
Four LEDs are installed on the vehicle, which are also connected to the odometer board and controlled by the ATmega. The outer three LEDs are used for positioning with an IPS, while the inner one communicates the vehicle’s ID.

Figure 2: Vehicle hardware architecture, showing the mid-level controller (Raspberry Pi Zero W), the low-level controller (ATmega2560), the motor driver, IMU, odometer, servo, LEDs, battery and battery protection, connected via SPI, PWM, GPIO and interrupt lines
Figure 3: The PCB on the vehicle with several components installed (IMU, ATmega2560, Raspberry Pi Zero W, motor driver, and the odometer board connector slot)

The vehicles can operate in two different modes: (1) external control and (2) trajectory following. If a trajectory is provided to the vehicle, the MLC determines the control inputs to follow that trajectory. The trajectory is provided as a list of tuples. Usually, a trajectory point is understood as a tuple of time and position. The controller needs reference trajectory points at controller-specific points in time. As the time step between trajectory points is assumed to be larger than the control time step, the MLC interpolates the trajectory to determine a sensible reference point. Because the derivative of the trajectory is fixed in each point, the MLC can interpolate between trajectory points with cubic Hermite splines. Additionally, the MLC-internal reference trajectory does not change when new reference trajectory points are transmitted. If the MLC receives control inputs, it switches to applying those directly to the actuators. This behavior allows for manual control of the vehicle, for example with a gamepad or a keyboard. It is also possible to compute control inputs externally, based on the vehicle state and the reference trajectory, and to send them via WLAN.
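
As an illustration of the interpolation step, the following minimal sketch evaluates a cubic Hermite segment between two trajectory points, each given by a time, a position and the trajectory derivative (velocity) at that time; it is not the vehicle's actual C++ implementation.

```python
# Minimal sketch of cubic Hermite interpolation between two trajectory points.
import numpy as np

def hermite_interpolate(t, t0, p0, v0, t1, p1, v1):
    """Interpolated position at time t between (t0, p0, v0) and (t1, p1, v1)."""
    dt = t1 - t0
    s = (t - t0) / dt                      # normalized time in [0, 1]
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    p0, v0, p1, v1 = map(np.asarray, (p0, v0, p1, v1))
    return h00 * p0 + h10 * dt * v0 + h01 * p1 + h11 * dt * v1

# Example: reference position 15 ms into a 40 ms segment of a 2D trajectory
ref = hermite_interpolate(0.015, 0.0, [0.0, 0.0], [1.0, 0.0],
                          0.04, [0.04, 0.001], [1.0, 0.1])
```
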

4 Environment: CPM lab

Figure 4: CPM lab overview: vehicles communicate via WLAN with their respective computers and the IPS.

As mentioned earlier, the vehicles are used for experiments in a lab environment, visualized in fig. 4, which we call the CPM lab. This lab provides a driving area of about . Communication between the vehicles and this environment is established through DDS (RTI Connext DDS). An IPS provides the vehicles with their pose (position and orientation) with a worst-case accuracy of and . A camera detects the positions of the three LEDs on the vehicle. These LEDs define a vehicle's pose due to their arrangement on the vehicle in a non-equilateral triangle. The vehicle corresponding to a detected pose is identified via a signal code sent by the fourth LED on the vehicle, as shown in kloock2020. Additionally, a reference trajectory or the actuator inputs for the vehicles are sent via WLAN. The vehicle returns its current state, which includes the estimated pose as well as sensor readings and actuator commands.

5 The vehicles in control education

The vehicle’s hierarchical architecture allows students to work at different levels of abstraction.

  1. It is possible to learn the basics of embedded programming when working with the LLC (the ATmega2560). At this level, students need to understand MCU data sheets in order to determine how to read sensors and control actuators correctly in C code.

  2. At the level of the MLC (the Raspberry Pi Zero W), tasks like trajectory control or sensor fusion can be tackled. Measurements of multiple sensors need to be fused for vehicle localization in the proposed setup, which reflects the real-world application. The IPS provides absolute positioning, but its measurement data is transmitted to the vehicle via WLAN, which makes the measurements relatively slow and unreliable. On the other hand, onboard sensors like the IMU and the odometer are fast and accurate over short distances, but need a reference. A controller for trajectory following can be as simple as a PID controller or as advanced as a model predictive controller (MPC). The µCar currently uses MPC for trajectory following. The limited computation power of the MLC still imposes restrictions, which motivates efficient algorithms and a programming language like C++.

  3. On the highest abstraction level, ideas can be developed on an external PC with programming languages common in optimization (e.g. MATLAB, Python). It is possible to work on trajectory planners as well as on external controllers for the vehicles, depending on which mode of operation one wishes to use.

The modularity allows focusing on one specific aspect of networked and autonomous vehicles. The necessary interfaces can be provided with working components, so the content to be taught can be chosen freely and appropriately.

A basis for many control tasks is an appropriate model of the system. A system model is useful, e.g., for simulation or controller design. The purpose of the model defines its requirements: for simulation, the goal might be to represent the system as truthfully as possible, while for a controller using MPC, fast computation might be necessary. Since a system model is a prerequisite for many aspects of control, we show an example of how a model of the model-scale vehicle can be obtained. The goal of this endeavor is to illustrate how the vehicles might serve as a platform for control engineering education.

5.1 Vehicle dynamics model

In this example, we aim for a model that is suitable for MPC of the vehicle's pose and velocity on embedded hardware. The model needs to be simple enough for quick computation, while being accurate enough to predict the states. We propose a kinematic bicycle model with some added terms to account for various errors.

The model has the states and inputs

(1)

where and are the x- and y-position, respectively, is the yaw angle, the speed at the vehicle's rear axle, the dimensionless motor command, the dimensionless steering command, and the battery voltage. The battery voltage is, of course, not an input set by the controller, but it affects the system dynamics.
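
A plausible form of the state and input vectors in (1), with our own symbol names (p_x, p_y: position, psi: yaw angle, v: rear-axle speed, u_m: motor command, u_delta: steering command, U_bat: battery voltage), is the following; the original paper's notation may differ.

```latex
% Hedged reconstruction of (1); symbols are our own choice.
x = \begin{pmatrix} p_x \\ p_y \\ \psi \\ v \end{pmatrix},
\qquad
u = \begin{pmatrix} u_{\mathrm{m}} \\ u_{\delta} \\ U_{\mathrm{bat}} \end{pmatrix}
```
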

Figure 5: Kinematic bicycle model of the vehicle

The model used to describe the vehicle's dynamics is a non-linear kinematic bicycle model according to rajamani2011. Similar to alrifaee2017, it is assumed that no slip occurs on the front and rear wheels, and that no forces act on the vehicle. The velocity dynamics are described with a first-order lag (PT1) behavior, which results in the following equations

(2)

The model variables are illustrated in fig. 5. is the distance from the rear axle to the vehicle's reference point, is the distance between the front and the rear axle, is the steering angle, which is related to the steering command , and are the gain and time constant of the velocity's first-order lag behavior, and is the input velocity, which is modeled as a function of the motor command and the battery voltage . The change of the vehicle's x- and y-position depends on the velocity at the vehicle's reference point. Since the angular velocity is equal at every point of the vehicle, we get

(3)

where and are the radii of the circular movement at the vehicle center and the rear axle, respectively. With (3) and Pythagoras’ theorem, we obtain

(4)
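
For reference, the relations just described plausibly take the following form, corresponding to (2) to (4). The symbols (l_r: distance from rear axle to reference point, L: wheelbase, delta: steering angle, K and T: gain and time constant of the velocity dynamics, v_in: input velocity, v_c and beta: speed and side slip angle at the reference point, r_c and r_r: radii at the reference point and the rear axle) are our own choice and may differ from the paper's notation.

```latex
% Hedged reconstruction of (2)-(4) from the textual description above.
\begin{align*}
\dot p_x &= v_c \cos(\psi + \beta), &
\dot p_y &= v_c \sin(\psi + \beta), &
\dot \psi &= \frac{v}{L}\tan\delta, &
\dot v &= \frac{1}{T}\left(K\, v_{\mathrm{in}}(u_{\mathrm{m}}, U_{\mathrm{bat}}) - v\right),\\
\dot\psi &= \frac{v}{r_{\mathrm{r}}} = \frac{v_c}{r_{\mathrm{c}}}, &
v_c &= v\sqrt{1 + \left(\frac{l_r \tan\delta}{L}\right)^{2}}, &
\beta &= \arctan\!\left(\frac{l_r \tan\delta}{L}\right). &&
\end{align*}
```
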

In order to simplify computational tasks on the model, we can approximate some terms with Taylor series at the point . The side slip angle due to steering is approximated with a first-order Taylor series

(5)

Equation (4) is simplified with a second-order Taylor approximation:

(6)
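
With these quantities, the approximations in (5) and (6) around delta = 0 plausibly read:

```latex
% Hedged reconstruction of (5) and (6): first-order Taylor approximation of the
% side slip angle and second-order Taylor approximation of (4) at delta = 0.
\beta \approx \frac{l_r}{L}\,\delta,
\qquad
v_c \approx v \left(1 + \frac{1}{2}\left(\frac{l_r}{L}\right)^{2} \delta^{2}\right)
```
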

Substituting the model's physical constants with parameters and introducing additional parameters to account for various inaccuracies, the parameterized bicycle model is given by:

(7)

An extra parameter introduced is , which compensates the calibration error between the IPS speed and the odometer speed. and substitute the model parameters in (6) and (5), respectively. takes care of the model parameter as well as the conversion of the steering command to the steering angle. substitutes in the velocity's first-order lag model. The steady-state velocity is modeled as a power function, where the constant factor is represented by and the exponent by . In order to avoid the trouble that comes with negative bases and real exponents, the absolute value of the motor command is used as the base and multiplied by the sign of . As the motor strength depends on the battery voltage, we added the voltage as a multiplying factor with the parameter . is an extra parameter introduced to correct steering misalignment, while accounts for a yaw calibration error in the IPS.
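
One plausible way to write such a parameterization, consistent with the description above but using our own parameter labels p_1 to p_10, is the following; the concrete form and numbering in the paper may differ.

```latex
% Illustrative parameterized bicycle model; p_1 ... p_10 are our own labels.
\begin{align*}
\dot p_x &= p_1\, v \left(1 + p_2\,(u_\delta + p_8)^2\right)\cos\!\left(\psi + p_9 + p_3\,(u_\delta + p_8)\right),\\
\dot p_y &= p_1\, v \left(1 + p_2\,(u_\delta + p_8)^2\right)\sin\!\left(\psi + p_9 + p_3\,(u_\delta + p_8)\right),\\
\dot \psi &= p_4\, v\,(u_\delta + p_8),\\
\dot v &= p_5 \left(\operatorname{sign}(u_{\mathrm{m}})\,\lvert u_{\mathrm{m}}\rvert^{\,p_7}\left(p_6 + p_{10}\, U_{\mathrm{bat}}\right) - v\right).
\end{align*}
```
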

This is an end-to-end, grey-box model for the vehicle dynamics. The model parameters are not measured directly, but optimized to best fit the vehicle behavior as shown in section 5.3.

5.2 Model discretization

The model is discretized with the explicit Euler method, as follows:

(8)

Here, is obtained from the continuous vehicle dynamics model (7). This discretization is chosen for its simplicity and computational efficiency. Measurements are taken at time intervals of . This short time interval partly compensates for the inaccuracies introduced by the method, and the discretization is included in the parameter identification.
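
Denoting the continuous-time dynamics from (7) by f and the sampling interval by Δt, the explicit Euler discretization in (8) takes the standard form (our notation):

```latex
% Explicit Euler discretization; x_k is the state and u_k the input at timestep k.
x_{k+1} = x_k + \Delta t \, f(x_k, u_k)
```
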

5.3 Parameter identification

Since the dynamics of nonholonomic vehicles are nonlinear, model identification procedures for nonlinear systems need to be used. Identifying the vehicle dynamics can be achieved by formulating the task as an optimal parameter estimation problem. The optimization tries to find the set of model parameters that best reproduces the measurement data. A measurement vector at timestep contains:

(9)

Here, and are the IPS x- and y-position, respectively, is the IPS yaw angle, and the odometer speed.

The optimization problem is then given as

(10)
subject to

where are the measured inputs, is the discrete vehicle model as in (8), is the vector of model parameters to , is a constant timestep of and is the error penalty function. Since the vehicle pose lives in , an adequate error metric needs to be used. We used a weighted quadratic error function and accounted for the period of in the yaw error function using .

This kind of optimization problem is not well suited for identifying the delay times. The optimization problem is therefore solved multiple times for combinations of delay times in an outer loop. The delays that create the lowest objective value are taken as the solution.
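
As an illustration of this identification approach, the following Python sketch formulates the prediction-error minimization with SciPy's least_squares. The simplified dynamics, parameter names, weights and synthetic data are our own assumptions; the actual model (7), weighting and implementation in the paper may differ. The delay times would be handled by shifting the input and measurement sequences against each other in an outer loop, as described above.

```python
# Hedged sketch: fitting model parameters to measured data via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

DT = 0.02  # assumed sampling interval

def f(x, u, p):
    """Simplified continuous dynamics; x = (px, py, psi, v), u = (u_motor, u_steer, U_bat)."""
    _, _, psi, v = x
    um, ud, ubat = u
    k_cal, k_beta, k_psi, k_T, k_u, k_exp, k_bat = p
    return np.array([
        k_cal * v * np.cos(psi + k_beta * ud),
        k_cal * v * np.sin(psi + k_beta * ud),
        k_psi * v * ud,
        k_T * (np.sign(um) * abs(um) ** k_exp * (k_u + k_bat * ubat) - v),
    ])

def simulate(x0, U, p):
    """Explicit-Euler rollout of the model over an input sequence U (one experiment)."""
    X = [np.asarray(x0, dtype=float)]
    for u in U:
        X.append(X[-1] + DT * f(X[-1], u, p))
    return np.array(X[1:])

def residuals(p, x0, U, Y):
    """Weighted prediction error against measurements Y = (IPS x, IPS y, IPS yaw, odometer speed)."""
    err = simulate(x0, U, p) - Y
    err[:, 2] = (err[:, 2] + np.pi) % (2.0 * np.pi) - np.pi   # respect the 2*pi period of the yaw error
    return (err * np.array([1.0, 1.0, 0.5, 0.5])).ravel()     # assumed weights

# Minimal usage example with synthetic data; in practice, slices of 100 measured
# data points per experiment are used, and the fit is repeated for different
# input/measurement delay shifts in an outer loop.
p_true = np.array([1.0, 0.25, 2.0, 1.5, 1.0, 0.9, 0.05])
U = np.column_stack([0.3 * np.ones(100),
                     0.2 * np.sin(np.linspace(0.0, 2.0, 100)),
                     7.4 * np.ones(100)])
Y = simulate(np.zeros(4), U, p_true)
p_hat = least_squares(residuals, 0.5 * np.ones(7), args=(np.zeros(4), U, Y)).x
```
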

Figure 6: Driven trajectory for measurement data collection

The measurement data used in the parameter optimization is shown in fig. 6. This data is sliced into parts of 100 consecutive data points, i.e. time intervals of , which are fed to the optimization problem as experiments. The resulting parameters are

(11)

The delays identified are 1 timestep for the IPS data, 0 timesteps for the local measurement information and 5 timesteps for the motor and steering actuation.

6 Conclusion

This paper presented how a regular RC race car can be transformed into a networked and autonomous vehicle using mainly off-the-shelf components. The vehicles are currently used for teaching in multiple courses at RWTH Aachen University. We are eager to see the impact that applying concepts to real control systems has on the students' learning experience.

Currently, a fleet of 20 vehicles is being built up. This should enable students and researchers alike to perform various experiments on networked and autonomous driving in moderately large networked systems.

References

Appendix A Required Demonstrator Space

The 1:18 model-scale vehicles will be presented with a reduced lab environment. For that, we need

  1. space of about and

  2. power outlets.