Quadrotor Autonomous Landing on Moving Platform

08/10/2022
by Pengyu Wang, et al.
Shandong University
IEEE

This paper introduces an autonomous take-off and landing system for a quadrotor operating on a moving platform. The designed system addresses three challenging problems: fast pose estimation, restricted external localization, and effective obstacle avoidance. Specifically, first, we design a landing recognition and positioning system based on the AruCo marker to help the quadrotor quickly calculate the relative pose; second, we leverage a gradient-based local motion planner to rapidly generate collision-free reference trajectories for the quadrotor; third, we build an autonomous state machine that enables the quadrotor to complete its take-off, tracking, and landing tasks in full autonomy; finally, we conduct experiments in simulated and real-world indoor and outdoor environments to verify the system's effectiveness and demonstrate its potential.


I Introduction

In recent years, quadrotor Unmanned Aerial Vehicles (UAVs) capable of vertical take-off and landing (VTOL) have played a key role in power line inspection, express delivery, and farmland sowing due to their flexibility and ease of deployment [4][19][6]. Among these capabilities, autonomous take-off and landing without external positioning (e.g., GPS or motion capture) is essential for UAVs to perform more complex tasks and cooperate with ground mobile robots.

Sani et al. [14] proposed a visual and inertial navigation method combined with a Kalman filter and a proportional-integral-derivative (PID) controller to achieve relative pose estimation and accurate UAV landing on a ground QR code. To estimate the states of dynamic targets and track them efficiently, Lee et al. [5] adopted an image-based visual servoing method in two-dimensional space to generate reference velocity commands, which were fed to an adaptive sliding mode controller incorporating an adaptive rule for the ground effect during landing. However, these methods only achieved static or very low-speed autonomous landing, and were verified only in indoor experiments.

Accurate position estimation of the UAV and the landing platform is necessary for autonomous landing. Mellinger et al. [9] designed robust quadrotor planning and control algorithms for perching and landing, but they needed a VICON (https://www.vicon.com/) motion capture system to track the quadrotor as well as the landing surfaces. Daly et al. [3] proposed a coordinated landing control scheme in which a joint decentralized controller drove both vehicles to a rendezvous point for landing, and the controller performed stably in the presence of time delays. However, their approach relied on expensive Real-Time Kinematic (RTK) GPS on both the UAV and the Unmanned Ground Vehicle (UGV) for sub-centimeter positioning. Such reliance on external positioning systems is impractical in many scenarios.

To land the UAV on a fast-moving target, Borowczyk et al. [1] used Kalman filtering to fuse multiple sources of measurement information and compute an accurate pose of the UAV relative to the landing pad. They used a proportional (P) controller at long range and a proportional-derivative (PD) controller at close range, and the mobile platform could reach speeds of up to 30 km/h. However, the UAV performed no motion planning while tracking and landing, so it could not avoid obstacles along the way and risked collisions.

To address the challenges mentioned above, the main contributions of this paper are summarized as follows:

(1) We design a landing positioning system based on the AruCo squared fiducial marker [13], placed on the moving platform. The UAV can detect it and calculate its pose relative to the UGV quickly, accurately, and at low cost.

(2) We utilize a gradient-based local motion planning method [17] to generate collision-free, smooth, and feasible reference trajectories for the UAV during tracking and landing.

(3) We design an automatic state machine that enables the UAV to achieve take-off, tracking, and landing missions in full autonomy.

The rest of this paper is organized as follows. We first introduce the UAV-UGV system in Section II and then present the detection and positioning of the mobile landing platform in Section III. Furthermore, we demonstrate the local motion planning algorithm and landing state machine in Section IV. Finally, in Section V and Section VI, we conduct experiments, analyze the results and give future plans.

II Quadrotor System Design

Fig. 1: Coordinate systems of a quadrotor.

II-A Modeling

In this paper, it is assumed that the quadrotor is an axisymmetric rigid body with uniform mass distribution, that its mass and rotational inertia are constant, and that its center of gravity coincides with its geometric center, as shown in Fig. 1. $\mathbf{p} = [x, y, z]^T$ represents the position of the quadrotor; $\mathbf{v}$ and $\boldsymbol{\omega}$ denote velocity and angular velocity, respectively. The roll/pitch/yaw angles of the quadrotor are represented by $(\phi, \theta, \psi)$. There are two coordinate systems: the world inertial frame $\mathcal{F}_W$ and the body-fixed frame $\mathcal{F}_B$. The transformation from $\mathcal{F}_B$ to $\mathcal{F}_W$ is represented by the rotation matrix

$$R = \begin{bmatrix} c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\ -s_\theta & s_\phi c_\theta & c_\phi c_\theta \end{bmatrix} \tag{1}$$

where $c_{(\cdot)}$ and $s_{(\cdot)}$ abbreviate $\cos(\cdot)$ and $\sin(\cdot)$.

According to the Newton-Euler equations, the position and attitude dynamics of the quadrotor are expressed as follows [10]:

$$m\dot{\mathbf{v}} = -mg\mathbf{e}_3 + R\,\mathbf{f} - \mathbf{f}_d \tag{2}$$
$$J\dot{\boldsymbol{\omega}} = -\boldsymbol{\omega} \times J\boldsymbol{\omega} + \boldsymbol{\tau} \tag{3}$$

where $\mathbf{f}$ is the total motor thrust, $\mathbf{f}_d$ is air resistance, $J$ and $\boldsymbol{\tau}$ denote the inertia matrix and torque, respectively, $g$ is gravitational acceleration, and $\mathbf{e}_3 = [0, 0, 1]^T$.

The position and attitude kinematics of the quadrotor can be written as

$$\dot{\mathbf{p}} = \mathbf{v} \tag{4}$$
$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix} \boldsymbol{\omega} \tag{5}$$

II-B Hardware Setup

In this paper, we use a P450 quadrotor designed by Amovlab (https://wiki.amovlab.com/) and a Husky A200 UGV built by Clearpath (https://github.com/clearpathrobotics/clearpath_husky), as shown in Fig. 2. The core components of the quadrotor include the open-source autopilot hardware Pixhawk [8], the PX4 flight control software [7], an NVIDIA Jetson NX onboard computer, a monocular wide-angle camera, an Intel RealSense T265 tracking camera, and an Intel RealSense D435i depth camera.

Fig. 2: UAV-UGV system.

II-C State Estimation

PX4 provides an Extended Kalman Filter (EKF) [12] based algorithm to process sensor measurements and estimate the flight states, as shown in Fig. 3. To obtain more accurate pose information and operate in GPS-denied environments, we use the off-the-shelf Visual-Inertial Odometry (VIO) of the Intel RealSense T265 (https://github.com/IntelRealSense/realsense-ros) to obtain the pose of the UAV relative to its take-off point.

Fig. 3: UAV state estimation.
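
As a concrete illustration of this wiring, the following is a minimal sketch of relaying the T265 odometry into PX4's external-vision input through MAVROS. The `/camera/odom/sample` and `/mavros/vision_pose/pose` topic names are the realsense-ros and MAVROS defaults; the node name and frame id are assumptions for illustration, not the exact configuration used on the P450.

```python
#!/usr/bin/env python
# Sketch: relay T265 odometry to PX4's external-vision input via MAVROS.
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import PoseStamped

def odom_cb(msg):
    out = PoseStamped()
    out.header.stamp = msg.header.stamp
    out.header.frame_id = "map"      # PX4 expects a fixed ENU frame
    out.pose = msg.pose.pose         # forward position + orientation only
    vision_pub.publish(out)

rospy.init_node("t265_to_px4")       # node name is an assumption
vision_pub = rospy.Publisher("/mavros/vision_pose/pose",
                             PoseStamped, queue_size=10)
rospy.Subscriber("/camera/odom/sample", Odometry, odom_cb)
rospy.spin()
```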

II-D Flight Controller

PX4 recommends a standard cascaded control architecture for multirotors, in which the controllers are a mix of P and PID controllers, as shown in Fig. 4. The architecture consists of position control and attitude control. In each, the outer loop regulates position (attitude angle) and the inner loop regulates velocity/acceleration (angular rate). The inner loop responds faster than the outer loop and acts directly on the motors; hence the inner-loop parameters are vital when tuning the controller. In PX4's offboard mode, position, velocity, angle, and angular velocity can each be commanded separately.

Fig. 4: PX4 cascaded PID controller.
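
To make the cascade concrete, here is an illustrative single-axis sketch: an outer P loop on position produces a velocity setpoint that an inner PID loop tracks. It mirrors only the structure in Fig. 4; PX4's actual controller adds feed-forward terms, saturation handling, and the attitude/rate cascade, and the gains below are placeholders.

```python
# Illustrative single-axis cascaded loop in the spirit of PX4's
# position controller (gains are placeholders, not PX4 defaults).
class PID:
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.i += err * dt
        d = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.i + self.kd * d

pos_loop = PID(kp=0.95)                  # outer loop: P only
vel_loop = PID(kp=1.8, ki=0.4, kd=0.2)   # inner loop: full PID

def control_step(pos_sp, pos, vel, dt):
    vel_sp = pos_loop.step(pos_sp - pos, dt)   # position error -> velocity setpoint
    acc_cmd = vel_loop.step(vel_sp - vel, dt)  # velocity error -> acceleration command
    return acc_cmd
```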

III Detection and Localization of Landing Platform

In this section, we introduce the detection of the landing platform and the calculation of the relative pose relationship between the UAV and the UGV, which is crucial to achieving a reliable landing.

III-A Pinhole Camera Calibration

To map objects from three-dimensional space onto the two-dimensional image plane, we perform pinhole camera calibration [16] to establish the geometric model of the specific camera. The model parameters are called camera parameters and include intrinsic and extrinsic parameters:

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K} \begin{bmatrix} R \mid \mathbf{t} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{6}$$

where $(u, v)$ are pixel coordinates, $(X, Y, Z)$ are world coordinates, $s$ is a scale factor, $K$ holds the intrinsic parameters, and $[R \mid \mathbf{t}]$ holds the extrinsic parameters.
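
For reference, a minimal OpenCV calibration sketch in the spirit of Zhang's method [16] follows; the 9x6 chessboard pattern, 25 mm square size, and image path are assumptions for illustration.

```python
# Sketch: estimate K and distortion from chessboard images with OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)                       # inner-corner grid (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):  # image path is an assumption
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix of Eq. (6); dist holds lens distortion terms
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```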

III-B Landing Pad Detection

Fig. 5: AruCo marker-based landing pad.

In this paper, we use the AruCo squared fiducial marker library [13] for target detection. We implement marker detection in OpenCV [2] in three steps: adaptive thresholding to extract candidate borders, polygonal approximation with removal of rectangles that are too close to one another, and marker identification (a sketch follows Tab. I). The AruCo landing pad design is inspired by Qi et al. [11] and consists of ten markers in four different sizes, as shown in Fig. 5. These markers ensure that the moving platform remains visible from near, far, and oblique viewpoints. The actual scale of each AruCo marker and its precise spatial relationship to the surrounding markers are significant. For a more accurate landing, we define and test each marker's maximum detectable and active distance ranges, as shown in Tab. I. The maximum range and offset are the maximum x, y, z distances from the UAV to the center of a specific marker at which the UAV can still detect that marker. The active range is a subset of the maximum range: the distance band in which a marker at a specific location plays the major role in calculating the relative pose.

Marker No. (size) | Max detectable z-distance range | Max (x, y) offset
43 (2.5 cm) | (Land, 0.15) m | (0.15, 0.15) m
5-8 (6.4 cm) | (Land, 0.50) m | (0.39, 0.39) m
1-4 (9.5 cm) | (Land, 1.15) m | (0.90, 0.90) m
68 (25.7 cm) | (Land, 3.00) m | (1.42, 1.42) m
Marker No. (size) | Active z-distance range | Active (x, y) offset
43 (2.5 cm) | (Land, 0.15) m | (0.15, 0.15) m
5-8 (6.4 cm) | (0.20, 0.30) m | (0.00, 0.20-0.39) m
1-4 (9.5 cm) | (0.40, 1.00) m | (0.70, 0.70) m
68 (25.7 cm) | (1.00, 3.00) m | -
TABLE I: Marker detection range
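
A minimal sketch of the detection step with OpenCV's aruco module is given below (pre-4.7 API; newer OpenCV versions expose cv2.aruco.ArucoDetector instead). The dictionary choice and image path are assumptions; the marker IDs follow Tab. I.

```python
# Sketch: detect the landing-pad markers in one grayscale frame.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_100)  # assumed
params = cv2.aruco.DetectorParameters_create()

gray = cv2.imread("pad.png", cv2.IMREAD_GRAYSCALE)   # image path is an assumption
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)

if ids is not None:
    # keep only markers whose IDs belong to the landing-pad layout (Tab. I)
    pad_ids = {43, 68, 1, 2, 3, 4, 5, 6, 7, 8}
    detections = [(int(i[0]), c) for i, c in zip(ids, corners)
                  if int(i[0]) in pad_ids]
```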

III-C 3D Pose Estimation

Given n reference 3D points and their corresponding 2D projections, we can construct a Perspective-n-Point (PnP) problem to estimate the camera's pose relative to the moving platform. When solving the PnP problem, similar to [11], we use the nonlinear optimization method bundle adjustment to minimize the re-projection error and thereby estimate the rotation and translation between the camera coordinates and the marker coordinates. Through coordinate transformations, coordinates in the landing pad frame $\mathcal{F}_L$ can be transformed into the camera frame $\mathcal{F}_C$, the body-fixed frame $\mathcal{F}_B$, and the world inertial frame $\mathcal{F}_W$ (ENU) successively. Finally, the pose of the UAV relative to the landing platform is expressed as

$$\mathbf{p}_W = R_{WB}\left( R_{BC}\left( R_{CL}\,\mathbf{p}_L + \mathbf{t}_{CL} \right) + \mathbf{t}_{BC} \right) + \mathbf{t}_{WB} \tag{7}$$

where $\mathbf{p}$, $R$, and $\mathbf{t}$ denote position, rotation, and translation in the indicated coordinate frames, respectively, and $\mathbf{t}_{BC}$ is the known camera mount offset.
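
A sketch of this PnP step with OpenCV follows. It recovers the marker-to-camera transform with cv2.solvePnP and then chains the transforms of Eq. (7); the corner pixels, intrinsics, and mount/attitude transforms below are placeholder values, and OpenCV's iterative solver stands in for the bundle-adjustment refinement described above.

```python
# Sketch: marker pose via PnP, then the transform chain of Eq. (7).
import cv2
import numpy as np

marker_len = 0.095   # side length of a 9.5 cm marker (Tab. I)
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   np.float32) * marker_len / 2.0

# placeholders: corners from detectMarkers, K/dist from calibration,
# identity mount offset and attitude
img_pts = np.array([[300, 200], [340, 200], [340, 240], [300, 240]], np.float32)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], np.float32)
dist = np.zeros(5)
R_bc, t_bc = np.eye(3), np.zeros(3)   # camera -> body (known mount offset)
R_wb, t_wb = np.eye(3), np.zeros(3)   # body -> world (from state estimator)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R_cl, _ = cv2.Rodrigues(rvec)         # landing-pad -> camera rotation
p_body = R_bc @ tvec.reshape(3) + t_bc
p_world = R_wb @ p_body + t_wb        # pad position in the world frame
```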

IV Planning Algorithm and Landing State Machine

Our paper uses a gradient-based local motion planning method [17][18] that does not need to construct a Euclidean Signed Distance Field (ESDF) in advance, which greatly reduces planning time. Constructing the ESDF not only takes plenty of time, but a trajectory computed from the ESDF may also fall into a local minimum. The method comprises three steps: trajectory initialization, gradient-based trajectory optimization, and time re-assignment with trajectory refinement.

IV-A Front-end B-Spline Initialization

In the initialization phase, a uniform B-spline curve $\Phi$ that satisfies the terminal constraints but does not consider obstacles is generated. A collision check is then performed, and for each trajectory segment in collision, the A* algorithm is used to generate a collision-free path $\Gamma$. Then, for each control point $\mathbf{Q}_i$ of the colliding segment, an anchor point $\mathbf{p}_{ij}$ is generated on the obstacle surface, and the distance from $\mathbf{Q}_i$ to the $j$-th obstacle is defined as $d_{ij} = (\mathbf{Q}_i - \mathbf{p}_{ij}) \cdot \mathbf{v}_{ij}$, where $\mathbf{v}_{ij}$ is the unit vector from $\mathbf{Q}_i$ to $\mathbf{p}_{ij}$.

IV-B Back-end Trajectory Optimization

The B-spline curve from the front end is uniquely determined by its degree $p_b$, its control points $\{\mathbf{Q}_i\}$, and its knot vector. The optimization problem is modeled as

$$\min_{\mathbf{Q}} J = \lambda_s J_s + \lambda_c J_c + \lambda_f J_f$$

where $J_s$, $J_c$, and $J_f$ represent the smoothness, collision, and dynamic feasibility terms, respectively, and each $\lambda$ is the corresponding coefficient. According to the convex hull property, the smoothness cost and feasibility cost are set as

$$J_s = \sum_{i} \left\| \mathbf{A}_i \right\|_2^2 + \sum_{i} \left\| \mathbf{J}_i \right\|_2^2 \tag{8}$$
$$J_f = \sum_{i} \left( w_v F(\mathbf{V}_i) + w_a F(\mathbf{A}_i) + w_j F(\mathbf{J}_i) \right) \tag{9}$$

where $\mathbf{V}_i$, $\mathbf{A}_i$, and $\mathbf{J}_i$ are the velocity, acceleration, and jerk of the control points, $w_v$, $w_a$, $w_j$ are weights, and $F(\cdot)$ is a twice continuously differentiable penalty function of the control points. The collision cost pushes control points away from obstacles until $d_{ij} \geq s_f$ (the safe distance) and is defined as

$$J_c = \sum_{i} \sum_{j} F_c\left(s_f - d_{ij}\right) \tag{10}$$

where $F_c$ is also a twice continuously differentiable function.
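
The sketch below illustrates these costs for a uniform B-spline, assuming a one-sided cubic penalty as the twice continuously differentiable function; the actual penalty shape used by the planner in [17] may differ.

```python
# Sketch: smoothness and collision costs over B-spline control points.
import numpy as np

def smoothness_cost(Q, dt):
    # Eq. (8): squared acceleration and jerk via finite differences of
    # uniform B-spline control points Q (shape: N x 3)
    A = (Q[2:] - 2 * Q[1:-1] + Q[:-2]) / dt**2
    J = (A[1:] - A[:-1]) / dt
    return np.sum(A**2) + np.sum(J**2)

def c2_penalty(x):
    # one-sided cubic: zero for x <= 0, x^3 otherwise; its first and
    # second derivatives both vanish at x = 0, so it is C^2 (assumed form)
    return np.where(x > 0.0, x**3, 0.0)

def collision_cost(d, s_f):
    # Eq. (10): penalize every obstacle distance d_ij that is below the
    # safe clearance s_f, pushing control points until d_ij >= s_f
    return np.sum(c2_penalty(s_f - d))
```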

IV-C Time Re-assignment and Trajectory Refinement

An additional time re-assignment step is necessary to avoid aggressive motion and ensure that the trajectory satisfies the kinodynamic constraints. Then, a curve-fitting method is used to make the refined trajectory $\Phi_r$ maintain an almost identical shape to the original trajectory $\Phi$. After obtaining the initial value of $\Phi_r$, the refined optimization problem is defined as

$$\min J = \lambda_s J_s + \lambda_f J_f + \lambda_{fit} J_{fit}$$

where the third term is the curve-fitting term, defined as

$$J_{fit} = \sum_{i} \left( w_a\, d_{a,i}^2 + w_r\, d_{r,i}^2 \right) \tag{11}$$

where $d_a$ and $d_r$ represent the axial and radial displacement between the two curves, respectively; $w_a$ is set to 20 and $w_r$ is set to 1.

IV-D Landing State Machine

The UAV's autonomous take-off and landing task is driven by an autonomous state machine, as shown in Fig. 6. The main steps are: the UAV takes off and flies to a preset point; it hovers and waits for the landing target to appear; upon seeing the landing point, it calculates the relative pose; its planner plans a reference trajectory in real time; its cascaded PID controller tracks the generated trajectory; once the landing conditions are met, it descends and completes the landing. It is worth noting that because the target point is moving, the UAV usually does not execute a planned trajectory to completion, but instead switches to a newly generated reference trajectory.

Fig. 6: Landing state machine.
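
A compact sketch of this logic follows; the state names and the 0.6 m descend threshold mirror the steps above and Section V-A, while the transition guards are simplifications of the full state machine in Fig. 6.

```python
# Sketch: simplified transition logic of the landing state machine.
from enum import Enum, auto

class State(Enum):
    TAKEOFF = auto()
    HOVER_SEARCH = auto()
    TRACK = auto()
    DESCEND = auto()
    LANDED = auto()

def step(state, pad_visible, rel_height, on_ground):
    if state == State.TAKEOFF:
        return State.HOVER_SEARCH           # reached preset point, wait for target
    if state == State.HOVER_SEARCH and pad_visible:
        return State.TRACK                  # plan and track reference trajectory
    if state == State.TRACK and pad_visible and rel_height < 0.6:
        return State.DESCEND                # landing condition met (Sec. V-A)
    if state == State.DESCEND and on_ground:
        return State.LANDED
    return state                            # otherwise keep current state
```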

V Experiments and Results

In this section, we test our method in both simulation and real-world experimental environments.

V-A Simulation Experiments

We conduct simulation experiments on the ROS/Gazebo platform Prometheus (Qi, Y., Jin, R., Jiang, T., and Li, C. (2020). Prometheus, Amovlab. https://github.com/amov-lab), and we also use some of the ROS packages provided by Prometheus in the real-world experiments.

The UAV takes off to the preset point and waits to detect the landing platform. After detecting the UGV, the UAV continuously plans the trajectory and tracks it. The UAV then descends, and when its height above the landing point is less than 0.6 m, it starts to land and completes the landing. The trajectory of the UAV is shown in Fig. 7 and the main flight data are shown in Tab. II.

Fig. 7: Simulation trajectory. Target speed: 2.8 km/h.
Experiment | Distance | Target Speed | UAV Max Speed | UAV Average Speed | Planning Time
Simulation | 20 m | 2.8 km/h | 4.1 km/h | 0.8 km/h | 4.75 ms
Indoor 1 | 5.2 m | 0.0 km/h | 4.8 km/h | 0.4 km/h | 0.83 ms
Indoor 2 | 3.7 m | 0.8 km/h | 2.1 km/h | 0.3 km/h | 1.60 ms
Outdoor 1 | 12.9 m | 2.16 km/h | 6.6 km/h | 2.7 km/h | 3.16 ms
Outdoor 2 | 14.3 m | 3.24 km/h | 8.2 km/h | 3.3 km/h | 3.49 ms
TABLE II: Key Flight Data

V-B Real-world Experiments

V-B1 Indoor Environment

We first conduct experiments on static and dynamic targets in indoor environments. The UAV can accurately land on a target within its field of view. In a moving-target experiment, the UGV moves linearly in the positive y-direction at a speed of 0.8 km/h. After the UAV takes off to the target point, it starts to follow, and when the landing conditions are satisfied (tracking distance over 2.5 m, or an external landing command), it descends to 0.2 m and then quickly lands. The trajectories of the UAV and UGV are shown in Fig. 8 and the experimental data are shown in Tab. II.

Fig. 8: Indoor experiment trajectory. Target speed: 0.8 km/h.

V-B2 Outdoor Environment with Wind

We then conduct dynamic-target landing experiments in a windy outdoor environment, as shown in Fig. 9.

Fig. 9: Outdoor environment.

Similar to the indoor experiment, the UGV moves at linear speeds of 2.16 km/h and 3.24 km/h, respectively; the UAV first takes off to a preset point, then follows, and when an external landing command is received, it quickly lands. The trajectory of the UAV-UGV system is shown in Fig. 10 and the critical data are also shown in Tab. II.

Fig. 10: Outdoor experiment trajectory. Target speed: 3.24 km/h.

V-B3 Trajectory Analysis

As seen from the trajectory plots, the trajectory in simulation is relatively smooth. Although the motion planner gives a smooth reference trajectory, the actual flight trajectory of the UAV oscillates slightly in the real world. This is because the robot's actuators, including its motors, have uncertainties arising from control noise, wear, and mechanical failures [15].

V-C Obstacle Avoidance Demonstration

We use the local motion planner from [17] to provide the UAV with collision-free, smooth, and feasible trajectories. In the experiment, there is an obstacle in front of the UAV. The UAV bypasses the obstacle, then sees the platform and lands on it, as shown in Fig. 11.

Fig. 11: Obstacle avoidance and trajectory update demonstration.

Crucial planning data are shown in Tab. III, where Obstacle Density is the proportion of obstacles per square meter.

Flight Time | Flight Distance | Obstacle Density | Planning Time | Re-planning Times
58 s | 10.2 m | 0.11 | 2.40 ms | 5
TABLE III: Obstacle Avoidance

V-D Parameter Setting Analysis

We analyze and tune the principal parameters of the motion planner, as shown in Tab. IV. Resolution is the resolution of the grid map of the surrounding environment built from depth information, and Obstacles Inflation is the relative inflation size of the obstacles. The $\lambda$ values are the coefficients of the loss terms in the objective described in Section IV. For the sake of safety, the collision coefficient $\lambda_c$ in our paper is set relatively large.

Resolution | Obstacles Inflation | $\lambda_s$ | $\lambda_c$ | $\lambda_f$ | $\lambda_{fit}$
0.15 | 0.299 | 1.0 | 8.5 | 0.1 | 1.0
TABLE IV: Planner Parameters

V-E Comparison Results

We compare our method with previous work on autonomous landing to verify its effectiveness. The comparison results are shown in Tab. V. Compared with previous methods, our system achieves autonomous take-off and landing of a quadrotor on a UGV platform in different scenarios without external positioning, while simultaneously avoiding static obstacles.

Work | Target Speed | Obstacle Avoidance | External Localization | Environment
Sani et al. | 0.0 km/h | No | Not needed | Indoor
Lee et al. | 0.25 km/h | No | Not needed | Indoor
Mellinger et al. | 0.0 km/h | No | Needed | Indoor
Daly et al. | 3.6 km/h | No | Needed | Outdoor
Ours | 3.24 km/h | Yes | Not needed | Indoor/Outdoor
TABLE V: Comparison Results with Related Works

VI Conclusions and Future Work

In this paper, we presented an autonomous take-off and landing system for a quadrotor. To detect the target quickly and accurately, we designed an AruCo-based landing pad system. We utilized a gradient-based local motion planner to rapidly generate collision-free, smooth, and kinodynamically feasible reference trajectories. We then presented an automatic state machine to achieve full autonomy in the take-off and landing tasks. Experiments show that our system can accomplish autonomous landing on a mobile platform in real outdoor environments.

In the future, we will focus on landing on faster-moving and more uncertain targets, while also improving landing accuracy.

Acknowledgment

We thank Pengqin Wang and Delong Zhu for their constructive advice.

References

  • [1] A. Borowczyk, D. Nguyen, A. P. Nguyen, D. Q. Nguyen, D. Saussié, and J. Le Ny (2017) Autonomous landing of a quadcopter on a high-speed ground vehicle. Journal of Guidance, Control, and Dynamics 40 (9), pp. 2378–2385. Cited by: §I.
  • [2] G. Bradski and A. Kaehler (2008) Learning OpenCV: computer vision with the OpenCV library. O'Reilly Media, Inc. Cited by: §III-B.
  • [3] J. M. Daly, Y. Ma, and S. L. Waslander (2015) Coordinated landing of a quadrotor on a skid-steered ground vehicle in the presence of time delays. Autonomous Robots 38 (2), pp. 179–191. Cited by: §I.
  • [4] S. Gupte, P. I. T. Mohandas, and J. M. Conrad (2012) A survey of quadrotor unmanned aerial vehicles. In 2012 Proceedings of IEEE Southeastcon, pp. 1–6. Cited by: §I.
  • [5] D. Lee, T. Ryan, and H. J. Kim (2012) Autonomous landing of a VTOL UAV on a moving platform using image-based visual servoing. In 2012 IEEE International Conference on Robotics and Automation, pp. 971–976. Cited by: §I.
  • [6] T. Li, C. Wang, C. W. de Silva, et al. (2019) Coverage sampling planner for UAV-enabled environmental exploration and field mapping. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2509–2516. Cited by: §I.
  • [7] L. Meier, D. Honegger, and M. Pollefeys (2015) PX4: a node-based multithreaded open source robotics framework for deeply embedded platforms. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 6235–6240. Cited by: §II-B.
  • [8] L. Meier, P. Tanskanen, F. Fraundorfer, and M. Pollefeys (2011) Pixhawk: a system for autonomous flight using onboard computer vision. In 2011 IEEE International Conference on Robotics and Automation, pp. 2992–2997. Cited by: §II-B.
  • [9] D. Mellinger, M. Shomin, and V. Kumar (2010) Control of quadrotors for robust perching and landing. In Proceedings of the International Powered Lift Conference, pp. 205–225. Cited by: §I.
  • [10] P. Pounds, R. Mahony, and P. Corke (2006) Modelling and control of a quad-rotor robot. In Proceedings of the 2006 Australasian Conference on Robotics and Automation, pp. 1–10. Cited by: §II-A.
  • [11] Y. Qi, J. Jiang, J. Wu, J. Wang, C. Wang, and J. Shan (2019) Autonomous landing solution of low-cost quadrotor on a moving platform. Robotics and Autonomous Systems 119, pp. 64–76. Cited by: §III-B, §III-C.
  • [12] M. I. Ribeiro (2004) Kalman and extended Kalman filters: concept, derivation and properties. Institute for Systems and Robotics 43, pp. 46. Cited by: §II-C.
  • [13] F. J. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer (2018) Speeded up detection of squared fiducial markers. Image and Vision Computing 76, pp. 38–47. Cited by: §I, §III-B.
  • [14] M. F. Sani and G. Karimian (2017) Automatic navigation and landing of an indoor AR.Drone quadrotor using ArUco marker and inertial sensors. In 2017 International Conference on Computer and Drone Applications (IConDA), pp. 102–107. Cited by: §I.
  • [15] S. Thrun (2002) Probabilistic robotics. Communications of the ACM 45 (3), pp. 52–57. Cited by: §V-B3.
  • [16] Z. Zhang (2000) A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (11), pp. 1330–1334. Cited by: §III-A.
  • [17] X. Zhou, Z. Wang, H. Ye, C. Xu, and F. Gao (2020) EGO-Planner: an ESDF-free gradient-based local planner for quadrotors. IEEE Robotics and Automation Letters 6 (2), pp. 478–485. Cited by: §I, §IV, §V-C.
  • [18] X. Zhou, J. Zhu, H. Zhou, C. Xu, and F. Gao (2021) EGO-Swarm: a fully autonomous and decentralized quadrotor swarm system in cluttered environments. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 4101–4107. Cited by: §IV.
  • [19] D. Zhu, Y. Du, Y. Lin, H. Li, C. Wang, X. Xu, and M. Q. Meng (2017) Hawkeye: open source framework for field surveillance. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6083–6090. Cited by: §I.