Duckiefloat: a Collision-Tolerant Resource-Constrained Blimp for Long-Term Autonomy in Subterranean Environments

10/31/2019, by Yi-Wei Huang, et al.

There are several challenges for search and rescue robots: mobility, perception, autonomy, and communication. Inspired by the DARPA Subterranean (SubT) Challenge, we propose an autonomous blimp robot, which has the advantages of low power consumption and collision tolerance compared to other aerial vehicles such as drones. This matters for search and rescue tasks, which usually last for one or more hours. However, constrained underground passages limit the size of the blimp envelope and therefore its payload, making the proposed system resource-constrained. Careful design consideration is thus needed to build a blimp system with on-board artifact search and SLAM. To reach long-term operation, we design a failure-aware algorithm that requires only minimal communication with a human supervisor, who maintains situational awareness and sends control signals to the blimp when needed.


I Introduction

I-A Motivation

Fig. 1: We present an autonomous blimp, Duckiefloat, for use in search and rescue (SAR) missions. Compared to a quadcopter drone, our blimp achieves longer flight time and is collision-tolerant. The proposed system enables on-board artifact search, visual odometry, and communication with a base station for artifact reporting and situational awareness. Duckiefloat was used in the Tunnel Circuit of the DARPA Subterranean Challenge.

Blimps were widely used in the early 1900s after their invention, for both civilian and military transportation. After the appearance of airplanes, however, most blimps were replaced due to the safety hazards of hydrogen and the high cost of helium.

The recent success of quadcopter drones (hereafter referred to as drones) has made them the most popular unmanned aerial vehicles (UAVs). They are capable of fast, precise maneuvering and can carry up to several kilograms with a comparably small body frame. However, the main drawback of drones is that they are power hungry: flight time is limited (16-40 minutes for a commercially available drone [sa2017build]), they are prone to collisions, and they fail completely if one of the rotor blades becomes non-functional. Such constraints limit drones for search and rescue missions, which may take hours or even days.

Fig. 2: System Overview of Duckiefloat.

Blimps have some advantages over drones; the main one is that they have low power consumption and are collision-tolerant. Thanks to their lighter-than-air (LTA) characteristics, blimps can potentially operate for days or even weeks on a single charge. They are also safe to navigate in unstructured environments: some collisions are not a big deal because of the non-rigid body. This is important for SAR tasks, which require a system that is robust and able to recover from failure cases. Blimps are therefore well suited for search and rescue contexts such as the DARPA Subterranean (SubT) Challenge [SubT-website]. The main focus of the SubT Challenge is to encourage robotics research on autonomous systems in subterranean environments. Teams have to search for and report the locations of several types of artifacts, including survivors and items commonly used by explorers.

Deploying autonomous blimps as scouting vehicles is a good choice from the operating-time perspective, and they are more collision-tolerant than drones in underground environments. Our main goal is to build an autonomous blimp that serves as a scouting robot in a multi-robot system for the DARPA SubT Challenge, as shown in Fig. 1. Because of constrained passages in subterranean environments, the blimp has a size limit, which directly limits the payload weight. Duckiefloat is therefore resource-constrained, without full-sized computers or heavy sensors. Careful design consideration is needed and is discussed in Sec. III.

This work contributes the following:

  • We realized the idea of long-term autonomy by developing a blimp robot, "Duckiefloat", capable of performing SLAM and artifact search via a deep learning approach.

  • We implemented a module that provides human supervisors with situational awareness and a failure recovery mechanism through human intervention when needed.

  • We tested Duckiefloat in real tunnel environments and distilled the experience into future directions for search-purposed blimp research.

II Related Work

II-A Search and Rescue (SAR) and Underground Robots

Autonomous systems will play an essential role in search and rescue applications. Currently, most of the work is still done by humans because in many disaster-response cases the sites are unstructured and full of debris, which makes it challenging for robots to perform autonomous tasks. Different types of robots (air, ground, water) have been deployed for search and rescue missions [delmerico2019current]. Recent research on search and rescue robots has also been motivated by building robot systems that can actually provide help in natural disasters such as earthquakes. These systems have to tackle challenges including navigating through unknown and unstructured environments, poor communication, and lack of illumination. Some robots [wang2014development] are designed as ground vehicles with special tracks and suspension systems, which allows them to handle mobility problems and adapt to rough terrain. There have been attempts to include autonomy in robotic systems for several years. Gu et al. built a robot system for returning samples from a large outdoor environment and won NASA's Sample Return Robot Challenge in 2014, 2015, and 2016 [gu2018robot]. Niroui et al. [niroui2019deep] used a deep reinforcement learning method for robot search and rescue in unknown cluttered environments. However, most other works are tele-operated, and successful real-world demonstrations of autonomous robots are still rare.

Similarly, robotic systems operating in subterranean environments face several challenges. Pipe inspection robots [montero2015past] are designed to detect tunnel defects such as erosion and cracks in artificial structures. However, due to the difficulty of autonomy, most of these robots are tele-operated and still need operators/workers on-site, which exposes the operators to danger. TunConstruct [chmelina2007development] and ROBINSPECT [loupos2014robotic] are two of the few pipe inspection robotic systems that are close to full autonomy; ROBINSPECT uses laser scanners, ultrasonic sensors, cameras, and a robotic arm to perform navigation and inspection tasks. There is also some research on robot systems in underground mines: [roberts2000autonomous] proposed a reactive autonomous navigation algorithm to perform lane following in a mine tunnel without a map, and [shaffer1992robotic] designed an autonomous mining robot using a 2D laser scanner and triangulation to estimate its pose.

Another big challenge for search and rescue robot systems in underground environments is localization. GPS-denied and visually degraded environments render several common GPS/vision-based localization algorithms unusable. The state-of-the-art SLAM (Simultaneous Localization and Mapping) algorithm ORB-SLAM [mur2017orb] has been shown to perform poorly in underground mine environments [jacobson2018semi]. Zeng et al. [zeng2019lookup] proposed a vision-based localization algorithm using optical flow and pixel-correspondence matching to reject outliers. Huber et al. [huber2003automatic] used a 3D LiDAR mounted on a cart to perform 3D underground mine mapping, using graph optimization with a global consistency measure to detect and avoid incorrect but locally consistent matches. Xiong et al. [xiong2009integrated] designed an integrated localization system for robots in underground environments using only IMUs. However, the problems of autonomy, perception, communication, and mobility remain challenging, which motivates the DARPA SubT Challenge [SubT-website] in which we are participating.

II-B Autonomous Blimp Systems

Our proposed system for the DARPA SubT Challenge is an autonomous blimp, which has low power consumption due to its non-rigid body and its energy-free floating (lighter-than-air, LTA) characteristic, making it well suited for search and rescue tasks. Several works address blimp control using model-based methods [gonzalez2009developing], [fedorenko2016indoor] and model-free reinforcement learning [rottmann2007towards], [ko2007gaussian]. Vision-based SLAM has also been implemented on blimps [hygounenc2004autonomous], and applications such as surveillance systems have been built using visual tracking technologies [fukao2003autonomous].

Fig. 3: Hardware Schematic Diagram of Duckiefloat.

Given the constrained passages in subterranean environments, the proposed blimp system is resource-constrained. A survey [berger2014comparison] examines the hardware and software design perspectives of resource-constrained robots. One common trait is that these robots separate low-level and high-level functions: central processing units are responsible for high-level tasks such as communication, mapping, and planning, while micro-controllers are connected to sensors and actuators to perform low-level tasks. For hardware design, power consumption, communication, and robustness of maneuverability are also critical since they are the basic functions of the robot. Our design and functionalities (lane/tunnel following and altitude control) are inspired by the low-cost, resource-constrained multi-robot platforms Duckietown [paull2017duckietown] and Duckiefly [brand2018pidrone].

III System Descriptions

Our goal is to develop an autonomous blimp system, Duckiefloat, for search and rescue tasks in the DARPA Subterranean Challenge. We summarize the requirements from the qualification and the actual competition as follows:

III-A Requirements

III-A1 Autonomous navigation in uncertain environments

The robot can perform takeoff and landing and is able to traverse the course, including at least two ninety-degree turns. The robot should also pass through a constrained passage smaller than certain dimensions.

III-A2 Perception in low-light or dusty environments

The robot should autonomously identify artifacts with its own on-board lighting while navigating in no-light environments. It also needs to carry out accurate localization (less than 5 meters of deviation from ground truth).

                              Duckiefloat      Quadcopter [sa2017build]
Dimension (cm)    Body                         65 (diagonal)
                  Propeller   8 (diameter)     33 (diameter)
Payload (g)       Floating    1,600            -
                  Max Takeoff 1,800            3,600
Flight Time (min) No LED      60-90            16-40
                  With LED    40-50            -
Cost (USD)                    1,620            3,170
TABLE I: Duckiefloat vs. Quadcopter

III-A3 Long-term operations with limited computation/power resources

The duration of each competition run is 1 hour for the Tunnel Circuit, and will be 1-2 hours in the Urban and Cave Circuits. The power supply should provide sufficient flight time with the required payload, mobility, and on-board illumination.

III-A4 Communications and supervisor intervention

Maintaining communication is ideal in order to submit artifact reports. While communication bandwidth is available, mapping capability is useful for the human supervisor to gain situational awareness.

Subsystem Item Weight
Sensors Intel D435, Infrared Sensor 117g
Computation Raspberry Pi 3 B, Jetson Nano 248g
Actuation 4 DC motors, Shell 172g
Communication LoRa module, XBee module 92g
Motor Controller Adafruit DC motor hat 50g
Power 7.4V 5500 mAh Li-PO Battery 227g
Body Envelope, Platform, Tails 150g
Illumination 6W LED light 50g
Waterproof Waterproof tape 120g
Other wires, adhesives, converters, tags 217g
Total 1,423g

Weights are then added to make Duckiefloat slightly heavier than air.

TABLE II: Duckiefloat Component Weights

Table I compares Duckiefloat with the commercially available quadcopter drone described in [sa2017build], including payload, maximum takeoff weight, flight time, and cost. The flight time of Duckiefloat is designed to be longer than 60 minutes. Nevertheless, we found that the on-board illumination and communication modules consume significant battery life.

III-B Assumptions

The vehicle's motion control is divided into altitude control and planar movement, which are assumed to be independent motions. The motor configuration is a basic differential drive, allowing linear and angular accelerations. We also assume that the environmental airflow is limited. The ideal tunnel dimensions are more than 2 meters in height and approximately 3 meters in width.

III-C System Overview

We design Duckiefloat with dimensions that fit the assumed tunnels. Filled with helium gas, it provides around 1,600 grams of lift, which is enough to carry all electronic devices and batteries (Table II).

The system overview is shown in Fig. 2. We use a Raspberry Pi 3B with an Adafruit DC Motor HAT as the motor controller. An NVIDIA Jetson Nano is the main computing unit, responsible for perception and other high-level tasks. Furthermore, Duckiefloat is equipped with an infrared sensor that measures the altitude of the blimp.

We designed a cross-shaped Styrofoam platform attached under the blimp to carry sensors, motors, and controllers, as shown in Fig. 3. The baseline functionalities are tunnel following and artifact search in dark environments. We therefore need a camera that provides depth information, works in low-light conditions, and combines light weight with low computational requirements. We use the Intel RealSense D435 depth camera as our main sensor. The D435 also provides data for visual odometry, which we use to build a map and localize the robot in the tunnel.

III-D Mobility

III-D1 Challenges

A blimp has different dynamics from other aerial vehicles and has more in common with submarine robots due to buoyancy, mass effects, and aerodynamics.

III-D2 Algorithms: Altitude Control

We use a PID controller to perform altitude control, with the goal altitude set to 0.6 meters, so that Duckiefloat flies at a constant altitude.
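The sketch below shows a minimal altitude loop of this kind, assuming the infrared sensor reading is available in meters and the output is a normalized vertical thrust command; the gains and clamping are illustrative, not the values tuned on Duckiefloat.

    # Minimal PID altitude-hold sketch (illustrative gains, not Duckiefloat's tuned values).
    class AltitudePID:
        def __init__(self, kp=0.8, ki=0.05, kd=0.3, target=0.6):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.target = target            # desired altitude [m]
            self.integral = 0.0
            self.prev_error = None

        def update(self, altitude, dt):
            """altitude: infrared range reading [m]; returns vertical thrust in [-1, 1]."""
            error = self.target - altitude
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            u = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(-1.0, min(1.0, u))   # clamp to the motor throttle range

    # usage: u = pid.update(ir_altitude, dt) at the sensor rate, then sent to the vertical motors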

III-D3 Algorithms: Tunnel Following

In the Tunnel Circuit, we implement a baseline tunnel-following policy for the blimp to perform exploration. By analyzing the point cloud gathered from the RGB-D camera, we find the points of interest and project them onto the plane parallel to the ground. We then search for line segments in the resulting image and classify them as right wall, left wall, or front wall. The slopes and intercepts of the lines of the different wall classes are then interpreted as a state of the blimp. The robot state at time t is represented as s_t = (d_t, θ_t), where d_t is the lateral distance between the blimp and the center of the tunnel at time t and θ_t is the angle relative to the tunnel axis. A PID controller is then used to drive the robot state toward the targets d_t = 0 and θ_t = 0, which keeps the blimp at the center of the tunnel with its yaw angle parallel to the tunnel.
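As an illustration, the sketch below estimates (d_t, θ_t) from fitted left/right wall lines (slope and intercept in the ground-projected robot frame, x forward and y to the left) and applies a PD-style yaw correction. The sign conventions, gains, and the upstream line-fitting step are assumptions, not the exact implementation on Duckiefloat.

    import math

    def tunnel_state(left_line, right_line):
        """Each line is (slope, intercept) of a wall in the ground-projected robot frame.
        Returns lateral offset d [m] from the tunnel center and heading error theta [rad]."""
        m_l, b_l = left_line
        m_r, b_r = right_line
        d = -(b_l + b_r) / 2.0                 # robot sits at y = 0; tunnel center at (b_l + b_r) / 2
        theta = math.atan((m_l + m_r) / 2.0)   # average wall slope approximates the tunnel axis
        return d, theta

    def yaw_command(d, theta, k_d=1.0, k_theta=2.0):
        """PD-style correction driving d -> 0 and theta -> 0 (illustrative gains)."""
        return max(-1.0, min(1.0, -(k_d * d + k_theta * theta)))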

III-D4 Implementations

We designed the propulsion system with two DC motors that control horizontal movement and another two DC motors that control vertical movement. The two sets of DC motors are mounted perpendicular to each other.
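One possible way to map the planar and vertical commands onto those two motor pairs is sketched below using the Adafruit MotorKit driver; the channel assignment (motors 1-2 horizontal, motors 3-4 vertical) is an assumption for illustration.

    from adafruit_motorkit import MotorKit  # driver for the Adafruit DC Motor HAT

    kit = MotorKit()

    def clamp(x):
        return max(-1.0, min(1.0, x))

    def drive(forward, yaw_rate, vertical):
        """All inputs are normalized commands in [-1, 1]."""
        # differential drive on the horizontal pair
        kit.motor1.throttle = clamp(forward - yaw_rate)  # left horizontal thruster
        kit.motor2.throttle = clamp(forward + yaw_rate)  # right horizontal thruster
        # both vertical thrusters share the altitude command
        kit.motor3.throttle = clamp(vertical)
        kit.motor4.throttle = clamp(vertical)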

III-E Perception

III-E1 Challenges

Lighting in these subterranean environments is very limited; the only light is emitted from the vehicle itself. It is therefore crucial for the algorithm to be robust to different lighting conditions. The deep learning algorithm is also constrained by the computation and memory limits of embedded boards.

III-E2 Algorithms

For our specific task in the DARPA SubT Challenge, we use SSD [liu2016ssd] to perform artifact search. However, computation and memory constraints limit our ability to run full-sized deep learning models. Therefore, MobileNet-SSD [howard2017mobilenets], with a model size of 1GB, is a good fit for our case.

III-E3 Implementations

The main computation unit on Duckiefloat is the NVIDIA Jetson Nano, an embedded board with an on-board GPU, 4GB of shared memory, and comparably low power consumption (10 watts) relative to other GPU boards. The artifact search task of the SubT Challenge defines 5 classes of objects: survivors, backpacks, cellphones, drills, and fire extinguishers. For training data, we self-gathered 1,300 images per class, covering 5 different view angles, 3 different view distances, and different illumination conditions.
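As a rough illustration of the on-board inference step, the sketch below runs a MobileNet-SSD style detector through OpenCV's DNN module; the model file names, input resolution, and preprocessing constants are placeholders rather than the exact network deployed on Duckiefloat.

    import cv2
    import numpy as np

    CLASSES = ["background", "survivor", "backpack", "cellphone", "drill", "fire-extinguisher"]

    # Placeholder model files; the trained weights themselves are not published here.
    net = cv2.dnn.readNetFromCaffe("mobilenet_ssd_deploy.prototxt", "mobilenet_ssd.caffemodel")

    def detect_artifacts(frame, conf_threshold=0.5):
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()                    # shape (1, 1, N, 7)
        results = []
        for i in range(detections.shape[2]):
            confidence = float(detections[0, 0, i, 2])
            if confidence < conf_threshold:
                continue
            class_id = int(detections[0, 0, i, 1])
            box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
            results.append((CLASSES[class_id], confidence, box))
        return results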

III-F Communication

III-F1 Challenges

Communication is a key challenge in the competition. Multi-path effects and the dimensions and depths of the tunnels affect communication performance.

III-F2 Implementations

In order to achieve a longer communication range in underground tunnel environments, we integrated a LoRa [vangelista2015long] communication module on Duckiefloat. The module provides long-range, low-power peer-to-peer communication between the base station and Duckiefloat. It allows the bandwidth to be adjusted, which affects data rate and communication range. We chose a bandwidth of around 125 kHz; the resulting low data rate is an acceptable trade-off for better communication range and is sufficient to update the robot state and a sparse 2D range image at the base station at 1 Hz.
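To see why a 1 Hz update fits within such a low data rate, note that the whole status message can be packed into a few dozen bytes, as in the sketch below; the serial device path, baud rate, and packet layout are assumptions for illustration.

    import struct
    import serial  # assumes the LoRa radio is exposed as a serial device

    lora = serial.Serial("/dev/ttyUSB0", 57600, timeout=1)  # port and baud are placeholders

    def send_status(x, y, yaw, altitude, bin_ranges):
        """Pack the pose plus up to 8 bin ranges [m] into one 32-byte LoRa payload."""
        bin_ranges = (list(bin_ranges) + [0.0] * 8)[:8]       # pad/truncate to 8 bins
        payload = struct.pack("<4f8H", x, y, yaw, altitude,
                              *[min(65535, int(r * 100)) for r in bin_ranges])  # cm resolution
        lora.write(payload)                                   # 32 bytes, once per second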

III-G Autonomy

III-G1 Challenges

Mapping and localization are harder to achieve in these underground environments due to the roughness of the terrain and the occasional lack of distinct features (textures), which can cause failures of off-the-shelf SLAM algorithms.

III-G2 Algorithms

We run the ORB-SLAM [mur2017orb] algorithm for on-board visual odometry. During our on-site field tests, we found it vulnerable to fast movements (motion blur under low light) and illumination changes. Therefore, we implement a fail-safe mechanism that detects failure cases (i.e., the visual odometry is lost) and recovers to the previous state.
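A simplified version of such a watchdog is sketched below: it declares the odometry lost when no pose update arrives within a timeout, stops the blimp, and keeps the last good pose until tracking resumes. The timeout value and the stop/resume callbacks are assumptions for illustration.

    import time

    class OdometryWatchdog:
        """Fail-safe monitor for visual odometry."""

        def __init__(self, timeout=1.0):
            self.timeout = timeout
            self.last_pose = None
            self.last_stamp = None

        def on_pose(self, pose):
            """Call whenever the visual odometry publishes a new pose estimate."""
            self.last_pose = pose
            self.last_stamp = time.time()

        def check(self, stop_motion, resume_motion):
            """Call periodically; stops the blimp while tracking is lost."""
            lost = self.last_stamp is None or (time.time() - self.last_stamp) > self.timeout
            if lost:
                stop_motion()      # hover instead of acting on stale odometry
            else:
                resume_motion()
            return self.last_pose, lost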

III-G3 Implementations

In addition to the fail-safe mechanism, we note that the system is not required to be fully autonomous in the challenge as long as communication is maintained. We implement a minimal data representation of the robot's surroundings: we take a slice of the point cloud gathered by the RGB-D camera and divide the field of view into 8 angular bins. We then send only the closest point in each of the 8 bins, resulting in 8 points (or fewer if some bins contain no points). These points are sent to the base station at 1 Hz, providing the human supervisor with a form of situational awareness. The base station can then send motion commands to Duckiefloat if recovery is needed.
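The reduction to 8 points could look like the sketch below, assuming the point-cloud slice is given as (x, y) coordinates with x forward and y to the left; the field-of-view value is a placeholder.

    import numpy as np

    def closest_points_per_bin(points, fov_deg=90.0, n_bins=8):
        """points: (N, 2) array from the horizontal point-cloud slice.
        Returns up to n_bins points, the closest one in each angular bin."""
        angles = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
        ranges = np.linalg.norm(points, axis=1)
        edges = np.linspace(-fov_deg / 2, fov_deg / 2, n_bins + 1)
        closest = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (angles >= lo) & (angles < hi)
            if mask.any():
                idx = np.flatnonzero(mask)[np.argmin(ranges[mask])]
                closest.append(points[idx])
        return np.array(closest)   # 8 points or fewer, sent to the base station at 1 Hz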

Fig. 4: Left: Duckiefloat camera view. Right: Points representing situational awareness

IV Experiments

Several experiments are designed to test different aspects of Duckiefloat.

  • Autonomous vs. Human Controlled

  • Situational Awareness and Failure Recovery

  • Artifact Search

IV-A Autonomous vs. Human Controlled

Most current robots designed for search and rescue operations have little or no autonomy; instead, they rely on operators to control them remotely. However, having full control of the robot requires a very stable and reliable communication link.

We set up a testing field in a room with an S-shaped track containing one right and one left U-turn. The path is around 3.3 meters wide. The room setup is shown in Fig. 5 (left).

Fig. 5: Left: The S-shaped testing track setup with trajectories of autonomous tunnel following and RC control. Middle: The red circle shows a human recovery. Right: Turbulence causes collisions.

In this experiment, we want to show the differences between autonomous navigation and human tele-operation. The autonomous runs are based on the algorithms described in Sec. III-D. For human tele-operation, a stable 5 GHz WiFi connection is established between Duckiefloat and the base station. The human supervisor stays at the base station and sends control signals to Duckiefloat according to the RGB image feed. Five runs are conducted for each of the remote-controlled (RC) and autonomous modes. An ultra-wideband (UWB) localization system is installed in the room and on Duckiefloat to provide a reference trajectory; it has a mean absolute error of around 9.4 cm and an average standard deviation of 2 cm [Conceio2018RobotLI]. Five points on the center line of the track are picked for trajectory benchmarking: the lateral distance between the reference points and the trajectory is calculated and then averaged.
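One reasonable reading of this metric is sketched below: for each reference point on the center line, take the distance to the nearest UWB trajectory sample, then average over the reference points. The exact matching used in the experiments may differ.

    import numpy as np

    def trajectory_error(trajectory, reference_points):
        """trajectory: (N, 2) UWB positions; reference_points: (5, 2) center-line points."""
        errors = [np.linalg.norm(trajectory - ref, axis=1).min() for ref in reference_points]
        return float(np.mean(errors))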

The results, averaged over 5 runs, are shown in Table III. Both methods have around 0.3 meters of trajectory error, but the autonomous runs have a significantly larger standard deviation. This can be interpreted as a human operator maintaining a more consistent driving path, especially through U-turns. RC runs also finish about 10 seconds faster than autonomous runs; human supervisors can clearly take advantage of their experience for better velocity control. Lastly, RC runs have fewer collisions than autonomous runs. Note that we only consider the autonomous runs that successfully traverse the whole track.

RC Auto Auto w/ airflow
Trajectory Error(m)
Duration(sec) 72.6 85.05 112.0
Collision Count 4.0 5.2 7.7
Number of Runs 5 5 3
TABLE III: Experiment Results

In overall performance, the RC method slightly outperforms the autonomous method. However, fully RC operation is only possible when the robot is in a controlled environment with a stable and reliable wireless connection.

IV-B Situational Awareness and Failure Recovery

In real-world scenarios, particularly subterranean environments, communication is usually unreliable and unstable. However, we can still leverage this low-bandwidth, high-latency communication to provide the operator with some information about the blimp and even send signals to recover the robot if it fails at some point. In the previous experiments there were quite a few runs in which Duckiefloat got stuck at corners. In these cases, we test whether the human supervisor can recover it using only situational awareness information received at a low frequency (1 Hz). Among the 29 corners Duckiefloat encountered, 25 (86%) were traversed in autonomous mode. Of the remaining 4 corners, 3 (10%) were recovered by human intervention and 1 (3%) could not be recovered. The human recovery actions are mostly fairly complex; Fig. 5 shows some cases, including sharp turns and even backing up. In the case that could not be recovered, Duckiefloat was stuck too deeply on some structure.

To demonstrate the usability of the failure recovery system, we let Duckiefloat operate in the corridors of our building. We let it run for as long as possible, constantly exploring new areas with the help of human recovery. As a result, Duckiefloat operated for 47 minutes and covered around 500 meters, including going up stairs autonomously. During the run, there were 8 situations in which human recovery was needed; those cases involved corners and constrained paths (Fig. 6).

IV-C Artifact Search

For artifact search, we compared the performance of MobileNet-SSD [liu2016ssd] and Tiny-YOLO [redmon2016You]. We trained both networks with exactly the same training data and chose the model with the best performance across training epochs. On our self-gathered test data, Tiny-YOLO and MobileNet-SSD achieved an AP (IoU=0.5) of 0.48 and 0.74, respectively. We found that Tiny-YOLO is much more sensitive to smaller objects, or objects farther from the camera, which cover a smaller pixel region in the image. This gives it an advantage in finding smaller objects like cellphones but, on the other hand, generates many more false positives. We also found that Tiny-YOLO occasionally classified objects into the wrong classes.

V Real Environment Tests

The previous experiments were all carried out in indoor, controlled environments in order to better understand the system. However, the robot system is designed to operate in real subterranean environments. We tested the system in two tunnel environments: the Houli Tunnel and a coal mine tunnel at the National Institute for Occupational Safety and Health (NIOSH).

The Houli Tunnel is an old train tunnel built in 1908. The total length of the tunnel is 12.69 kilometers, with a width of approximately 4 m. The tunnel forms one long, gentle curve with no branch roads along the way. Duckiefloat was tested in the tunnel and operated for an average of one hour, travelling about 300 meters in each fully autonomous run.

The DARPA SubT Challenge was held at NIOSH in Pittsburgh, where two mine tunnels are maintained for research purposes. The tunnels extend several kilometers in length and include highly constrained passages with rough and muddy floors. 40 artifacts were placed randomly in the tunnels. We deployed our blimp robot in the tunnels, but unfortunately the constrained paths prevented it from performing the search task. Duckiefloat got stuck in some places, and our system succeeded in recovering it and piloting it back to the mine entrance.

Fig. 6: Testing in real-world environments. Top: Scenarios Duckiefloat got stuck and could be recovered by human. Bottom left: Duckiefloat at NIOSH tunnel in the DARPA SubT Challenge. Bottom right: Human supervisor was monitoring situational awareness at base station.
(a) Houli Tunnel
(b) EE Building
(c) Air-raid Shelter
(d) Comparison in Distance
(e) Comparison in Turns
Fig. 7: LoRa communication capability in five different tunnel or tunnel-like environments. The top row shows our sampling points; green dots are base stations and red dots are locations where signals fail to reach the base station. (d) shows how distance affects LoRa signals in different environments. (e) shows how the structure of different environments, i.e., turns, affects LoRa signals.

Communication between Duckiefloat and the base station is essential in the real world to provide situational awareness to human supervisors. We compared and assessed the communication capability of the LoRa modules inside five different environments (Fig. 7). We use the successful transfer ratio of LoRa packets within a time window as the measure of communication capability. We divided the environments into two subsets: one set has longer paths, where we analyzed how far the LoRa signal could travel; the other environments are smaller but more complicated, where we analyzed how many turns the LoRa signal could penetrate.
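If the packets carry sequence numbers, the transfer ratio within a window can be computed directly at the base station, as in this small sketch (the sequence-number scheme is an assumption).

    def packet_success_ratio(received_seq_ids, first_seq, last_seq):
        """Fraction of packets sent in [first_seq, last_seq] that reached the base station."""
        sent = last_seq - first_seq + 1
        received = len({s for s in received_seq_ids if first_seq <= s <= last_seq})
        return received / sent if sent > 0 else 0.0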

In the first set of environments, as Fig. 7(d) shows, LoRa signals travel farther and with a higher packet transfer success ratio inside the Houli Tunnel. We surmise that the size of the Houli Tunnel makes signal propagation easier: while the three tunnels are roughly the same width, the Houli Tunnel is three times higher than the other two. Another reason could be the materials inside the tunnels or within the tunnel walls. Metal causes a shielding effect that weakens the penetrating power of the electromagnetic field; thus, metal structures inside a tunnel cripple LoRa signals.

In the EE Building and the air-raid shelter, we analyzed the number of turns that signals could penetrate (Fig. 7(e)). We defined a turn as the path turning right or left. Although the EE Building has a wider path and a higher ceiling, the signal went through more turns inside the air-raid shelter. We surmise that the long lateral distance (50 m) of the first turn in the EE Building weakens the signal, whereas the cumulative lateral distance in the air-raid shelter is only 5 meters. We therefore conclude that when a turn occurs, the larger the lateral distance, the weaker the LoRa signal becomes while traversing through it.

VI Conclusions

Inspired by the DARPA SubT Challenge, we developed a search and rescue blimp robot, "Duckiefloat", with the ability to perform SLAM and artifact search via a deep learning approach. Duckiefloat provides situational awareness so that human supervisors can gain information about unknown environments for intervention and failure recovery when necessary. We carried out experiments in real-world tunnel or tunnel-like environments with quantitative and qualitative results, providing insights for future directions in developing search-purpose blimp robots.

We also performed experiments comparing flying performance under a controlled airflow (1.2 m/s) against windless environments; the results indicate that the number of collisions increases and the overall speed decreases. During our real-world tests in the Houli Tunnel and the NIOSH tunnel, the Duckiefloat system encountered turbulence: Duckiefloat got stuck by the airflow and was hard to control. In future work we will change the motor configuration or replace the current motors.

Acknowledgments

This research was supported by the Ministry of Science and Technology, Taiwan (grants 107-2923-E009-004-MY3, 108-2218-E-007-039, and 108-2321-B-009-004).

References