
Mapless Navigation of a Hybrid Aerial Underwater Vehicle with Deep Reinforcement Learning Through Environmental Generalization

Previous works have shown that Deep-RL can be applied to perform mapless navigation, including the medium transition of Hybrid Unmanned Aerial Underwater Vehicles (HUAUVs). This paper presents new approaches based on state-of-the-art actor-critic algorithms to address the navigation and medium-transition problems of a HUAUV. We show that a double-critic Deep-RL approach with Recurrent Neural Networks improves the navigation performance of HUAUVs using only range data and relative localization. Our Deep-RL approaches achieved better navigation and transitioning capabilities, with solid generalization of learning across distinct simulated scenarios, outperforming previous approaches.



I Introduction

Several studies about Hybrid Unmanned Aerial Underwater Vehicles (HUAUVs) have been published recently [drews2014hybrid, neto2015attitude, da2018comparative, maia2017design, lu2019multimodal, mercado2019aerial, horn20, aoki2021]. These vehicles enable an interesting range of new applications due to their capability to operate both in the air and underwater, including the inspection and mapping of partly submerged areas in industrial facilities, search and rescue, and others. Most of the literature in the field is still focused on vehicle design, with few published works on autonomous navigation [bedin2021deep]. The ability to navigate in both environments and successfully transition from one to the other imposes additional challenges that must be addressed.

Lately, approaches based on Deep-RL have been enhanced to address navigation-related tasks for a range of mobile vehicles, including ground mobile robots [ota2020efficient], aerial robots [tong2021uav, grando2022double] and underwater robots [carlucho2018]. Based on actor-critic methods and multi-layer network structures, these approaches have achieved interesting results in mapless navigation and obstacle avoidance, even including the medium transition of HUAUVs [bedin2021deep, de2022depth]. However, the challenges faced by this kind of vehicle still leave these existing approaches too limited, with poor generalization across different scenarios.

In this work, we present two new double-critic Deep-RL approaches in the context of HUAUVs to perform navigation-related tasks in a continuous state-space environment: (1) a deterministic approach based on Twin Delayed Deep Deterministic Policy Gradient (TD3) [fujimoto2018addressing]; and (2) a stochastic approach based on Soft Actor-Critic (SAC) [haarnoja2018soft]. We show that we are capable of training agents that are consistently better than the state of the art at generalizing across different simulated scenarios, with improved stability in mapless navigation, obstacle avoidance and medium transition. Our evaluation tasks included both air-to-water and water-to-air transitions. We compared our methods with other single-critic approaches and with an adapted version of a traditional Behavior-Based Algorithm (BBA) [marino2016minimalistic] used in aerial vehicles. Fig. 1 shows a snapshot of our simulation environment.

Figure 1: Our HUAUV underwater in the first scenario (left) and its respective sonar readings (right).

This work provides the following main contributions:

  • We show that our agents present a robust capacity for generalization through different environments, achieving a good performance in a complex and completely unknown environment. The robot also performs the medium transition, being capable of arriving at the desired target and avoiding collisions.

  • We show that a Long Short Term Memory (LSTM) architecture can achieve better overall performance and capacity for generalization than the state-of-the-art Multi-Layer Perceptron (MLP) architectures.

This work has the following structure: related works are discussed in Sec. II. We then present our methodology in Sec. III. The results are presented in Sec. IV and the conclusions in Sec. V.

II Related Work

For more traditional types of vehicles, several works have been published demonstrating how efficiently Deep-RL can solve the mapless navigation problem [tobin2017domain]. For a ground robot, Tai et al. [tai2017virtual] demonstrated a mapless motion planner based on the DDPG algorithm employing a 10-dimensional range finder combined with the relative distance to the target as inputs and continuous steering signals as outputs. Recently, Deep-RL methods have also been successfully used by Ota et al. [ota2020efficient], de Jesus et al. [jesus2019deep, jesus2021soft] and others, to accomplish mapless navigation-related tasks for terrestrial mobile robots. Singh and Thongam [singh2018mobile] demonstrated efficient near-optimal navigation for a ground robot in dynamic environments employing an MLP to perform speed control while choosing collision-free path segments.

For UAVs, Kelchtermans and Tuytelaars [kelchtermans2017hard] demonstrated how memory can help Deep Neural Networks (DNNs) for navigation in a simulated room-crossing task. Tong et al. [tong2021uav] showed better than state-of-the-art convergence and effectiveness by adopting a DRL-based method combined with an LSTM to navigate a UAV in highly dynamic environments with numerous fast-moving obstacles.

When it comes to problems specifically involving mapless navigation for UAVs, few works examine the effectiveness of Deep-RL. Grando et al. [grando2020visual] explored a Deep-RL architecture, but navigation was constrained to a 2D space. Rodriguez et al. [rodriguez2018deep] employed a DDPG-based strategy to land UAVs on a moving platform. Similar to our work, they employed the RotorS framework [furrer2016rotors] combined with the Gazebo simulator. Sampedro et al. [sampedro2019fully] proposed a DDPG-based strategy for search and rescue missions in indoor environments, utilizing real and simulated visual data. Kang et al. [kang2019generalization] also used visual information, although focusing on collision avoidance. In a go-to-target task, Barros et al. [2020arXiv201002293M] applied a SAC-based method for the low-level control of a UAV. Double-critic Deep-RL approaches similar to the one proposed here have also been shown to yield good results [grando2022double].

The HUAUV literature is still mostly concerned with vehicle design and modeling [drews2014hybrid, neto2015attitude, da2018comparative, maia2017design, lu2019multimodal, mercado2019aerial, horn20]. Two works have recently tackled the navigation problem with medium transition for HUAUVs [pinheiro2021trajectory, bedin2021deep]. Pinheiro et al. [pinheiro2021trajectory] focused on smoothing the medium transition in a model simulated in MATLAB. Grando et al. [bedin2021deep] developed actor-critic Deep-RL approaches with an MLP architecture. Both works used generic distance-sensing information for aerial and underwater navigation. In contrast, our work relies on more realistic sensing data, with the simulated LIDAR and sonar both based on real-world devices.

The HUAUV presented in this paper is based on the model of Drews-Jr et al. [drews2014hybrid], which Neto et al. [neto2015attitude] largely expanded. Our work differs from the previously discussed works by using only the vehicle's relative localization data and not its explicit localization. We also present Deep-RL approaches based on double-critic techniques instead of a single critic, with RNN structures instead of the MLPs traditionally used for mapless navigation of mobile robots. We compare our approaches with state-of-the-art Deep-RL approaches and with a behavior-based algorithm [marino2016minimalistic] adapted for hybrid vehicles to show that our new methodology improves the overall capability to generalize across distinct environments.

III Methodology

In this section, we describe our simulation environment, our hybrid vehicle, and the proposed Deep-RL, detailing the network structure for both deterministic and stochastic agents. We also introduce the task that the vehicle must accomplish autonomously and the respective reward function.

III-A Deterministic Deep RL

Developing on the DQN [mnih2013playing], Deep Deterministic Policy Gradient (DDPG) [lillicrap2015continuous] employs an actor network whose output is a vector of real values representing the chosen action, and a second neural network to learn the target function, providing stability and making it well suited to mobile robots [jesus2019deep]. While it provides good results, DDPG still has its problems, like overestimating the Q-values, which leads to policy breaking. TD3 [fujimoto2018addressing] uses DDPG as its backbone and adds improvements such as clipped double-Q learning with two neural networks as targets for the Bellman error loss functions, delayed policy updates, and Gaussian noise on the target action, raising its performance.
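As an illustration, the block below is a minimal PyTorch-style sketch of the clipped double-Q target with target-policy smoothing that TD3 adds on top of DDPG. The network handles and hyperparameter values are illustrative assumptions and do not correspond to the exact configuration used in this work.

import torch

def td3_target(reward, next_state, done,
               actor_target, critic1_target, critic2_target,
               gamma=0.99, policy_noise=0.2, noise_clip=0.5, max_action=1.0):
    """Compute the clipped double-Q target with target-policy smoothing."""
    with torch.no_grad():
        next_action = actor_target(next_state)
        # Gaussian noise added to the target action (target-policy smoothing)
        noise = (torch.randn_like(next_action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-max_action, max_action)
        # Clipped double-Q: take the minimum of the two target critics
        q_next = torch.min(critic1_target(next_state, next_action),
                           critic2_target(next_state, next_action))
        return reward + gamma * (1.0 - done) * q_next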

Our deterministic approach is based on the TD3 technique. The pseudocode can be seen in Algorithm 1.

1:   Initialize the parameters of the critic networks Q_θ1, Q_θ2 and of the actor network π_φ
2:   Initialize the target networks: θ'1 ← θ1, θ'2 ← θ2, φ' ← φ
3:   Initialize the replay buffer B
4:   for episode = 1 to M do
5:       reset environment state s_0
6:       for t = 1 to T do
7:           if t < t_start then
8:               a_t ← env.action_space.sample()
9:           else
10:              a_t ← π_φ(s_t) + ε, with exploration noise ε
11:          end if
12:          s_{t+1}, r_t, done, _ ← env.step(a_t)
13:          store the new transition (s_t, a_t, r_t, s_{t+1}, done) into B
14:          if t > t_start then
15:              Sample a mini-batch of N transitions (s, a, r, s') from B
16:              ã ← π_φ'(s') + ε', with Ornstein-Uhlenbeck noise ε'
17:              Compute the target: y ← r + γ(1 − done) min_{i=1,2} Q_θ'i(s', ã)
18:              Update the double critics with one step of gradient descent on N^{-1} Σ (y − Q_θi(s, a))², for i = 1, 2
19:              if t mod d = 0 then
20:                  Update the policy with one step of gradient ascent on N^{-1} Σ Q_θ1(s, π_φ(s))
21:                  Soft update of the target networks:
22:                  φ' ← τφ + (1 − τ)φ'
23:                  θ'i ← τθi + (1 − τ)θ'i, for i = 1, 2
24:              end if
25:          end if
26:      end for
27:  end for
Algorithm 1 Deep Reinforcement Learning Deterministic

We train for a fixed number of steps per episode over a fixed number of episodes, starting by exploring random actions for an initial number of steps. We use an LSTM as the actor network, with a second LSTM as its target. The double critics are also LSTM networks, each with its own target network. Both critics learn simultaneously, addressing the approximation error, reducing the bias and finding the highest Q-values. The actor target chooses the action based on the next state, and we add Ornstein-Uhlenbeck noise to it. The double critic targets take the resulting state-action pair and return two Q-values, of which only the minimum is considered. The loss is calculated as the Mean Squared Error between the approximate value from the target networks and the value from the critic networks, and we use Adaptive Moment Estimation (Adam) to minimize it.

We update the policy network less frequently than the value networks, using a delay factor that increases over time.
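The block below is a minimal sketch of how recurrent actor and critic networks and the Ornstein-Uhlenbeck exploration noise could be set up in PyTorch. Layer sizes, sequence handling and noise parameters are assumptions for illustration only, not the exact configuration used in this work.

import numpy as np
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    def __init__(self, state_dim=26, action_dim=3, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, action_dim), nn.Tanh())

    def forward(self, state_seq):               # state_seq: (batch, seq_len, state_dim)
        out, _ = self.lstm(state_seq)
        return self.head(out[:, -1])            # action from the last time step

class LSTMCritic(nn.Module):
    def __init__(self, state_dim=26, action_dim=3, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + action_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, state_seq, action):       # action repeated along the sequence
        action_seq = action.unsqueeze(1).expand(-1, state_seq.size(1), -1)
        out, _ = self.lstm(torch.cat([state_seq, action_seq], dim=-1))
        return self.head(out[:, -1])

class OUNoise:
    """Ornstein-Uhlenbeck process for temporally correlated noise (parameters assumed)."""
    def __init__(self, action_dim=3, mu=0.0, theta=0.15, sigma=0.2):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.ones(action_dim) * mu

    def sample(self):
        dx = self.theta * (self.mu - self.state) + self.sigma * np.random.randn(*self.state.shape)
        self.state = self.state + dx
        return self.state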

III-B Stochastic Deep RL

We also introduce a stochastic actor-critic algorithm based on SAC [haarnoja2018soft], which combines off-policy updates with a stochastic actor-critic method to learn continuous action-space policies. It uses neural networks as approximation functions to learn a policy and two Q-value functions, similarly to TD3. However, SAC acts with the current stochastic policy without added noise, providing better stability and performance, and it maximizes both the reward and the policy's entropy, encouraging the agent to explore new states and improving training speed. We use the soft Bellman equation with neural networks as function approximators to maximize the entropy. The pseudocode can be seen in Algorithm 2.

1:   Initialize the parameters of the critic networks Q_θ1, Q_θ2 and of the policy network π_φ
2:   Initialize the target networks: θ'1 ← θ1, θ'2 ← θ2
3:   Initialize the replay buffer B
4:   for episode = 1 to M do
5:       reset environment state s_0
6:       for t = 1 to T do
7:           if t < t_start then
8:               a_t ← env.action_space.sample()
9:           else
10:              a_t ~ π_φ(· | s_t)
11:          end if
12:          s_{t+1}, r_t, done, _ ← env.step(a_t)
13:          store the new transition (s_t, a_t, r_t, s_{t+1}, done) into B
14:          if t > t_start then
15:              Sample a mini-batch of N transitions (s, a, r, s') from B
16:              ã ~ π_φ(· | s')
17:              Compute the soft target: y ← r + γ(1 − done)(min_{i=1,2} Q_θ'i(s', ã) − α log π_φ(ã | s'))
18:              Update the double critics with one step of gradient descent on N^{-1} Σ (y − Q_θi(s, a))², for i = 1, 2
19:              if t mod d = 0 then
20:                  Update the policy with one step of gradient ascent on N^{-1} Σ (min_{i=1,2} Q_θi(s, ã_φ(s)) − α log π_φ(ã_φ(s) | s))
21:                  Soft update of the target networks:
22:                  θ'i ← τθi + (1 − τ)θ'i, for i = 1, 2
23:              end if
24:          end if
25:      end for
26:  end for
Algorithm 2 Deep Reinforcement Learning Stochastic

As before, we train for the same number of steps and episodes, exploring random actions for the initial steps. An LSTM structure was used for the policy network. After sampling a batch of transitions from the replay buffer, we compute the targets for the Q-functions and update them. Here we also update the policy less frequently than the value networks, using the same delay factor as in our deterministic approach.
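For illustration, the block below sketches the entropy-regularized (soft) target used by the stochastic agent. The policy object is assumed to expose a sample method returning an action and its log-probability, and the temperature alpha and discount gamma are illustrative values rather than the ones used in this work.

import torch

def sac_target(reward, next_state, done,
               policy, critic1_target, critic2_target,
               gamma=0.99, alpha=0.2):
    """Soft Bellman backup: clipped double-Q minus the entropy term."""
    with torch.no_grad():
        next_action, log_prob = policy.sample(next_state)   # reparameterized sample (assumed API)
        q_next = torch.min(critic1_target(next_state, next_action),
                           critic2_target(next_state, next_action))
        soft_q = q_next - alpha * log_prob
        return reward + gamma * (1.0 - done) * soft_q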

III-C Simulated Environments

Our experiments were conducted on the Gazebo simulator together with ROS, using the RotorS framework [furrer2016rotors] to allow the simulation of aerial vehicles with different command levels, such as angular rates, attitude, location control and the simulation of wind with an Ornstein-Uhlenbeck noise. The underwater simulation is enabled by the UUV simulator [manhaes2016uuv], which allows the simulation of hydrostatic and hydrodynamic effects, as well as thrusters, sensors, and external perturbations. With this framework, we define the vehicle’s underwater model with parameters such as the volume, additional mass, center of buoyancy, etc., as well as the characteristics of the underwater environment itself.

We developed two environments that simulate a walled water tank, with dimensions of 10×10×6 meters and a one-meter water column. The first environment has four cylindrical columns representing subsea drilling risers. The second environment simulates complex structures, like those found on sea platforms, and contains several elements, such as walls, half walls and pipes (Figure 2).

Figure 2: Our HUAUV performing in the second scenario.

III-D HUAUV Description

Our vehicle was based on the model presented by Drews-Jr et al. [drews2014hybrid], Neto et al. [neto2015attitude] and Horn et al. [horn2019study]. We described it using its actual mechanical settings, including inertia, motor coefficients, mass, rotor velocity, and others. A ROS package containing the vehicle's description plus the Deep-RL agents can be found in the Supplementary Material.

The vehicle sensing was designed to mimic real-world LIDAR and sonar devices. The described LIDAR is based on the Hokuyo UST-10LX model. It provides distance sensing up to 10 meters, with 270° of range and 0.25° of resolution, and is simulated using Gazebo's ray plugin. Our simulated FLS sonar was based on the sonar simulation plugin developed by Cerqueira et al. [cerqueira2017novel]. We described an FLS sonar with 20 meters of range, a bin count of 1000 and a beam count of 256; the beam width and height angles were configured accordingly. The relative localization data was obtained using RotorS' geometric controller. In the real world, localization information can be obtained from a combination of standard localization sensing for hybrid vehicles, such as the Global Positioning System (GPS) and Ultra Short Baseline (USBL).
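The block below is a sketch of how the 20 range samples could be extracted from the simulated sensors: equally spaced LIDAR rays in the air, and 20 equally spaced sonar beams underwater, taking the strongest bin of each beam and converting its index to a distance. The array shapes and the linear bin-to-distance conversion are assumptions based on the description above, not the exact message formats used in this work.

import numpy as np

def lidar_to_ranges(lidar_scan, n_samples=20):
    """Pick n_samples equally spaced readings from a 1-D array of LIDAR ranges."""
    idx = np.linspace(0, len(lidar_scan) - 1, n_samples).astype(int)
    return np.asarray(lidar_scan)[idx]

def sonar_to_ranges(sonar_image, max_range=20.0, n_samples=20):
    """sonar_image: (n_beams=256, n_bins=1000) array of echo intensities.
    Take n_samples equally spaced beams and convert the strongest bin of each
    beam to a distance, assuming bins are linearly spaced up to max_range."""
    beams, bins = sonar_image.shape
    beam_idx = np.linspace(0, beams - 1, n_samples).astype(int)
    strongest_bin = sonar_image[beam_idx].argmax(axis=1)
    return (strongest_bin + 1) / bins * max_range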

III-E Network Structure and Rewarding System

The structure of both our approaches has a total of 26 dimensions for the state: 20 samples from the distance sensors, the three previous actions, and three values related to the target goal, namely the vehicle's relative distance to the target and the relative angles to the target in the x-y plane and in the z-distance plane. When in the air, the 20 samples come from the LIDAR, taken equally spaced across its field of view. When underwater, the distance information comes from the sonar: we take 20 beams equally spaced among the total of 256 and keep the highest bin in each beam. This conversion based on the range gives us the distance to the obstacle or the tank's wall [Santos18, Santos19]. The actions are scaled to fixed ranges for the linear velocity, the altitude velocity and the yaw rate.
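As an illustration, the block below sketches how the 26-dimensional state could be assembled and the network outputs rescaled to velocity commands. The velocity limits are placeholders, since the exact command ranges are not given here.

import numpy as np

def build_state(ranges_20, prev_actions_3, rel_dist, angle_xy, angle_z):
    """26 dims = 20 range samples + 3 previous actions + 3 target-related values."""
    return np.concatenate([ranges_20, prev_actions_3,
                           [rel_dist, angle_xy, angle_z]]).astype(np.float32)

def scale_action(raw_action,
                 lin_limits=(0.0, 0.25),      # placeholder linear-velocity range
                 alt_limits=(-0.25, 0.25),    # placeholder altitude-velocity range
                 yaw_limits=(-0.25, 0.25)):   # placeholder yaw-rate range
    """Map network outputs in [-1, 1] to the command ranges."""
    def affine(x, lo, hi):
        return lo + (x + 1.0) * 0.5 * (hi - lo)
    lin, alt, yaw = raw_action
    return np.array([affine(lin, *lin_limits),
                     affine(alt, *alt_limits),
                     affine(yaw, *yaw_limits)], dtype=np.float32)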

III-E1 Reward Function

We proposed a binary reward function that yields a positive reward in case of success and a negative reward in case of failure or when the episode (ep) reaches the 500-step limit:

r(s_t, a_t) = r_arrive,   if d_t < c_d
r(s_t, a_t) = r_collide,  if min_range_t < c_o or ep = 500     (1)

where d_t is the distance to the target and min_range_t is the smallest current range reading. The reward r_arrive was set to 100, while the negative reward r_collide was set to -10. Both the c_d and c_o distance thresholds are defined in meters.
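The block below is a sketch of the binary reward described above. The distance thresholds c_d and c_o are placeholders, since their exact values are not given here.

def reward(dist_to_goal, min_range, step,
           c_d=0.5, c_o=0.5,            # placeholder arrival/collision thresholds (m)
           r_arrive=100.0, r_collide=-10.0, max_steps=500):
    """Return (reward, episode_done) for the current step."""
    if dist_to_goal < c_d:
        return r_arrive, True            # reached the target
    if min_range < c_o or step >= max_steps:
        return r_collide, True           # collision or episode timeout
    return 0.0, False                    # otherwise, no reward and continue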

IV Experimental Results

In this section, we present the results obtained during our evaluation. During the training phase, we generated a random goal towards which the agent should navigate. The agents trained for a maximum of 500 steps, or until they collided with an obstacle or with the tank's border. If the agent reached the goal before the step limit, a new random goal was generated, allowing the total amount of reward accumulated in an episode to eventually exceed 100. The same learning rate was used for all approaches, including the compared methods, together with a minibatch of 256 samples and the Adam optimizer. We limited training to 1500 episodes, a limit chosen based on the stagnation of the maximum average reward received.
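The training setup above can be summarized as the configuration sketch below. The learning rate is left as a placeholder because its value is not given in this text.

TRAIN_CONFIG = {
    "max_steps_per_episode": 500,
    "max_episodes": 1500,
    "batch_size": 256,
    "optimizer": "Adam",
    "learning_rate": None,   # value not specified here
}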

For each scenario and model, an extensive amount of statistics was collected. The task addressed is goal-oriented navigation with medium transition, where the robot must navigate from a starting point to an endpoint. This task was addressed in two ways in our tests: (1) starting in the air, performing the medium transition and navigating to a target underwater; and, the other way around, (2) starting underwater, performing the medium transition and navigating to a target in the air. We collected the statistics for each of our proposed models (Det. and Sto.) and compared them with the performance of the state-of-the-art deterministic (Det.) and stochastic (Sto.) Deep-RL methods for HUAUVs, as well as a behavior-based algorithm [marino2016minimalistic]. These tasks were performed for 100 trials each, and we recorded the total number of successful trials, the average navigation time underwater (t_water) and in the air (t_air), and their standard deviations.

The models were all trained in the first environment and evaluated in both the first (same as trained) and second (never seen) environments. We aim to highlight one of the main contributions of this work, i.e., the robust capacity of our method to generalize across environments, in this case by performing in a second, unknown and more complex environment. We set the initial position for the Air-Water (A-W) trials to (0.0, 0.0, 2.5) in Gazebo Cartesian coordinates for the two scenarios. The target position used was (3.6, -2.4, -1.0). In both environments, the target was placed on a path with obstacles along the way. Table I shows the results obtained for each environment over 100 navigation trials.

Env   Test   t_water (s)   t_air (s)   Success
1 A-W Det. 94
1 A-W Sto.
1 A-W Sto. Grando et al. [bedin2021deep]
1 A-W Det. Grando et al. [bedin2021deep] 13.84 2.11 5.44 1.73 100
1 A-W BBA
1 W-A Det.
1 W-A Sto.
2 A-W Det. 8.44 9.09 73
2 A-W Sto. 14.89 1.120 94
2 A-W Sto. Grando et al. [bedin2021deep] - -
2 A-W Det. Grando et al. [bedin2021deep] - -
2 A-W BBA
2 W-A Det. 8.54 4.44 4.27 0.47 8
2 W-A Sto. 15.43 13.39 6.60 1.75 10
2 W-A Sto. Grando et al. [bedin2021deep] - -
2 W-A Det. Grando et al. [bedin2021deep] - -
2 W-A BBA
Table I: Mean and standard deviation metrics over 100 navigation trials for all approaches in all scenarios.

We also performed a complementary comparison in the second scenario, evaluating models with different training regimes. First, we collected data for the deterministic and stochastic models trained only in the first environment for 1500 episodes (Env1), as shown before. Then, we trained these models for 500 additional episodes in the second environment (Both). Lastly, we compared them with deterministic and stochastic models trained only in the second environment for 1500 episodes (Env2). Table II shows the obtained results.

Model   t_water (s)   t_air (s)   Success
A-W Det. (Env1) 8.44 9.09 73
A-W Sto. (Env1) 14.89 1.120 94
A-W Det. (Both) 14.14 3.77 8.69 3.17 99
A-W Sto. (Both) 100
A-W Det. (Env2)
A-W Sto. (Env2)
W-A Det. (Env1)
W-A Sto. (Env1)
W-A Det. (Both) 25.09 38.86 4.62 0.51 34
W-A Sto. (Both) 83
W-A Det. (Env2) - -
W-A Sto. (Env2)
Table II: Mean and standard deviation metrics over 100 navigation trials tested in the second simulated environment, for both deterministic and stochastic models trained only in the first environment (Env1), in both first and second environments (Both), and only in the second environment (Env2).

V Conclusions

The evaluation shows an overall increase in navigation performance in both environments. Our approaches achieve a consistent performance of 100 successful air-to-water navigation trials, with consistent navigation times. In this same scenario, the stochastic approach performed slightly worse in air-to-water navigation but outperformed the deterministic approach in water-to-air navigation. In the second scenario, we can see more clearly that a double-critic approach with an RNN structure also has a better ability to learn and generalize the environment, including the obstacles and the medium transition. While the state-of-the-art approaches with an MLP structure were not capable of performing the task, our approaches once again presented a consistent performance, especially in air-to-water navigation. Our approaches showed an excellent ability to learn the tasks and the environmental difficulties, not only the scenario itself. This was further addressed in our additional evaluation with agents trained in the first environment only, in both the first and second environments, and in the second environment only. Overall, we can conclude that double-critic approaches with recurrent neural networks present a consistent ability to learn across scenarios and environments and to generalize between them. Our approaches also outperformed the BBA algorithm in the rate of successful trials and in average time in almost all situations.


It is important to mention that these approaches were extensively evaluated in a realistic simulation, including control issues and disturbances such as wind. Thus, the results indicate that our approach may achieve real-world applicability, provided that correct sensing and relative localization data are ensured. Finally, these new RNN-based approaches also provided a more consistent average course of action across the environments.

By using physically realistic simulation in several water-tank-based scenarios, we showed that our approaches achieved an overall better capability to perform autonomous navigation, obstacle avoidance and medium transition than existing approaches. Disturbances such as wind were successfully assimilated, and good generalization across different scenarios was achieved. With our simple and realistic sensing approach, which takes into account only range information, we obtained overall better performance than the state-of-the-art approaches and the classical behavior-based algorithm. Future studies with our real HUAUV are underway.

Acknowledgment

The authors would like to thank the VersusAI team. This work was partly supported by the CAPES, CNPq and PRH-ANP.

References