1 Introduction
In contrast to the intense study of deep Reinforcement Learning (RL) in games and simulations [1], employing deep RL on real-world robots remains challenging, especially in high-risk scenarios. Though there has been some progress in RL-based control for real robots [2, 3, 4, 5], most of those previous works do not specifically address the safety concerns of the RL training process. For the majority of high-risk scenarios in the real world, deep RL still suffers from bottlenecks in both cost and safety. As an example, collisions are extremely dangerous for UAVs, while RL training requires thousands of collisions. Other works contribute to building simulation environments and bridging the gap between reality and simulation [4, 5]. However, building such simulation environments is arduous, not to mention that the gap can never be fully closed.
To address the safety issue in real-world RL training, we present the Intervention Aided Reinforcement Learning (IARL) framework. Intervention is commonly used in many real-world automatic control systems for safety insurance. It is also regarded as an important evaluation criterion for autonomous navigation systems, e.g. the disengagement rate in autonomous driving (for details, see the Autonomous Vehicle Disengagement Reports). In this work, instead of using disruptive events such as collisions as feedback, we try to optimize the policy by avoiding human intervention. Though utilizing human intervention to avoid fatal mistakes in RL training has recently been applied to Atari games [6], our work is featured in real-world applications. Besides, we propose a more general framework to take human intervention into account: we define the intervention as a combination of an unknown classifier deciding when to seize control and a reference control policy; we redefine the behavior policy as a blending of the policy to be optimized and the intervention; then we try to reduce the probability of intervention and learn from the reference policy at the same time.
In this work we are mainly interested in navigating an Unmanned Aerial Vehicle (UAV) in cluttered and unstructured environments. The agent is required to approach a specified goal at a certain velocity without collision. Compared with traditional navigation, the proposed navigation policy is based on a more end-to-end architecture: the policy model takes multimodal observations, including depth images, supersonic sensor readings, velocity, and direction to the goal, as input and performs attitude control. We compare IARL with different learning algorithms, including classic imitation learning and reinforcement learning, in simulation, and test IARL on a realistic UAV navigation task. We demonstrate that IARL learns to avoid collisions nearly perfectly compared with the other baselines, even though it has never seen a single collision in the training process. In the meantime, IARL reduces human intervention substantially more than the other methods. Furthermore, our method can be applied to various real-world autonomous control systems such as self-driving cars.
2 Related Works
There has been abundant work on visual navigation systems [7], among which our work is closely related to mapless reactive visual navigation systems [8, 9]. Traditional reactive controllers either rely on artificially designed rules, such as boundary following, which are highly adapted to specific obstacles [10, 8], or use empirical models such as Artificial Potential Fields (APF [11, 12]), which build virtual potential fields for collision avoidance. The above works require artificial tuning of many hyperparameters and hand-written rules. Some of those methods show competitive performance in specific scenarios, but they are not easily extended to unstructured scenarios.
Imitation learning has played an important role in intelligent controllers in recent years: a policy is learned by reducing the difference between itself and a reference policy (typically human demonstrations). Among these methods, supervised learning (or behavior cloning) generates the training trajectories directly from the reference policy [13, 14]. It is relatively simple, but the performance is usually unstable due to the inconsistency between the training and testing trajectories. Ross et al. proposed Data Aggregation (DAgger [15]) as an improvement to supervised learning, generating trajectories with the current policy. Their subsequent work [16] successfully employed DAgger for collision-free navigation on a UAV. Yet DAgger remains expensive, and it requires a tricky labeling process that is hard to operate in practice.

Our work is also related to recent progress in reinforcement learning, especially deterministic policy gradients for continuous control [17, 18, 19]. We use Proximal Policy Optimization (PPO) [19] with Generalized Advantage Estimation (GAE) [20] as a baseline, which has been widely accepted for its robustness and effectiveness. However, most RL algorithms cannot be directly deployed to high-risk scenarios due to the safety issue, thus we make this comparison in simulation only. In addition, we use the trick of combining imitation learning and reinforcement learning to accelerate the training process, which is closely related to the previous works of [21, 22].

3 Algorithms
3.1 Backgrounds
In this part we introduce the notation as well as the formulations of the benchmarked algorithms. A Markov Decision Process (MDP) is described by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P})$, where $\mathcal{S}$ and $\mathcal{A}$ represent the state and action space, $\mathcal{R}$ represents the rewards, which are typically real numbers, and $\mathcal{P}$ represents the transition probability. We use the notation $t \in \{1, ..., T\}$ to represent the time step, where $T$ is the length of an episode. $s_t$, $a_t$ and $r_t$ denote the specific state, action and reward at step $t$. We use the notation $\pi_\theta$ to represent the deterministic policy that we want to optimize. We have $a_t = \pi_\theta(s_t)$, with $\theta$ representing the trainable parameters to be optimized. A more general stochastic policy $\beta_\theta(a|s)$ is defined as a distribution over $\mathcal{A}$. Usually a Gaussian distribution is used for a stochastic policy in a continuous action space, which can be written as $\beta_\theta(a|s) = \mathcal{N}(\pi_\theta(s), \sigma^2)$. $\sigma$ can either be a hyperparameter or be optimized together with the other trainable parameters.

Imitation Learning. Behavior cloning is a straightforward approach to imitation learning. If we use the mean square error as the target (loss) function, it can be written as
$L_{SL}(\theta) = \mathbb{E}_{\tau \sim \beta}\Big[\sum_t \|\pi_\theta(s_t) - a_t^{ref}\|^2\Big]$ (1)
where $a_t^{ref}$ denotes the reference action, and the notation $\mathbb{E}_{\tau \sim \beta}$ represents the expectation over trajectories that follow the behavior policy $\beta$. Usually the data collection and the policy updating are separate and independent in such a supervised learning process, which leads to inconsistency between training and testing. DAgger [15] substitutes the behavior policy with $\beta_\theta$ to alleviate the discrepancy. We omit its loss function, as it is similar to Equation 1.
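As a minimal sketch of the two imitation-learning variants above: behavior cloning computes the mean-square loss of Equation 1 on reference-generated data, while DAgger rolls out the *current* policy and labels each visited state with the reference action. The function names and the toy environment interface here are illustrative, not from the paper.

```python
import numpy as np

def bc_loss(policy_actions, reference_actions):
    """Mean-square behavior-cloning loss (Equation 1): average squared
    distance between the policy's actions and the reference actions."""
    diff = np.asarray(policy_actions) - np.asarray(reference_actions)
    return float(np.mean(np.sum(diff ** 2, axis=-1)))

def dagger_collect(env_step, policy, reference_policy, s0, horizon):
    """One DAgger rollout: act with the current policy, but label every
    visited state with the reference policy's action."""
    states, labels, s = [], [], s0
    for _ in range(horizon):
        a = policy(s)                        # behavior policy drives the rollout
        states.append(s)
        labels.append(reference_policy(s))   # reference policy provides labels
        s = env_step(s, a)
    return states, labels
```

The aggregated (states, labels) pairs are then fed back into the same MSE loss, which is why DAgger's trajectories stay consistent with the policy being trained.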
Policy Gradient. We use a formulation of policy gradients similar to Schulman et al. [20], where the target function is written as:
$L_{PG}(\theta) = -\mathbb{E}_{\tau \sim \beta_\theta}\Big[\sum_t A_t \log \beta_\theta(a_t|s_t)\Big]$ (2)
There are different expressions for the advantage $A_t$ in Equation 2. $A_t = \sum_{t' \ge t} r_{t'}$ gives REINFORCE [23]. The Actor-Critic method [24] uses the one-step temporal difference error $\delta_t = r_t + \gamma V^\beta(s_{t+1}) - V^\beta(s_t)$, where $V^\beta$ represents the discounted value function under policy $\beta_\theta$, and $\gamma$ denotes the discount factor. GAE [20] is represented by $A_t^{GAE} = \sum_{l \ge 0} (\gamma\lambda)^l \delta_{t+l}$.
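The GAE estimator can be computed in a single backward pass over an episode, since the discounted sum of TD errors satisfies a simple recurrence. A minimal sketch (function name and argument layout are ours):

```python
def gae_advantages(rewards, values, gamma, lam):
    """Generalized Advantage Estimation: A_t = sum_l (gamma*lam)^l * delta_{t+l},
    with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t).  `values` holds one extra
    entry for the state after the last reward (0 for a terminal state)."""
    T = len(rewards)
    adv, running = [0.0] * T, 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running   # A_t = delta_t + (gamma*lam) A_{t+1}
        adv[t] = running
    return adv
```

Setting lam=0 recovers the one-step Actor-Critic TD error, while lam=1 with a zero value function recovers the REINFORCE return, matching the three special cases above.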
The calculation of $A_t$ mentioned above requires on-policy generation of trajectories. When external intervention is introduced, the policy optimization of Equation 2 cannot be directly applied. In the next subsection we reformulate the policy optimization paradigm with intervention.
3.2 Intervention Aided Reinforcement Learning
We assume that the expert in the IARL system is represented by the pair $(G, \pi_{ref})$, where $G(s)$ is the probability of the expert intervening in the system when observing the agent in state $s$. Notice that both $G$ and $\pi_{ref}$ are decided by the expert himself and are not known in advance to the agent. The expert samples a decision $g_t \sim \mathrm{Bernoulli}(G(s_t))$ at time $t$, determining whether the agent is intervened eventually. We write the actual action as $a_t = g_t\, a_t^{ref} + (1 - g_t)\, a_t^{\theta}$, with $a_t^{ref} = \pi_{ref}(s_t)$ and $a_t^{\theta} \sim \beta_\theta(\cdot|s_t)$. $a_t$ is regarded as sampled from the mixed policy of the expert and the agent, which is denoted $\beta_{mix}$ and given by:
$\beta_{mix}(a|s) = G(s)\,\mathbb{1}[a = \pi_{ref}(s)] + (1 - G(s))\,\beta_\theta(a|s)$ (3)
Note that the behavior policy (the policy under which the data is collected) in IARL is $\beta_{mix}$, not $\beta_\theta$. In order to stay approximately on-policy, we optimize $\theta$ through the mixed policy $\beta_{mix}$ instead of $\beta_\theta$.
In order to prevent intervention, we also reshape the reward function in IARL. We treat intervention as a failure, and reshape the reward by punishing the intervention with
$r'_t = r_t - r_{int} \cdot g_t$ (4)
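One environment step under the mixed behavior policy of Equation 3, with the reshaped reward of Equation 4, can be sketched as follows. The `expert.gate(s)` / `expert.act(s)` interface is an assumption of ours for illustration, standing in for the expert's $G(s)$ and $\pi_{ref}(s)$:

```python
import random

def step_with_intervention(s, policy_action, expert, env_step, r_int):
    """One step under the mixed policy: the expert seizes control with
    probability G(s) (Equation 3), and intervened steps are punished by
    r_int (Equation 4)."""
    g = 1 if random.random() < expert.gate(s) else 0   # g_t ~ Bernoulli(G(s_t))
    a = expert.act(s) if g else policy_action          # a_t from the mixed policy
    s_next, r = env_step(s, a)
    r_shaped = r - r_int * g                           # reshaped reward r'_t
    return s_next, a, r_shaped, g
```

The returned gate value g is stored with the transition, since the IARL losses below mask their policy-gradient terms with it.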
where $r_{int}$ is a predefined hyperparameter. Notice that all collisions have been prevented by the expert intervention, thus the agent learns to avoid collisions only by avoiding the intervention. By replacing $\beta_\theta$ with $\beta_{mix}$ and $r_t$ with $r'_t$, the derivative of the policy loss in Equation 2 with respect to $\theta$ is written as Equation 5. Note that any derivative of the reference policy vanishes, and we use $w_t$ to abbreviate the resulting per-step weight.
$\frac{\partial L(\theta)}{\partial \theta} = -\mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t A_t\, w_t\, \frac{\partial \log \beta_\theta(a_t|s_t)}{\partial \theta}\Big], \quad w_t = \frac{(1 - G(s_t))\,\beta_\theta(a_t|s_t)}{\beta_{mix}(a_t|s_t)}$ (5)
We can see that $w_t \to 1$ if $G(s_t) = 0$, and $w_t \to 0$ if $G(s_t) = 1$. $w_t$ can be interpreted as a “mask” that blocks the gradient when the expert takes over. For those states where $G(s_t) = 1$, the action is decided by $\pi_{ref}$ and the trajectory is not determined by $\beta_\theta$, thus the gradient vanishes. However, $G$ as well as $\beta_{mix}$ cannot be directly approximated, and thus $w_t$ is not known exactly. We turn to the simple approximation $w_t \approx 1 - g_t$. The error of this approximation is small when $G(s_t)$ is close to 0 or 1; it may indeed result in larger errors in other cases, but these errors turn out to be acceptable in our experiments. We rewrite the loss of IARL as:
$L_{IARL}(\theta) = -\mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t (1 - g_t)\, A_t \log \beta_\theta(a_t|s_t)\Big]$ (6)
In Equation 6 the summation effectively runs over the steps satisfying $g_t = 0$ only, while the steps with $g_t = 1$ are completely “masked”. Notice that the value function still backs up through all states to give $A_t$, thus Equation 6 tends to avoid the intervention beforehand. However, Equation 6 does not improve the performance of the policy once the intervention has taken place (e.g., when $G(s_t)$ is close to 1). This is harmful to the robustness of policy $\beta_\theta$, as it is likely to malfunction in the states of the “intervened zone”. To alleviate this defect, we introduce an additional imitation loss into Equation 6. The imitation loss matches the optimizing policy $\pi_\theta$ to the reference policy $\pi_{ref}$, which leads to:
$L_{IARL}(\theta) = -\mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t (1 - g_t)\, A_t \log \beta_\theta(a_t|s_t)\Big] + \alpha\, \mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t g_t\, \|\pi_\theta(s_t) - a_t^{ref}\|^2\Big]$ (7)
An intuitive explanation of Equation 7 is that the first term on the right-hand side avoids intervention beforehand, while the second term imitates the reference policy in the “intervened zone”. The hyperparameter $\alpha$ balances the importance of the two parts. The training process of IARL is presented in Algorithm 1. It can easily be extended to PPO; we leave the details to the supplements. We call Equation 6 IARL(No Imitation) and Equation 7 IARL(Imitation).
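A minimal numpy sketch of the IARL(Imitation) objective of Equation 7, computed over one batch of recorded steps; the per-step log-probabilities and advantages are assumed to be precomputed elsewhere:

```python
import numpy as np

def iarl_loss(log_probs, advantages, gates, policy_actions, ref_actions, alpha):
    """IARL(Imitation) loss (Equation 7): non-intervened steps (g_t = 0)
    contribute the masked policy-gradient term; intervened steps (g_t = 1)
    contribute the imitation (MSE) term, weighted by alpha."""
    g = np.asarray(gates, dtype=float)
    pg = -np.sum((1.0 - g) * np.asarray(advantages) * np.asarray(log_probs))
    sq = np.sum((np.asarray(policy_actions) - np.asarray(ref_actions)) ** 2, axis=-1)
    im = np.sum(g * sq)
    return pg + alpha * im
```

Dropping the imitation term (alpha = 0) recovers the IARL(No Imitation) loss of Equation 6.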
4 System Setup
In order to validate IARL, we built the UAVs and the control systems in both simulation and reality, with similar configurations in the two environments. In this section we briefly introduce the architecture of the control system and the neural network structure of the decision model (the policy).
4.1 Control Architecture
The overall control system includes the decision model to be optimized, two assistant Proportional-Integral-Derivative (PID) controllers, and an underlying double closed-loop PID controller. The input to our system includes image flows captured by binocular stereo vision, supersonic sensors in three directions (left, right and front), and an Inertial Measurement Unit (IMU).
The commanded pitch and roll angles of the UAV are decided by the policy model. We use two additional controllers to handle the yaw and the thrust, which keep the drone at a fixed height and heading angle. The central controller sends the full attitude commands to the underlying control module, which drives the motor actuators. The remote controller can override those commands at any time when a switch is turned on by the human expert, which corresponds to the intervention $g_t$ in Algorithm 1. A sketch of the controller architecture can be found in Figure 1. The output of the policy model is computed at a frequency of 10 Hz, and the underlying controllers run at 400 Hz.
4.2 Model Structure
Our model does not use raw signals as input directly; instead, we introduce several preprocessing steps. The IMU and stereo camera signals are combined and preprocessed to generate velocities and positions as well as depth information, using an open-source SDK (the boteye SDK: https://github.com/baidu/boteye). We also calculate the velocity and the direction to the goal from raw signals and use them as observations; details of the features used can be found in the supplements.
The policy network and the value network share the same model structure but have different outputs and independent parameters. To each frame of the depth image we apply four convolution-pooling layers. The hidden representation is then concatenated with the other features. Traditional reactive controllers such as APF typically use only a single frame of observation. However, the real system is closer to a Partially Observable MDP due to noisy sensor signals, and we found that multiple frames of observation help to reduce the noise and make up for the information missing from each single observation. We therefore build a model similar to the architecture reported by Hausknecht et al. [25]: an LSTM unit [26] encodes the observations of the last 2.5 seconds in order. We apply a tanh layer to restrict the output to [-1, 1], which is scaled to the allowed pitch/roll angle range. The structure of the model is shown in Figure 2. We also investigated the performance of two other model structures, using a fixed dataset collected in simulation with the reference policy, containing 1 million frames of training data and 50,000 frames of testing data. Comparing the mean square error (MSE) of the different structures on the test set, the recurrent network (2D convolution + LSTM) achieved a lower MSE than both the single-observation model (2D convolution + MLP) and the frame-stacking model (3D convolution + MLP), demonstrating the superiority of the proposed model structure.

5 Experiments
5.1 Simulation Environments
Simulator. We use the Robot Operating System (ROS) and Gazebo 7 [27] for simulation. The drone is simulated using the open-source hector_quadrotor package on ROS [28]. Some of the kinetic hyperparameters are readjusted to match our drone in reality. An Intel RealSense sensor is attached to the simulated drone to represent the stereo vision. In simulation, the velocity and position are directly acquired from the simulator instead of being calculated from the raw observations.
Environments. The drone is required to start from a fixed start point and find its way to a goal at the top of the simulation region. The height and yaw angle of the UAV are kept unchanged. We manually designed three kinds of obstacles in Gazebo 7, including two sizes of walls and cylindrical buckets. We randomly scattered a certain amount of obstacles in the region, dropping the scenarios without a valid inner path between the start point and the goal. 100 different scenarios were finally generated with different random seeds; 90 of them were treated as training scenes, while the remaining 10 were testing scenes.
Expert simulation. Training with real human operators in simulation is a relatively expensive choice. To further reduce the cost, we built a simulated expert, which gives $\pi_{ref}$ and $G$ automatically. We use ground-truth information provided by Gazebo 7 to simulate the expert; this information is hidden from the agents. The details of the expert simulation are left to the supplements. The simulated expert helps to avoid most collisions during IARL training. In order to make a fair comparison, we want the IARL agent to learn only from intervention; thus we dropped the remaining collided episodes in the IARL training process.
Reward. The basic reward function for RL and IARL is based on $v^g_t$, the projection of the velocity onto the direction of the goal: a term rewarding $v^g_t$ encourages the drone toward the desired goal, and a penalty term slows the drone down to avoid risky accelerations, such that the maximum reward is achieved when the drone moves at a fixed target speed towards the goal. A collision with any obstacle triggers the failure of an episode, which terminates the episode and gives an additional punishment. If no collision is detected, the episode terminates on arriving at the goal or on reaching a maximum number of steps. In IARL, we punish intervention with the constant $r_{int}$ of Equation 4.
Evaluation. We perform two kinds of tests for each method and each test scenario. The Normal Test evaluates $\beta_\theta$ with the external intervention removed; the accumulated reward $\sum_t r_t$ is used as the criterion. The Intervention Aided Test keeps the external intervention; the accumulated reshaped reward $\sum_t r'_t$ is used as the criterion. Another criterion in the Intervention Aided Test is the intervention rate (IR), defined as the fraction of time steps at which the expert intervenes, $IR = \frac{1}{T}\sum_t g_t$. This evaluation metric is similar to the disengagement rate evaluation in autonomous vehicles. All our evaluation results are averaged over the 10 test scenarios.

5.2 Methods for comparison
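The IR criterion above reduces to a simple average over the recorded intervention decisions of an episode:

```python
def intervention_rate(gates):
    """Intervention rate (IR): fraction of time steps at which the expert
    had seized control (g_t = 1) during an episode."""
    gates = list(gates)
    return sum(gates) / len(gates) if gates else 0.0
```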
SL. We collect 1 million frames under the reference policy $\pi_{ref}$ in the 90 training scenarios in advance. The policy is optimized with Adam and Equation 1 for 50 epochs. The test is performed only once.
DAgger. The behavior policy is set to $\beta_\theta$, and the reference action $a_t^{ref}$ is recorded during the training process. Every 50,000 frames, the collected data is aggregated into the current dataset, and the aggregated dataset is used to train the policy using Equation 1 for 20 epochs (one iteration). After each iteration we perform tests with the current policy. We run this process up to 1 million frames.
RL. The RL group uses PPO [19] with GAE [20], and uses collisions as its feedback. Every iteration contains 10,000 frames of data and 20 epochs of training. The test is performed after each training epoch. We ran the training process up to 1 million frames.
IARL. The policy is optimized using Algorithm 1; the training trajectories include no collisions. Both IARL(No Imitation) and IARL(Imitation) are tested. The training and testing schedule is kept equal to that of the RL group.
5.3 Simulation Results and Discussions
We compare each of the four methods in both the Normal Test and the Intervention Aided Test. The performance against trained frames is plotted in Figure 3. Several remarks can be made from the results. The IARL(No Imitation) group shows completely different performance under the two evaluation methods: in the Intervention Aided Test, where the reward and the behavior policy are coherent between training and testing, it achieves acceptable performance in both reward and IR; in the Normal Test, where the intervention is removed, the IARL(No Imitation) group collides frequently with obstacles, resulting in a low performance score. In contrast, IARL(Imitation) surpasses all the other groups in each test, especially in the IR evaluation.
We count the average collision rate (the fraction of collided cases out of the 10 test scenarios) over the last 5 evaluations in the Normal Test; Table 1 presents the result. IARL(Imitation) steadily reduces its collision rate to 0, while the other groups suffer from occasional collisions. Notice that in the Intervention Aided Test the IR never dropped to 0, as we found that the intervention might overreact in some cases; in other words, the expert intervenes even when the drone could have avoided the collision all by itself.
We also plot the trajectories of RL, IARL(Imitation) and DAgger in Figure 4. A close look at the differences between IARL and the other two groups shows that the IARL agent is more likely to choose a conservative trajectory: it tends to keep enough distance from the obstacles, while the RL agent pursues riskier trajectories for higher rewards. Besides, we can see that the DAgger and IARL agents successfully explore a path out of a blind alley (the trajectories in the middle figure), where the RL agent fails to escape (it stops, without any collision). Note that overstepping such local optima by exploring around is nearly impossible for reactive controllers that rely on a single frame of observation only.
Table 1: Average collision rate in the Normal Test.

Method  SL  DAgger  RL  IARL(No Imitation)  IARL(Imitation)
Average Collision  40%  24%  22.5%  42%  0%
5.4 Experiments in Reality
Environments. The real-world drone is an X-type quadrotor with a 38 cm frame and 1.6 kg weight (Figure 5). It is equipped with an Nvidia Jetson TX2 as the onboard computer, stereo vision, supersonic sensors, an IMU and a barometer. All computations are completed on board. For simplicity, the environment is composed of two kinds of obstacles built from foam boards: triangular and square pillars (Figure 5).
Startup Policy. In order to validate IARL in reality at an acceptable cost, we first collected 100,000 frames of expert manipulation in different cluttered scenarios. We then took advantage of the policy and value networks trained in simulation, fine-tuning the policy on the labeled data for several epochs. This model is used as the startup policy for IARL training.
IARL Training. During the training process, a human operator keeps watching the drone and intervenes with the remote controller when necessary, based on his own judgment. The training environments are manually rearranged at random after several passes. Each iteration (training round) includes 5,000 frames of data (around 20 passes) and 20 epochs of training. Two extra scenarios are built and used for testing. Similar to the evaluation in simulation, we apply the Intervention Aided Test in the two test scenarios every 20,000 frames; the Normal Test is not available in reality due to the safety issue.
Results. The IR in each test round is plotted in Figure 6, together with the average pass time (defined as the total time consumed from the start point to the goal) and the average intervention time (defined as the length of time with $g_t = 1$). It is worth mentioning that the overall performance is not as good as in simulation (where the IR dropped much lower). There are two likely reasons: first, the signals acquired from real sensors are much noisier, especially the depth image; second, there is considerable delay between observation and decision making, mainly caused by stereo matching and visual odometry, which can reach 0.3 to 0.4 s. Even with those disadvantages, we can see that our model learned to substantially reduce the IR and the intervention time.
6 Conclusion
In this paper, we propose the IARL framework, which improves the policy by avoiding and imitating the external intervention at the same time. We successfully deploy our method to a UAV platform to achieve mapless collision-free navigation in unstructured environments. The experimental results show that IARL achieves satisfying collision avoidance and a low intervention rate, and ensures safety during the training process. Looking into the future of this technique, we anticipate that IARL will serve as an effective policy optimization method in various high-risk real-world scenarios.
References

Duan et al. [2016] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pages 1329–1338, 2016.
 Kober et al. [2013] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. IJRR, 32(11):1238–1274, 2013.
 Levine et al. [2016] S. Levine, C. Finn, T. Darrell, and P. Abbeel. Endtoend training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
 Rusu et al. [2016] A. A. Rusu, M. Vecerik, T. Rothörl, N. Heess, R. Pascanu, and R. Hadsell. Simtoreal robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016.
 Zhu et al. [2017] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. FeiFei, and A. Farhadi. Targetdriven visual navigation in indoor scenes using deep reinforcement learning. In ICRA, pages 3357–3364, 2017.
 Saunders et al. [2018] W. Saunders, G. Sastry, A. Stuhlmueller, and O. Evans. Trial without error: Towards safe reinforcement learning via human intervention. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 2067–2069. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
 BoninFont et al. [2008] F. BoninFont, A. Ortiz, and G. Oliver. Visual navigation for mobile robots: A survey. Journal of intelligent and robotic systems, 53(3):263, 2008.
 Oleynikova et al. [2015] H. Oleynikova, D. Honegger, and M. Pollefeys. Reactive avoidance using embedded stereo vision for mav flight. In ICRA, pages 50–56, 2015.
 Syre Wiig et al. [2017] M. Syre Wiig, K. Y. Pettersen, and T. R. Krogstad. A reactive collision avoidance algorithm for vehicles with underactuated dynamics. In CDC, pages 1452–1459, 2017.
 Matveev et al. [2013] A. S. Matveev, M. C. Hoy, and A. V. Savkin. The problem of boundary following by a unicyclelike robot with rigidly mounted sensors. Robotics and Autonomous Systems, 61(3):312–327, 2013.
 Lacroix et al. [1998] S. Lacroix, S. Fleury, H. Haddad, M. Khatib, F. Ingrand, G. Bauzil, M. Herrb, C. Lemaire, and R. Chatila. Reactive navigation in outdoor environments. In ICRA, pages 1232–1237, 1998.
 Vadakkepat et al. [2000] P. Vadakkepat, K. C. Tan, and W. MingLiang. Evolutionary artificial potential fields and their application in real time robot path planning. In Evolutionary Computation, volume 1, pages 256–263, 2000.
 Liu et al. [2017] G.H. Liu, A. Siravuru, S. Prabhakar, M. Veloso, and G. Kantor. Learning endtoend multimodal sensor policies for autonomous navigation. arXiv:1705.10422, 2017.
 Bojarski et al. [2016] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for selfdriving cars. arXiv:1604.07316, 2016.
 Ross et al. [2011] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to noregret online learning. In AISTATS, pages 627–635, 2011.
 Ross et al. [2013] S. Ross, N. MelikBarkhudarov, K. S. Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert. Learning monocular reactive uav control in cluttered natural environments. In ICRA, pages 1765–1772, 2013.
 Lillicrap et al. [2015] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
 Schulman et al. [2015] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
 Schulman et al. [2017] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv:1707.06347, 2017.
 Schulman et al. [2015] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. Highdimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015.
 Rajeswaran et al. [2017] A. Rajeswaran, V. Kumar, A. Gupta, J. Schulman, E. Todorov, and S. Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
 Levine and Koltun [2013] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning, pages 1–9, 2013.
 Williams [1992] R. J. Williams. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Reinforcement Learning, pages 5–32. Springer, 1992.
 Sutton and Barto [1998] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
 Hausknecht and Stone [2015] M. Hausknecht and P. Stone. Deep recurrent qlearning for partially observable mdps. arXiv:1507.06527, 2015.
 Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Long shortterm memory. Neural computation, 9(8):1735–1780, 1997.
 Koenig and Howard [2004] N. Koenig and A. Howard. Design and use paradigms for gazebo, an opensource multirobot simulator. In IROS, volume 3, pages 2149–2154, 2004.
 Meyer et al. [2012] J. Meyer, A. Sendobry, S. Kohlbrecher, U. Klingauf, and O. von Stryk. Comprehensive simulation of quadrotor uavs using ros and gazebo. In SIMPAR, page to appear, 2012.
S7 Supplementary Materials
s7.1 The Value Function Approximation
In IARL, the approximation $V_\phi$ of the value function is optimized by minimizing the mean square error loss in Equation 8.
$L_V(\phi) = \mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t \big(V_\phi(s_t) - V_t^{target}\big)^2\Big]$ (8)
There are different estimates of $V_t^{target}$. For example, in the IARL paradigm, $V_t^{target} = \sum_{t' \ge t} \gamma^{t'-t}\, r'_{t'}$ represents the Monte Carlo (MC) estimate, and $V_t^{target} = r'_t + \gamma\, V_{\phi'}(s_{t+1})$ represents the temporal difference estimate, where $\phi'$ denotes the parameters of the target network. In this work we use the n-step return estimate shown in Equation 9.
$V_t^{target} = \sum_{l=0}^{n-1} \gamma^l\, r'_{t+l} + \gamma^n\, V_{\phi'}(s_{t+n})$ (9)
The difference between Equation 9 and the MC estimate is that we also back up the value function of the last state at the end of a truncated episode, which avoids the possible bias brought by the finite horizon.
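A minimal sketch of the n-step targets of Equation 9, truncating the lookahead near the end of the episode and bootstrapping with the stored value estimates (the `values` array is assumed to carry one extra entry for the final state):

```python
def nstep_targets(rewards, values, gamma, n):
    """n-step return targets (Equation 9): for each step t, the discounted
    sum of the next n shaped rewards plus the bootstrapped value of the
    state n steps ahead.  `values` has length len(rewards)+1; its last
    entry is the value of the final state, closing the finite horizon."""
    T = len(rewards)
    targets = []
    for t in range(T):
        k = min(n, T - t)                               # truncate near the end
        ret = sum(gamma ** l * rewards[t + l] for l in range(k))
        ret += gamma ** k * values[t + k]               # bootstrap V(s_{t+k})
        targets.append(ret)
    return targets
```

Setting n past the episode length degrades gracefully to the MC estimate plus the bootstrapped last-state value, which is exactly the distinction drawn above.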
s7.2 Extending IARL(Imitation) to PPO
The policy loss of PPO-based IARL is written as Equation 10, where $\theta_{old}$ denotes the parameters from the last iteration, and the KL penalty coefficient $\lambda_{KL}$ is dynamically controlled by the KL target hyperparameter. For more details please refer to [19].
$L_{PPO}(\theta) = -\mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t (1 - g_t)\Big(\frac{\beta_\theta(a_t|s_t)}{\beta_{\theta_{old}}(a_t|s_t)}\, A_t - \lambda_{KL}\, \mathrm{KL}\big[\beta_{\theta_{old}}(\cdot|s_t)\,\|\,\beta_\theta(\cdot|s_t)\big]\Big)\Big] + \alpha\, \mathbb{E}_{\tau \sim \beta_{mix}}\Big[\sum_t g_t\, \|\pi_\theta(s_t) - a_t^{ref}\|^2\Big]$ (10)
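As an illustrative sketch of the masked surrogate (assuming the adaptive-KL variant of PPO; the exact surrogate and KL estimate used in the paper may differ), the policy-gradient part of the loss can be written as:

```python
import numpy as np

def ppo_iarl_surrogate(logp_new, logp_old, advantages, gates, kl, beta_kl):
    """Masked PPO-style surrogate in the spirit of Equation 10: importance
    ratios weight the advantages on non-intervened steps, and a KL penalty
    (coefficient beta_kl, adapted elsewhere to track a target KL)
    regularizes the update."""
    g = np.asarray(gates, dtype=float)
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    return float(-np.sum((1.0 - g) * ratio * np.asarray(advantages)) + beta_kl * kl)
```

The imitation term of Equation 7 would be added on top, exactly as in the non-PPO loss.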
s7.3 Features of Observation
At each time step, the policy and value networks take 25 frames of observation and encode them using an LSTM unit. Each frame of observation is composed of the features listed in Table S2.
Description  Dimension

Depth image, acquired from stereo matching  
Velocities in the UAV body coordinate system, acquired from visual odometry, discretized independently in the x and y directions  
Distances reported by the supersonic sensors, discretized  
Angle of the direction to the goal, discretized  
s7.4 Details of The Expert Simulation
Simulating the expert in Gazebo 7 includes $\pi_{ref}$ and $G$, which represent the expert's policy and intervention probability, respectively.
$\pi_{ref}$: For each new scenario, we discretize the map into grids and restrict the UAV to move towards its 8 neighboring grids. The cost of moving to a neighboring grid is defined as the distance between the grid centers. We then mask all grids that intersect with obstacles, and run Dijkstra's algorithm on the discretized map to acquire the cost-to-goal of each grid. At each time step of the training or testing process, starting from the current position, we plan a trajectory by setting the magnitude of the velocity to 0.20 m/s and following the direction to the grid of lowest cost. We use the next three planned positions on the current planned trajectory as targets, define the loss as the mean square error between the predicted positions and the target positions, and apply Model Predictive Control to give $\pi_{ref}$.
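The Dijkstra cost-to-goal step above can be sketched as follows: an 8-connected grid with obstacle cells masked out, and edge costs equal to the distance between cell centers (1 or sqrt(2)). The `free`/`goal` representation is our own for illustration:

```python
import heapq
import math

def cost_to_goal(free, goal):
    """Dijkstra cost-to-goal over a discretized map: run a single-source
    shortest-path search outward from the goal cell, visiting only
    obstacle-free cells; returns a dict of (row, col) -> cost."""
    rows, cols = len(free), len(free[0])
    dist = {goal: 0.0}
    heap = [(0.0, goal)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), math.inf):
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc]:
                    nd = d + math.hypot(dr, dc)  # 1 for axial, sqrt(2) diagonal
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return dist
```

At control time, the simulated expert then steps from the current cell toward the neighbor with the lowest stored cost.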
$G$: We collect a small amount of data in advance to build the intervention simulator. Driving the UAV with an RL-pretrained (but not yet converged, i.e. imperfect) policy, we required the experts to keep the UAV safe by intervening when necessary using the keyboard in the Gazebo 7 simulation. We collected 50,000 frames of data. Each sample has the following features:

The roll and pitch angle of the UAV

The velocity of the UAV in its body coordinate system

The top-10 nearest obstacles, each represented by its obstacle type and the position of its point closest to the UAV, calculated in the UAV body coordinate system.

Each sample is labeled with the expert's intervention decision $g_t$. We train a Gradient Boosting Decision Trees (GBDT) model with cross-entropy loss, which gives $G$.

s7.5 Hyperparameters
Hyperparameter  Value  Description

$\gamma$  0.96  Discount factor in RL and IARL
$\lambda$  0.98  Exponential decay weight in generalized advantage estimation (GAE, [20]) in RL and IARL
Learning rate (policy)  1.0e-4  Adam optimizer
Learning rate (value)  1.0e-3  Adam optimizer
Minibatch size (value and policy)  32  Minibatch size in RL, IARL and SL
Iteration size  1.0e3  Collect this many frames before training in each iteration in RL and IARL
Action repeat  4  Repeat each action selected by the agent this many times in RL and IARL
KL target  0.003  Target KL divergence in PPO ([19]), for RL and IARL
$\alpha$  2.0  The ratio of imitation loss to policy loss in IARL
$\sigma$  1.8  The initial exploration noise in policy $\beta_\theta$; $\sigma$ is optimized together with $\theta$ during training in RL and IARL