Dynamic Control Allocation between Onboard and Delayed Remote Control for Unmanned Aircraft System Detect-and-Avoid
This paper develops and evaluates the performance of an allocation agent to be potentially integrated into the onboard Detect and Avoid (DAA) computer of an Unmanned Aircraft System (UAS). We consider a UAS that can be fully controlled either by the onboard DAA system or by a remote human pilot. With a communication channel prone to latency, we consider a mixed-initiative interaction environment in which control authority over the UAS is dynamically assigned by the allocation agent. In an encounter with a dynamic intruder, the absence of pilot commands caused by latency may increase the probability of collision. Moreover, a delayed pilot command may no longer resolve the current scenario safely and may need to be refined. We design an optimization algorithm to reduce collision risk and refine delayed pilot commands. Toward this end, a Markov Decision Process (MDP) and its solution are employed to create a wait time map, which consists of the estimated time that the UAS can wait for remote pilot commands at each state. A command blending algorithm is designed to select an avoidance maneuver that prioritizes the pilot intent extracted from the pilot commands. The wait time map and the command blending algorithm are implemented and integrated into a closed-loop simulator. We conduct ten thousand fast-time Monte Carlo simulations and compare the performance of the integrated setup with that of a standalone DAA setup. The simulation results show that the allocation agent enables the UAS to wait without inducing any near mid-air collision (NMAC) or severe loss of well clear (LoWC) while positively improving pilot involvement in the encounter resolution.
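To make the allocation logic described above concrete, the following is a minimal Python sketch of how a precomputed wait-time map and a simple command-blending rule could gate control authority between the onboard DAA system and a delayed remote pilot. This is not the authors' implementation: the class and function names, the discretized state representation, the blending rule, and all numeric thresholds are illustrative assumptions.

```python
"""Illustrative sketch of wait-time-based control allocation (assumed API, not the paper's code)."""
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

# Hypothetical discretized encounter state: (range bin, bearing bin, closure-rate bin).
State = Tuple[int, int, int]


@dataclass
class AllocationAgent:
    # wait_time_map[state] = estimated time (s) the UAS can wait for a remote
    # pilot command before the onboard DAA must act; assumed to be produced
    # offline from the MDP solution.
    wait_time_map: Dict[State, float]

    def allocate(self,
                 state: State,
                 time_since_request: float,
                 pilot_command: Optional[float]) -> Tuple[str, Optional[float]]:
        """Return (controlling authority, heading-rate command in deg/s or None)."""
        budget = self.wait_time_map.get(state, 0.0)

        if pilot_command is not None:
            # A (possibly delayed) pilot command has arrived: keep the pilot's
            # turn direction (extracted intent) and refine the magnitude.
            return "pilot", self._refine(pilot_command)

        if time_since_request < budget:
            # Still within the wait-time budget: keep waiting for the pilot.
            return "wait", None

        # Budget exhausted with no pilot input: onboard DAA takes control.
        return "onboard_daa", self._daa_maneuver(state)

    def _refine(self, pilot_cmd: float) -> float:
        # Placeholder blending rule: preserve the sign (pilot intent) while
        # enforcing an assumed minimum avoidance turn rate.
        min_turn = 3.0  # deg/s, assumed
        sign = 1.0 if pilot_cmd >= 0 else -1.0
        return sign * max(abs(pilot_cmd), min_turn)

    def _daa_maneuver(self, state: State) -> float:
        # Placeholder autonomous avoidance command.
        return 3.0


if __name__ == "__main__":
    agent = AllocationAgent(wait_time_map={(2, 1, 0): 8.0})
    print(agent.allocate((2, 1, 0), time_since_request=5.0, pilot_command=None))
    print(agent.allocate((2, 1, 0), time_since_request=9.0, pilot_command=None))
    print(agent.allocate((2, 1, 0), time_since_request=9.0, pilot_command=-1.5))
```

In this sketch the allocation decision is a simple threshold test against the state-dependent wait time; the paper's actual MDP formulation, blending algorithm, and maneuver generation are not reproduced here.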