Deep-Reinforcement-Learning-Based Semantic Navigation of Mobile Robots in Dynamic Environments
Mobile robots have gained increasing importance in industrial tasks such as commissioning, delivery, or operation in hazardous environments. The ability to navigate safely and autonomously, especially within dynamic environments, is paramount in industrial mobile robotics. Current navigation methods depend on preexisting static maps and are error-prone in dynamic environments. Furthermore, for safety reasons, they often rely on hand-crafted safety guidelines, which makes the system less flexible and slower. Vision-based navigation and high-level semantics have the potential to enhance the safety of path planning by creating cues the agent can reason about, enabling more flexible navigation. On this account, we propose a reinforcement-learning-based local navigation system that learns navigation behavior solely from visual observations in order to cope with highly dynamic environments. To this end, we develop a simple yet efficient simulator - ARENA2D - which can generate highly randomized training environments and provide semantic information to train our agent. We demonstrate improved safety and robustness over a traditional baseline based on the dynamic window approach.
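To make the described training setup concrete, the sketch below outlines one plausible way an agent could be trained on randomized episodes whose observations combine range readings with per-beam semantic labels and a relative goal. Everything here is an illustrative assumption: the `SemanticNavEnv` class, its observation layout, reward terms, and the random placeholder policy are hypothetical and do not reflect the actual ARENA2D interface described in the paper.

```python
# Hypothetical sketch of a semantics-aware DRL navigation training loop.
# SemanticNavEnv, the observation layout and the reward terms are assumptions
# for illustration only; they are NOT the real ARENA2D API.
import numpy as np


class SemanticNavEnv:
    """Toy stand-in for a 2D navigation simulator that returns a laser scan,
    a per-beam semantic class id (e.g. 0 = free, 1 = static obstacle,
    2 = human) and the relative goal position."""
    N_BEAMS = 90
    N_CLASSES = 3

    def reset(self):
        return self._observe()

    def step(self, action):
        # action: (linear velocity, angular velocity); dynamics omitted here.
        obs = self._observe()
        collision = np.random.rand() < 0.01        # placeholder event
        reached_goal = np.random.rand() < 0.01     # placeholder event
        # Sparse terminal rewards plus a small per-step cost.
        reward = 10.0 * reached_goal - 10.0 * collision - 0.01
        done = bool(collision or reached_goal)
        return obs, reward, done

    def _observe(self):
        scan = np.random.uniform(0.1, 5.0, self.N_BEAMS)              # distances [m]
        classes = np.random.randint(0, self.N_CLASSES, self.N_BEAMS)  # semantic ids
        goal = np.random.uniform(-5.0, 5.0, 2)                        # relative goal
        # One-hot encode semantics so a policy could distinguish obstacle types.
        onehot = np.eye(self.N_CLASSES)[classes].ravel()
        return np.concatenate([scan, onehot, goal])


def train(episodes=10):
    env = SemanticNavEnv()
    for ep in range(episodes):
        obs, done, ret = env.reset(), False, 0.0
        while not done:
            # Placeholder policy; a DRL agent would map obs -> velocity commands.
            action = np.random.uniform([-0.5, -1.0], [1.0, 1.0])
            obs, reward, done = env.step(action)
            ret += reward
        print(f"episode {ep}: return {ret:.2f}")


if __name__ == "__main__":
    train()
```

The key design point illustrated here is that semantic labels are fed to the policy alongside raw range data, so the agent can in principle treat dynamic agents such as humans differently from static obstacles rather than relying on hand-crafted safety rules.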