Reinforcement Learning with Goal-Distance Gradient
Reinforcement learning typically trains agents on feedback rewards from the environment. In many real environments, however, rewards are sparse, and some environments provide no rewards at all. Most current methods struggle to perform well under sparse rewards, and environments without feedback rewards require a reward function to be defined by hand. We present a method that does not rely on environmental rewards, thereby addressing both the sparse-reward and the no-reward problem, and that can be applied to more complex and real-world environments. Instead of environmental rewards, we use the number of steps needed to transition between states as a distance measure. To handle the long distance between the start and the goal in more complex environments, we add bridge points to our method that establish a connection between the start and the goal. Experiments show that our method can also be applied to environments where distances cannot be estimated in advance.
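The core idea of replacing the environmental reward with the number of steps between states can be illustrated with a short sketch. The code below is a hypothetical illustration rather than the paper's implementation: it trains a network to regress the step gap between pairs of states sampled from a trajectory, and notes how the learned distance could then stand in for a reward. All names (DistanceNet, sample_pairs, train_step) and hyperparameters are assumptions for the example.

```python
# Hypothetical sketch of a goal-distance estimator; not the paper's code.
import random
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    """Predicts the number of environment steps needed to move
    from state s to goal state g (both given as vectors)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def sample_pairs(trajectory, num_pairs):
    """From one trajectory (a list of state tensors), sample
    (s_i, s_j, j - i) triples: the step gap j - i is the
    supervised target for the distance between s_i and s_j."""
    pairs = []
    for _ in range(num_pairs):
        i = random.randrange(len(trajectory) - 1)
        j = random.randrange(i + 1, len(trajectory))
        pairs.append((trajectory[i], trajectory[j], float(j - i)))
    return pairs

def train_step(dist_net, optimizer, trajectory, num_pairs=32):
    """One regression step fitting predicted distances to step gaps."""
    batch = sample_pairs(trajectory, num_pairs)
    s = torch.stack([p[0] for p in batch])
    g = torch.stack([p[1] for p in batch])
    target = torch.tensor([p[2] for p in batch])
    loss = nn.functional.mse_loss(dist_net(s, g), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The learned distance can then replace the environment reward, e.g.
#   reward(s, s', goal) = dist_net(s, goal) - dist_net(s', goal)
# which is positive whenever an action moves the agent closer to the goal.
```

In this reading, a bridge point would simply be an intermediate state b chosen so that both dist_net(start, b) and dist_net(b, goal) stay short enough to estimate reliably, chaining two manageable distances in place of one long one; how such points are selected is specific to the paper's method and is not reproduced here.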