Learning to Win, Lose and Cooperate through Reward Signal Evolution
Solving a reinforcement learning problem typically involves correctly prespecifying the reward signal from which the algorithm learns. Here, we address the problem of reward signal design by using evolutionary search over the space of possible reward signals. We introduce a general framework for optimizing N goals given n reward signals. Through experiments we demonstrate that this approach allows agents to learn high-level goals, such as winning, losing and cooperating, from scratch in the game of Pong, without prespecified reward signals. Some of the solutions found by the algorithm are surprising, in the sense that they would probably not have been chosen by a person trying to hand-code a given behaviour through a specific reward signal. Furthermore, the proposed approach appears to yield more stable training performance than typical score-based reward signals.
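To make the idea concrete, a minimal sketch of such an evolutionary search over reward signals might look like the following. Everything here is an illustrative assumption rather than the paper's actual method: the reward signal is parameterized as a linear combination of game-event features, fitness is a stubbed stand-in for "train an agent with this reward and measure goal attainment", and selection is simple truncation with Gaussian mutation.

```python
import random

# Hypothetical sketch: evolve the weights of a linear reward signal
# r_t = w . events_t over assumed game-event features (e.g., paddle hits,
# points scored, points conceded in Pong). Names and constants are
# illustrative assumptions, not taken from the paper.

N_EVENTS = 3      # assumed number of event features per timestep
POP_SIZE = 20
GENERATIONS = 50
SIGMA = 0.1       # mutation step size

def train_and_evaluate(weights):
    """Placeholder fitness: in the real setting this would train an RL agent
    with the candidate reward signal and return how well the learned
    behaviour meets the high-level goal (e.g., win rate). Stubbed here with
    a dummy quadratic fitness surface so the sketch runs standalone."""
    return -sum((w - 0.5) ** 2 for w in weights)

def mutate(weights):
    # Gaussian perturbation of each reward weight.
    return [w + random.gauss(0.0, SIGMA) for w in weights]

# Random initial population of candidate reward signals.
population = [[random.uniform(-1.0, 1.0) for _ in range(N_EVENTS)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Rank candidates by fitness; keep the top quarter (truncation selection).
    ranked = sorted(population, key=train_and_evaluate, reverse=True)
    elite = ranked[: POP_SIZE // 4]
    # Refill the population with mutated copies of the elite.
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=train_and_evaluate)
print("best reward weights found:", best)
```

In this framing, the inner RL training loop is treated as a black box inside the fitness function, so the same outer search can in principle be pointed at different high-level goals (winning, losing, cooperating) simply by changing how fitness is measured.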