A double competitive strategy based learning automata algorithm

12/01/2017
by Chong Di, et al.

Learning Automata (LA) are considered one of the most powerful tools in the field of reinforcement learning. The family of estimator algorithms was proposed to improve the convergence rate of LA and has achieved considerable success. However, estimators perform poorly at estimating the reward probabilities of actions in the initial stage of the learning process, so many rewards are incorrectly credited to the probabilities of non-optimal actions, and a large number of extra iterations are then needed to compensate for these wrong rewards. To improve the speed of convergence, we propose a new P-model absorbing learning automaton that employs a double competitive strategy for updating the action probability vector, so that wrong rewards can be corrected immediately. The proposed Double Competitive Algorithm thus overcomes the drawbacks of existing estimator algorithms. A refined analysis is presented to show the ϵ-optimality of the proposed scheme. Extensive experimental results in benchmark environments demonstrate that the proposed learning automaton converges more efficiently than the classic estimator-based LA SE_RI and the currently fastest LA DGCPA^*.
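To make the estimator idea concrete, the sketch below implements a generic pursuit-style estimator automaton in a P-model environment: reward estimates are maintained per action and the probability vector is pulled toward the currently estimated-best action. This is a minimal illustration of the estimator family the abstract refers to, not the paper's Double Competitive Algorithm; the function name, learning rate, and convergence threshold are assumptions made for the example.

```python
import numpy as np

def pursuit_la(env_reward_probs, n_steps=20000, learning_rate=0.01, seed=0):
    """Illustrative pursuit-style estimator learning automaton (P-model).

    Each action i is rewarded (beta = 1) with the unknown probability
    env_reward_probs[i]; the automaton must learn which action is optimal.
    """
    rng = np.random.default_rng(seed)
    r = len(env_reward_probs)
    p = np.full(r, 1.0 / r)   # action probability vector, starts uniform
    rewards = np.zeros(r)     # rewards received per action
    pulls = np.zeros(r)       # times each action was selected

    for _ in range(n_steps):
        a = rng.choice(r, p=p)                      # sample an action
        beta = rng.random() < env_reward_probs[a]   # binary environment feedback
        pulls[a] += 1
        rewards[a] += beta
        # Maximum-likelihood reward estimates for all actions tried so far.
        d_hat = rewards / np.maximum(pulls, 1)
        best = int(np.argmax(d_hat))
        # Pursuit update: shift probability mass toward the estimated-best action.
        e_best = np.zeros(r)
        e_best[best] = 1.0
        p = (1 - learning_rate) * p + learning_rate * e_best
        if p.max() > 0.999:   # treat near-unit probability as absorption
            break
    return int(np.argmax(p)), p

# Example: a ten-action benchmark-style environment.
best_action, final_p = pursuit_la(
    [0.65, 0.50, 0.45, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10]
)
print(best_action, final_p.round(3))
```

The weakness the abstract targets is visible here: in the first iterations `d_hat` is based on very few samples, so `best` can point to a non-optimal action and probability mass is moved toward it, requiring later iterations to undo the damage.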
