Playing Flappy Bird via Asynchronous Advantage Actor Critic Algorithm

07/06/2019
by Elit Cenk Alp, et al.

Flappy Bird, a highly popular game, has been used to train agents with many algorithms. Some of these studies trained from the raw pixel values of the game, while others relied on hand-crafted game attributes. In this study, the model was trained on raw game images that it had not seen before. Through reinforcement, the trained model learned which action to take at each step; the reward or penalty returned at the end of each step served as the training signal. The Flappy Bird agent was trained with two reinforcement learning algorithms: Deep Q-Network (DQN) and Asynchronous Advantage Actor Critic (A3C).
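
The abstract gives no implementation details, but a minimal sketch of the actor-critic setup it describes (raw game frames in, a policy and a value estimate out, with the per-step reward or penalty driving the update) could look like the following PyTorch code. The network sizes, four-frame stacking, 84x84 input resolution, and the synchronous single-worker update are illustrative assumptions rather than the authors' exact configuration; A3C additionally runs several such workers asynchronously against shared parameters, and the entropy bonus is omitted here for brevity.

    # Minimal actor-critic sketch for raw Flappy Bird frames (illustrative assumptions).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ActorCritic(nn.Module):
        def __init__(self, n_actions=2, in_frames=4):
            super().__init__()
            # Convolutional encoder over a stack of 4 grayscale 84x84 frames (assumed preprocessing).
            self.conv = nn.Sequential(
                nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            )
            self.fc = nn.Linear(64 * 7 * 7, 512)
            self.policy = nn.Linear(512, n_actions)  # actor head: logits for flap / do nothing
            self.value = nn.Linear(512, 1)           # critic head: state-value estimate

        def forward(self, x):
            h = self.conv(x).flatten(start_dim=1)
            h = F.relu(self.fc(h))
            return self.policy(h), self.value(h)

    def actor_critic_update(model, optimizer, states, actions, rewards, gamma=0.99):
        """One advantage actor-critic update on a short rollout of (state, action, reward)."""
        logits, values = model(states)
        # Discounted returns computed backwards over the rollout.
        returns, R = [], 0.0
        for r in reversed(rewards):
            R = r + gamma * R
            returns.insert(0, R)
        returns = torch.tensor(returns)
        advantage = returns - values.squeeze(-1)
        log_probs = F.log_softmax(logits, dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        policy_loss = -(chosen * advantage.detach()).mean()  # push up probability of good actions
        value_loss = advantage.pow(2).mean()                  # regress critic toward returns
        loss = policy_loss + 0.5 * value_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

A DQN variant of the same setup would instead output one Q-value per action from the convolutional encoder and train from a replay buffer with a target network.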
