Natural Policy Gradients In Reinforcement Learning Explained

09/05/2022
by W. J. A. van Heeswijk, et al.

Traditional policy gradient methods are fundamentally flawed. Natural gradients converge faster and more reliably, and they form the foundation of contemporary Reinforcement Learning algorithms such as Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO). This lecture note aims to clarify the intuition behind natural policy gradients, focusing on the thought process and the key mathematical constructs.
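For orientation, a minimal sketch of the two update rules being contrasted (the notation below is introduced here for illustration and is not quoted from the lecture note itself). A vanilla policy gradient ascends the objective $J(\theta)$ directly,

    \theta_{t+1} = \theta_t + \alpha \, \nabla_\theta J(\theta_t),

whereas the natural policy gradient preconditions the step with the inverse Fisher information matrix $F(\theta_t)$ of the policy $\pi_\theta$,

    \theta_{t+1} = \theta_t + \alpha \, F(\theta_t)^{-1} \nabla_\theta J(\theta_t),

so that the update corresponds to a step of roughly fixed size in distribution space (measured by KL divergence) rather than in raw parameter space.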
