Feature Selection Using Regularization in Approximate Linear Programs for Markov Decision Processes

05/11/2010
by Marek Petrik, et al.

Approximate dynamic programming has been used successfully in a large variety of domains, but it relies on a small, carefully chosen set of approximation features to compute solutions reliably. Large and rich feature sets can cause existing algorithms to overfit when only a limited number of samples is available. We address this shortcoming using L_1 regularization in approximate linear programming. Because the proposed method automatically selects the appropriate richness of features, its performance does not degrade as the number of features grows. These results rely on new, stronger sampling bounds for regularized approximate linear programs. We also propose a computationally efficient homotopy method. An empirical evaluation shows that the approach performs well on simple MDPs and standard benchmark problems.
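As a rough sketch of the underlying idea (not the paper's exact formulation, feature sets, or homotopy algorithm), an L_1-regularized approximate linear program can be posed as an ordinary LP: minimize the weighted value of the approximation Φw subject to the Bellman constraints Φw ≥ r_a + γP_aΦw for every action, plus a budget ||w||_1 ≤ ψ enforced through auxiliary variables. The tiny randomly generated MDP, the polynomial features, and the budget ψ below are all illustrative choices; the sketch solves the LP directly with `scipy.optimize.linprog` rather than with the paper's homotopy method.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 4-state, 2-action MDP (randomly generated, not from the paper).
gamma = 0.9
n, k = 4, 3                                   # states, features
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n), size=(2, n))    # P[a] is row-stochastic, shape (n, n)
r = rng.uniform(0, 1, size=(2, n))            # rewards r[a][s] in [0, 1)

Phi = np.column_stack([np.ones(n), np.arange(n), np.arange(n) ** 2])
rho = np.full(n, 1.0 / n)                     # state-relevance weights
psi = 20.0                                    # L1 budget; must be large enough to keep the LP feasible

# Decision variables x = [w, u], with |w_i| <= u_i and sum(u) <= psi.
c = np.concatenate([Phi.T @ rho, np.zeros(k)])  # minimize rho^T Phi w

A_ub, b_ub = [], []
for a in range(2):
    # Bellman constraints Phi w >= r_a + gamma P_a Phi w,
    # rewritten as (gamma P_a Phi - Phi) w <= -r_a.
    A_ub.append(np.hstack([gamma * P[a] @ Phi - Phi, np.zeros((n, k))]))
    b_ub.append(-r[a])
# |w_i| <= u_i  ->  w - u <= 0 and -w - u <= 0
A_ub.append(np.hstack([np.eye(k), -np.eye(k)]))
b_ub.append(np.zeros(k))
A_ub.append(np.hstack([-np.eye(k), -np.eye(k)]))
b_ub.append(np.zeros(k))
# L1 budget: sum(u) <= psi
A_ub.append(np.concatenate([np.zeros(k), np.ones(k)])[None, :])
b_ub.append(np.array([psi]))

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=[(None, None)] * k + [(0, None)] * k)
w = res.x[:k]
print("optimal weights:", w, " L1 norm:", np.abs(w).sum())
```

Shrinking ψ tightens the L_1 budget and drives feature weights to exactly zero, which is the feature-selection effect the abstract describes; the homotopy method in the paper traces the solution path as ψ varies instead of re-solving the LP from scratch.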

Related research

02/27/2023 · Optimistic Planning by Regularized Dynamic Programming
We propose a new method for optimistic planning in infinite-horizon disc...

05/08/2012 · Approximate Dynamic Programming by Minimizing Distributionally Robust Bounds
Approximate dynamic programming is a popular method for solving large Ma...

06/13/2012 · Partitioned Linear Programming Approximations for MDPs
Approximate linear programming (ALP) is an efficient approach to solving...

10/16/2012 · Value Function Approximation in Noisy Environments Using Locally Smoothed Regularized Approximate Linear Programs
Recently, Petrik et al. demonstrated that L1-Regularized Approximate Line...

07/04/2012 · Approximate Linear Programming for First-order MDPs
We introduce a new approximate solution technique for first-order Markov...

06/26/2013 · Scaling Up Robust MDPs by Reinforcement Learning
We consider large-scale Markov decision processes (MDPs) with parameter ...

02/08/2023 · Learning How to Infer Partial MDPs for In-Context Adaptation and Exploration
To generalize across tasks, an agent should acquire knowledge from past ...
