
Feature Selection Using Regularization in Approximate Linear Programs for Markov Decision Processes

by Marek Petrik et al.

Approximate dynamic programming has been applied successfully in a wide variety of domains, but it relies on a small, hand-provided set of approximation features to compute solutions reliably. Large, rich feature sets can cause existing algorithms to overfit because of the limited number of samples. We address this shortcoming using L_1 regularization in approximate linear programming. Because the proposed method automatically selects the appropriate richness of features, its performance does not degrade as the number of features grows. These results rely on new, stronger sampling bounds for regularized approximate linear programs. We also propose a computationally efficient homotopy method. An empirical evaluation shows that the proposed method performs well on simple MDPs and standard benchmark problems.
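To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of an L_1-regularized approximate linear program for a tiny, made-up 2-state, 2-action MDP. The L_1 constraint ||w||_1 <= psi is linearized by splitting w into nonnegative parts w = w+ - w-; the MDP numbers, features, and budget psi are illustrative assumptions, solved here with SciPy's generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 2-state, 2-action MDP (numbers are assumptions, not from the paper)
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.2, 0.8]],    # P[a, s, s']: transition probabilities
              [[0.5, 0.5], [0.9, 0.1]]])
r = np.array([[1.0, 0.0],                  # r[a, s]: rewards
              [0.5, 0.8]])
Phi = np.array([[1.0, 0.0],                # feature matrix (one indicator per state)
                [0.0, 1.0]])
rho = np.array([0.5, 0.5])                 # state-relevance weights
psi = 25.0                                 # L_1 budget on the feature weights

k = Phi.shape[1]
# Split w = wp - wm with wp, wm >= 0, so ||w||_1 <= psi becomes
# the linear constraint sum(wp + wm) <= psi.
c = np.concatenate([rho @ Phi, -(rho @ Phi)])   # minimize rho^T Phi w

A_ub, b_ub = [], []
for a in range(2):
    M = Phi - gamma * P[a] @ Phi           # Bellman constraints: M w >= r_a,
    A_ub.append(np.hstack([-M, M]))        # rewritten as -M wp + M wm <= -r_a
    b_ub.append(-r[a])
A_ub.append(np.ones((1, 2 * k)))           # the L_1 budget row
b_ub.append(np.array([psi]))

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=[(0, None)] * (2 * k))
w = res.x[:k] - res.x[k:]
print("approximate value function:", Phi @ w)
```

The homotopy method proposed in the paper traces the LP solution as the budget psi varies; the sketch above instead solves the LP for one fixed psi with an off-the-shelf solver, which is enough to show how the regularizer enters the program.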




Optimistic Planning by Regularized Dynamic Programming

Approximate Dynamic Programming By Minimizing Distributionally Robust Bounds

Partitioned Linear Programming Approximations for MDPs

Value Function Approximation in Noisy Environments Using Locally Smoothed Regularized Approximate Linear Programs

Efficient Solution Algorithms for Factored MDPs

Approximate Linear Programming for First-order MDPs

Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs (Extended Version)