
Value Function Approximation in Noisy Environments Using Locally Smoothed Regularized Approximate Linear Programs

by Gavin Taylor, et al.

Recently, Petrik et al. demonstrated that L1-Regularized Approximate Linear Programming (RALP) could produce value functions and policies that compare favorably to established linear value function approximation techniques such as LSPI. RALP's success stems primarily from its ability to solve the feature selection and value function approximation steps simultaneously. However, RALP's performance guarantees become looser when sampled next states are used in place of exact expectations. In very noisy domains, RALP therefore requires an accurate model rather than samples, which can be unrealistic in some practical scenarios. In this paper, we demonstrate this weakness and then introduce Locally Smoothed L1-Regularized Approximate Linear Programming (LS-RALP). We show that LS-RALP mitigates inaccuracies stemming from noise even without an accurate model: given some smoothness assumptions, as the number of samples increases, the error from noise approaches zero. We provide experimental examples of LS-RALP's success on common reinforcement learning benchmark problems.
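The core intuition behind local smoothing can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy Nadaraya-Watson kernel regression over noisy sampled values of a hypothetical one-dimensional state space (the value function `true_value`, the bandwidth, and all names are assumptions). It shows the mechanism the abstract describes: averaging nearby noisy samples drives the noise error toward zero as the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_value(s):
    # Toy "value function" over a 1-D state space (an assumption for illustration).
    return np.sin(s)

def smoothed_estimate(states, noisy_values, query, bandwidth=0.2):
    # Nadaraya-Watson kernel regression: weight samples near the query
    # state more heavily, so independent noise terms average out.
    weights = np.exp(-((states - query) ** 2) / (2 * bandwidth ** 2))
    return np.sum(weights * noisy_values) / np.sum(weights)

query = 1.0
for n in (50, 500, 5000):
    states = rng.uniform(0.0, np.pi, n)
    noisy = true_value(states) + rng.normal(0.0, 0.5, n)  # noisy observations
    est = smoothed_estimate(states, noisy, query)
    print(f"n={n:5d}  |error| = {abs(est - true_value(query)):.4f}")
```

In LS-RALP the analogous smoothing is applied to the sampled Bellman constraints inside the linear program, rather than to raw value observations as in this sketch, but the consistency argument is the same: under smoothness assumptions, the smoothed estimate converges to the noiseless quantity.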



