Value Function Approximation in Noisy Environments Using Locally Smoothed Regularized Approximate Linear Programs

10/16/2012
by Gavin Taylor, et al.

Recently, Petrik et al. demonstrated that L1-Regularized Approximate Linear Programming (RALP) could produce value functions and policies that compare favorably to established linear value function approximation techniques such as LSPI. RALP's success primarily stems from its ability to solve the feature selection and value function approximation steps simultaneously. However, RALP's performance guarantees become looser if sampled next states are used in place of expectations: for very noisy domains, RALP requires an accurate model rather than samples, which can be unrealistic in practice. In this paper, we demonstrate this weakness and then introduce Locally Smoothed L1-Regularized Approximate Linear Programming (LS-RALP). We show that LS-RALP mitigates the inaccuracies stemming from noise even without an accurate model, prove that under mild smoothness assumptions the error due to noise vanishes as the number of samples increases, and provide experimental examples of LS-RALP's success on common reinforcement learning benchmark problems.
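The abstract describes the idea only at a high level, so the following is a minimal illustrative sketch, not the paper's implementation: an L1-constrained approximate linear program over sampled Bellman constraints, where each constraint's sampled backup is replaced by a kernel-weighted (locally smoothed) average over nearby samples. All names here (ls_ralp, K, psi, rho) and the choice of scipy's LP solver are assumptions made for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a locally smoothed, L1-regularized approximate LP.
# Assumptions (not from the paper): kernel weights K, budget psi, relevance
# weights rho, and the scipy "highs" solver are all illustrative choices.
import numpy as np
from scipy.optimize import linprog

def ls_ralp(Phi, Phi_next, rewards, K, gamma=0.95, psi=1.0, rho=None):
    """Solve for value-function weights w with smoothed Bellman constraints.

    Phi      : (n, k) features of sampled states s_i
    Phi_next : (n, k) features of sampled next states s_i'
    rewards  : (n,)   sampled rewards r_i
    K        : (n, n) row-stochastic kernel; row i smooths constraint i
    psi      : L1 budget on the weights, ||w||_1 <= psi
    rho      : (n,)   state-relevance weights for the objective (uniform if None)
    """
    n, k = Phi.shape
    if rho is None:
        rho = np.ones(n) / n

    # Locally smoothed Bellman backup: constraint i averages nearby samples.
    R_smooth = K @ rewards              # (n,)
    Phi_next_smooth = K @ Phi_next      # (n, k)

    # Split w = w_plus - w_minus (both >= 0) to encode the L1 budget linearly.
    c_w = rho @ Phi                     # objective: minimize rho^T Phi w
    c = np.concatenate([c_w, -c_w])

    # Constraints: Phi w >= R_smooth + gamma * Phi_next_smooth w,
    # rewritten as -(Phi - gamma * Phi_next_smooth) w <= -R_smooth.
    A_bell = -(Phi - gamma * Phi_next_smooth)
    A_ub = np.vstack([np.hstack([A_bell, -A_bell]),
                      np.ones((1, 2 * k))])       # sum(w+ + w-) <= psi
    b_ub = np.concatenate([-R_smooth, [psi]])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * k), method="highs")
    w_plus, w_minus = res.x[:k], res.x[k:]
    return w_plus - w_minus
```

In this sketch, setting K to the identity matrix leaves each sampled Bellman constraint unsmoothed, which corresponds to plain RALP; a kernel with wider bandwidth averages over more neighboring samples and thereby reduces the effect of noisy next-state samples, at the cost of some smoothing bias.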


