Hilbert Space Embeddings of POMDPs

10/16/2012
by   Yu Nishiyama, et al.

A nonparametric approach to policy learning for POMDPs is proposed. The approach represents distributions over states, observations, and actions as embeddings in feature spaces, which are reproducing kernel Hilbert spaces. Distributions over states given observations are obtained by applying the kernel Bayes' rule to these distribution embeddings. Policies and value functions are defined on the feature space over states, which leads to a feature-space expression of the Bellman equation. Value iteration may then be used to estimate the optimal value function and the associated policy. Experimental results confirm that the correct policy is learned using the feature-space representation.
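To make the belief-update step concrete, the following is a minimal sketch of inferring a posterior embedding over hidden states from an observation using a conditional mean embedding, a simplified stand-in for the full kernel Bayes' rule described in the paper. The Gaussian kernel, the toy state/observation model, the bandwidth `sigma`, and the regularizer `lam` are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    # RBF Gram matrix between sample sets A (n x d) and B (m x d)
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))

# Toy joint samples (x_i, y_i): y is a noisy observation of hidden state x
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-3.0, 3.0, size=(n, 1))   # hidden states
Y = X + 0.3 * rng.normal(size=(n, 1))     # observations

lam = 1e-3                                # regularization (assumed value)
G_Y = gauss_kernel(Y, Y)

def posterior_weights(y_star):
    # Conditional mean embedding of X given Y = y_star:
    #   mu_{X|y*} ~= sum_i w_i k(., x_i),  w = (G_Y + n*lam*I)^{-1} k_Y(y*)
    k = gauss_kernel(Y, np.atleast_2d(y_star))[:, 0]
    return np.linalg.solve(G_Y + n * lam * np.eye(n), k)

# The embedding represents the belief as weights over training samples;
# expectations of any function f of the state become weighted sums:
w = posterior_weights([1.5])
est_mean = w @ X[:, 0]   # estimate of E[X | Y = 1.5]
```

In the paper's setting these weight vectors play the role of belief states: policies and value functions are defined on such embeddings, and value iteration proceeds by repeatedly applying embedded transition and observation operators to them.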


