Improved Activity Forecasting for Generating Trajectories

12/12/2019
by Daisuke Ogawa, et al.

An efficient inverse reinforcement learning method for generating trajectories is proposed, based on 2D and 3D activity forecasting. We modify the reward function with an L_p norm and introduce convolution into the value iteration steps, which we call convolutional value iteration. In experiments with seabird trajectories (43 for training and 10 for testing), our method achieves the lowest MHD error and runs fastest. Trajectories generated to interpolate missing parts of tracks look much more similar to real seabird trajectories than those produced by previous methods.
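To make the two ideas in the abstract concrete, here is a minimal sketch of an L_p-norm reward and a convolution-based soft value iteration, assuming the standard MaxEnt-IRL formulation used in activity forecasting (Kitani et al.). The abstract gives no implementation details, so the reward form `lp_reward`, the 3x3 neighborhood kernel, the grid setup, and all parameter values below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

def lp_reward(features, theta, p=1.5):
    """Reward per grid cell as a negative L_p norm of weighted features.
    `features` has shape (H, W, K); `theta` has shape (K,).
    Hypothetical form: the paper's exact parameterization may differ."""
    weighted = features * theta  # broadcast weights over the feature maps
    return -np.linalg.norm(weighted, ord=p, axis=-1)

def convolutional_value_iteration(reward, goal, n_iters=200):
    """Soft value iteration where the soft-max backup over neighboring
    states is computed for every cell at once via a spatial convolution."""
    V = np.full(reward.shape, -1e3)   # value function, initialized very low
    V[goal] = 0.0                     # terminal (goal) state has value 0
    kernel = np.ones((3, 3))          # 8-connected moves (assumption)
    for _ in range(n_iters):
        # Convolving exp(V) with the kernel sums exp(V) over each cell's
        # neighborhood; the log turns that sum into a soft-max backup.
        Q = np.log(convolve(np.exp(V), kernel, mode="nearest") + 1e-30)
        V = reward + Q
        V[goal] = 0.0                 # keep the goal absorbing
    return V

# Example usage on a toy 32x32 grid with 3 random feature maps:
feats = np.random.rand(32, 32, 3)
R = lp_reward(feats, theta=np.array([1.0, 0.5, 2.0]))
V = convolutional_value_iteration(R, goal=(16, 16))
```

Computing the backup as one convolution over the whole grid replaces the per-state loop of plain value iteration, which is presumably where the reported speed advantage comes from.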

