Improved Activity Forecasting for Generating Trajectories

12/12/2019
by   Daisuke Ogawa, et al.

An efficient inverse reinforcement learning method for generating trajectories is proposed, based on 2D and 3D activity forecasting. We modify the reward function with an L_p norm and introduce convolution into the value iteration step, which we call convolutional value iteration. In experiments with seabird trajectories (43 for training and 10 for testing), our method achieves the lowest MHD error and runs fastest. Trajectories generated to interpolate missing parts of trajectories look much more similar to real seabird trajectories than those produced by previous work.
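The abstract describes folding the soft Bellman backup of value iteration into a convolution over the state grid, so every cell is updated at once. Below is a minimal sketch of that idea on a 2D grid; the function name, cross-shaped 4-neighbour kernel, discount factor, and boundary handling are all illustrative assumptions on my part, not the paper's exact formulation.

```python
import numpy as np

def conv_soft_value_iteration(reward, gamma=0.9, n_iters=50):
    """Soft (MaxEnt-style) value iteration on a 2D reward grid.

    The backup over the 4-neighbourhood is computed for all cells at
    once by shifting the padded value map, which is equivalent to a
    convolution with a cross-shaped kernel. Illustrative sketch only.
    """
    V = reward.astype(float).copy()
    for _ in range(n_iters):
        # Gather neighbour values; edges are padded with the border value.
        padded = np.pad(V, 1, mode="edge")
        shifted = np.stack([
            padded[:-2, 1:-1],  # cell above
            padded[2:, 1:-1],   # cell below
            padded[1:-1, :-2],  # cell to the left
            padded[1:-1, 2:],   # cell to the right
        ])
        # Numerically stable log-sum-exp over the four neighbours:
        # the soft maximum used in MaxEnt inverse reinforcement learning.
        m = shifted.max(axis=0)
        soft_max = m + np.log(np.exp(shifted - m).sum(axis=0))
        V = reward + gamma * soft_max
    return V

# Toy example: one high-reward goal cell in the centre of a 5x5 grid.
reward = np.full((5, 5), -1.0)
reward[2, 2] = 0.0
V = conv_soft_value_iteration(reward)
```

Because the whole grid is updated with one vectorised shift-and-reduce per sweep, this formulation maps naturally onto convolution kernels, which is presumably the source of the reported speedup.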
