Transferring Agent Behaviors from Videos via Motion GANs

11/21/2017
by Ashley D. Edwards, et al.

A major bottleneck for developing general reinforcement learning agents is determining rewards that will yield desirable behaviors under various circumstances. We introduce a general mechanism for automatically specifying meaningful behaviors from raw pixels. In particular, we train a generative adversarial network to produce short sub-goals represented through motion templates. We demonstrate that this approach generates visually meaningful behaviors in unknown environments with novel agents and describe how these motions can be used to train reinforcement learning agents.
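The abstract gives no implementation details, but the core idea of motion templates as sub-goals can be illustrated with a small sketch. A common motion-template representation is the motion history image, where recently moving pixels are bright and older motion decays toward zero; a generated template can then serve as a shaping reward by comparing it to the template the agent actually produces. Everything below is an illustrative assumption, not the paper's actual method: the function names, the threshold and decay parameters, and the L2-distance reward are all hypothetical.

```python
import numpy as np

def motion_history_image(frames, threshold=0.05, tau=10):
    """Build a motion template (motion history image) from grayscale frames.

    Pixels that moved recently are set to tau; older motion decays by 1 per
    step. The result is normalized to [0, 1].
    """
    frames = np.asarray(frames, dtype=np.float32)
    mhi = np.zeros(frames.shape[1:], dtype=np.float32)
    for prev, cur in zip(frames[:-1], frames[1:]):
        moving = np.abs(cur - prev) > threshold       # per-pixel motion mask
        mhi = np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))
    return mhi / tau

def template_reward(agent_mhi, goal_mhi):
    """Hypothetical shaping reward: negative L2 distance between the agent's
    observed motion template and a goal template (e.g. one produced by a
    generative model). Identical templates yield a reward of zero."""
    return -float(np.linalg.norm(agent_mhi - goal_mhi))
```

In an RL loop, `template_reward` could be added to (or substituted for) the environment reward, so the agent is driven to reproduce the generated motion rather than to optimize a hand-specified signal.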

