Imitation Learning for Fashion Style Based on Hierarchical Multimodal Representation

04/13/2020
by   Shizhu Liu, et al.

Fashion is a complex social phenomenon. People follow fashion styles from demonstrations by experts or fashion icons. For a machine agent, however, learning to imitate fashion experts from demonstrations can be challenging, especially for complex styles in environments with high-dimensional, multimodal observations. Most existing research on fashion outfit composition uses supervised learning to mimic the behaviors of style icons. These methods suffer from distribution shift: because the agent greedily imitates the given outfit demonstrations, subtle differences can cause it to drift from one style to another. In this work, we propose an adversarial inverse reinforcement learning formulation that recovers reward functions based on a hierarchical multimodal representation (HM-AIRL) during the imitation process. The hierarchical joint representation models the expert-composed outfit demonstrations more comprehensively, making the reward function easier to recover. We demonstrate that the proposed HM-AIRL model recovers reward functions that are robust to changes in multimodal observations, enabling us to learn policies under significant variation between different styles.
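To make the AIRL formulation concrete, the sketch below shows the standard AIRL discriminator and the reward it recovers. This is a minimal illustration of the generic AIRL objective the abstract builds on, not the paper's HM-AIRL implementation; the function names and scalar inputs are illustrative assumptions, and in the paper f would be computed from the hierarchical multimodal representation of an outfit state.

```python
import numpy as np

def airl_discriminator(f, log_pi):
    """AIRL discriminator D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + pi(a | s)),
    where f is the learned reward estimator and pi the current policy.
    Here f and log_pi are plain scalars for illustration."""
    return np.exp(f) / (np.exp(f) + np.exp(log_pi))

def recovered_reward(f, log_pi):
    """Reward signal used to update the policy:
    log D - log(1 - D), which algebraically equals f - log_pi."""
    d = airl_discriminator(f, log_pi)
    return np.log(d) - np.log(1.0 - d)
```

Because the discriminator has this specific structure, the reward used for the policy update reduces to f - log pi, which is what lets AIRL recover a reward that is disentangled from the current policy and, as the abstract argues, robust to variation across styles.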

