Binge Watching: Scaling Affordance Learning from Sitcoms

04/09/2018
by Xiaolong Wang, et al.

In recent years, there has been a renewed interest in jointly modeling perception and action. At the core of this investigation is the idea of modeling affordances (affordances are the opportunities for interaction that a scene offers; in other words, they represent what actions an object can be used for). However, when it comes to predicting affordances, even state-of-the-art approaches still do not use ConvNets. Why is that? Unlike semantic or 3D tasks, there does not yet exist a large-scale dataset for affordances. In this paper, we tackle the challenge of creating one of the biggest datasets for learning affordances. We use seven sitcoms to extract a diverse set of scenes and the ways actors interact with different objects in those scenes. Our dataset consists of more than 10K scenes and 28K ways humans can interact with these 10K images. We also propose a two-step approach to predict affordances in a new scene. In the first step, given a location in the scene, we classify which of 30 pose classes is the likely affordance pose. Given the pose class and the scene, we then use a Variational Autoencoder (VAE) to estimate the scale and deformation of the pose. The VAE allows us to sample from the distribution of possible poses at test time. Finally, we show the importance of large-scale data in learning a generalizable and robust model of affordances.
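The two-step pipeline lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of the idea: a classifier over the 30 pose classes at a query location, and a conditional VAE over pose scale and deformation. The module names, feature dimensions, and pose parameterization (17 joints plus a scale term) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the two-step affordance pipeline described above.
# Dimensions and the pose parameterization are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_POSE_CLASSES = 30   # from the paper: 30 canonical pose classes
POSE_DIM = 2 * 17 + 1   # assumed: 17 joint (x, y) offsets plus one scale term

class PoseClassifier(nn.Module):
    """Step 1: classify the likely affordance pose class at a scene location."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, NUM_POSE_CLASSES)

    def forward(self, scene_feat):
        # scene_feat: features pooled around the query location,
        # e.g. from a ConvNet backbone (assumed)
        return self.fc(scene_feat)

class PoseVAE(nn.Module):
    """Step 2: conditional VAE over pose scale and deformation,
    conditioned on scene features and the predicted pose class."""
    def __init__(self, feat_dim=512, z_dim=32):
        super().__init__()
        cond_dim = feat_dim + NUM_POSE_CLASSES
        self.enc = nn.Linear(POSE_DIM + cond_dim, 256)
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, POSE_DIM))

    def forward(self, pose, cond):
        h = F.relu(self.enc(torch.cat([pose, cond], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        # reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon = self.dec(torch.cat([z, cond], dim=1))
        return recon, mu, logvar

    def sample(self, cond, n=10):
        # At test time, draw n plausible poses for one conditioning vector.
        z = torch.randn(n, self.mu.out_features)
        return self.dec(torch.cat([z, cond.expand(n, -1)], dim=1))

def vae_loss(recon, pose, mu, logvar):
    """Standard VAE objective: reconstruction plus KL regularizer."""
    rec = F.mse_loss(recon, pose)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

In this sketch, training would optimize `vae_loss` on ground-truth poses, and at test time `PoseVAE.sample` draws multiple plausible scales and deformations for a given location, mirroring the sampling behavior the abstract describes.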

Related research

Populating 3D Scenes by Learning Human-Scene Interaction (12/21/2020)
Humans live within a 3D space and constantly interact with it to perform...

Hallucinating Pose-Compatible Scenes (12/13/2021)
What does human pose tell us about a scene? We propose a task to answer ...

Generating Person-Scene Interactions in 3D Scenes (08/12/2020)
High fidelity digital 3D environments have been proposed in recent years...

Conditional Temporal Variational AutoEncoder for Action Video Prediction (08/12/2021)
To synthesize a realistic action sequence based on a single human image,...

Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE (10/24/2022)
In many imaging modalities, objects of interest can occur in a variety o...

An Uncertain Future: Forecasting from Static Images using Variational Autoencoders (06/25/2016)
In a given scene, humans can often easily predict a set of immediate fut...

SynPick: A Dataset for Dynamic Bin Picking Scene Understanding (07/10/2021)
We present SynPick, a synthetic dataset for dynamic scene understanding ...
