HoME: a Household Multimodal Environment

11/29/2017
by Simon Brodeur, et al.

We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
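
Because HoME is OpenAI Gym-compatible, interacting with it should follow the standard Gym episode loop. The sketch below illustrates that loop under the classic Gym API of the era; the environment id "HoME-v0" is a hypothetical placeholder for illustration, not a confirmed identifier from the HoME package.

```python
import gym

# Hypothetical environment id -- the actual id registered by the HoME
# package may differ; this only illustrates the standard Gym loop.
env = gym.make("HoME-v0")

observation = env.reset()
done = False
total_reward = 0.0

while not done:
    # Sample a random action from the environment's action space.
    action = env.action_space.sample()
    # A Gym-compatible env returns (observation, reward, done, info)
    # from each step call.
    observation, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print(f"Episode finished with total reward {total_reward:.2f}")
```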

Related research

CH-MARL: A Multimodal Benchmark for Cooperative, Heterogeneous Multi-Agent Reinforcement Learning (08/26/2022)
We propose a multimodal (vision-and-language) benchmark for cooperative ...

SAPIEN: A SimulAted Part-based Interactive ENvironment (03/19/2020)
Building home assistant robots has long been a pursuit for vision and ro...

A modular architecture for creating multimodal agents (06/01/2022)
The paper describes a flexible and modular platform to create multimodal...

OtoWorld: Towards Learning to Separate by Learning to Move (07/12/2020)
We present OtoWorld, an interactive environment in which agents must lea...

iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes (12/05/2020)
We present iGibson, a novel simulation environment to develop robotic so...

Do People Trust Robots that Learn in the Home? (04/08/2022)
It is not scalable for assistive robotics to have all functionalities pr...

Learning to Act with Affordance-Aware Multimodal Neural SLAM (01/24/2022)
Recent years have witnessed an emerging paradigm shift toward embodied a...
