
MoDA: Map style transfer for self-supervised Domain Adaptation of embodied agents

11/29/2022
by   Eun Sun Lee, et al.

We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision. Map-based memory provides important contextual information for visual navigation and exhibits a distinctive spatial structure, composed mainly of flat walls and rectangular obstacles. Our adaptation approach leverages these inherent regularities of the estimated maps to guide the agent in overcoming the prevalent domain discrepancy in a novel environment. Specifically, we propose an efficient learning curriculum that handles visual and dynamics corruptions in an online manner, self-supervised with pseudo clean maps generated by style transfer networks. Because the map-based representation provides spatial knowledge for the agent's policy, our formulation can deploy the pretrained policy networks from simulators in a new setting. We evaluate MoDA in various practical scenarios and show that our proposed method quickly enhances the agent's performance in downstream tasks including localization, mapping, exploration, and point-goal navigation.
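The self-supervision signal described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `pseudo_clean` thresholding function is a hypothetical stand-in for MoDA's learned style transfer network, and the loss shown is a simple L1 objective on occupancy maps.

```python
import numpy as np

def pseudo_clean(noisy_map, threshold=0.5):
    """Stand-in for the style transfer network: exploit the map's
    regularity (flat walls, rectangular obstacles) by snapping noisy
    occupancy probabilities to clean binary values.
    (MoDA uses a learned network; this threshold is only illustrative.)"""
    return (noisy_map > threshold).astype(np.float32)

def self_supervised_loss(estimated_map, noisy_map):
    """L1 loss between the agent's estimated map and the pseudo clean
    target generated from the noisy observation -- no ground truth used."""
    target = pseudo_clean(noisy_map)
    return float(np.abs(estimated_map - target).mean())

# Toy example: a rectangular obstacle on an 8x8 map, corrupted by noise.
rng = np.random.default_rng(0)
clean = np.zeros((8, 8), dtype=np.float32)
clean[2:4, 2:4] = 1.0                     # rectangular obstacle
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)

# The pseudo clean target recovers the regular structure, so an agent
# whose map estimate matches the true layout incurs near-zero loss.
loss = self_supervised_loss(clean, noisy)
```

In the actual method such a loss would back-propagate into the agent's mapping networks online, while the pretrained policy consumes the cleaned map representation unchanged.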

Related research

10/14/2021 — Self-Supervised Domain Adaptation for Visual Navigation with Global Map Consistency
01/04/2017 — Demystifying Neural Style Transfer
05/12/2020 — Planning to Explore via Self-Supervised World Models
07/13/2021 — Teaching Agents how to Map: Spatial Reasoning for Multi-Object Navigation
04/30/2021 — Self-supervised Augmentation Consistency for Adapting Semantic Segmentation
02/12/2023 — Policy-Induced Self-Supervision Improves Representation Finetuning in Visual RL
10/06/2021 — Self-Supervised Knowledge Assimilation for Expert-Layman Text Style Transfer