
Unified State Representation Learning under Data Augmentation

The capacity for rapid domain adaptation is important to increasing the applicability of reinforcement learning (RL) to real-world problems. Generalization of RL agents is critical to success in the real world, yet zero-shot policy transfer is a challenging problem since even minor visual changes could make the trained agent completely fail in the new task. We propose USRA: Unified State Representation Learning under Data Augmentation, a representation learning framework that learns a latent unified state representation by performing data augmentations on its observations to improve its ability to generalize to unseen target domains. We showcase the success of our approach on the DeepMind Control Generalization Benchmark for the Walker environment and find that USRA achieves higher sample efficiency and 14.3% better domain adaptation performance compared to the best baseline results.




I Introduction

Latent Unified State Representation (LUSR) [xing2021domain] is a representation learning technique used for zero-shot domain adaptation from a source task to related target tasks (i.e., tasks with the same action space and similar transitions/rewards). In zero-shot learning, the agent is trained only on the source domain; while restrictive, this setting is applicable to real-life problems such as autonomous driving under different weather conditions.

LUSR trains a cycle-consistent Variational Autoencoder (VAE) [DBLP:journals/corr/abs-1804-10469] that outputs a domain-general representation, with features like the shape of the road, and a domain-specific representation, with features like the background color, by learning from unlabeled images taken from different tasks. The encoder is then frozen and the RL agent is trained using the learned domain-general representations.

We hypothesize that we can improve LUSR by fine-tuning the encoder during the training of the RL agent rather than freezing it. We propose USRA, Unified State Representation Learning under Data Augmentation, which is a technique that trains an encoder to learn a generalizable state representation using a pretraining phase followed by finetuning during policy learning, as shown in Figure 1. We challenge the encoder to learn online from observations collected by the policy to encourage a more useful image embedding than one only trained on random observations.

Fig. 1: Overview of the architecture of USRA during the pretraining and finetuning phase. USRA leverages image augmentations to train the encoder. Stacked input frames are concatenated before being passed to the encoder.

Our intuition is that seeing more varied (rather than random) observations from different areas of the state space that are explored during policy training can improve the encoder’s learned representations. This should allow the representation to incorporate information that is relevant to achieving a higher return rather than just for image reconstruction. To enable fine-tuning without loss of generality, we add an auxiliary objective during agent training called SVEA (Stabilized Q-Value Estimation under Augmentation) [NEURIPS2021_1e0f65eb]. This method is a domain generalization technique that adds a consistency loss term between the estimated Q-value of the latent representation of an augmented version of the frame and the target Q-value of the original frame. The augmented frames are used only to compute the SVEA loss, while policy training occurs with the original non-augmented frames.

We evaluate USRA on a domain generalization benchmark, the DeepMind Control Generalization Benchmark (DMControl-GB) [hansen2021generalization]. This benchmark is based on continuous control tasks simulated in the MuJoCo physics engine [todorov2012mujoco], which are perturbed with random colors or video backgrounds at evaluation time. We compare our method against the LUSR and SVEA baselines on the Walker task and find that USRA outperforms both, with better sample efficiency, asymptotic performance, and generalization success.

Our contribution in this paper is three-fold:

  1. We propose USRA, a unified state representation learning technique that decomposes an image into a domain specific and domain general embedding and improves the encoder through an auxiliary Q-value estimation objective on augmented images.

  2. We find that USRA has better sample efficiency than either of the baselines LUSR or SVEA.

  3. USRA has a better generalization capacity than the best baseline, as it achieves 22.6% higher returns on an unseen task that randomly varies the background video of Walker and 21.2% higher returns on a harder unseen task that overlays a video on the entire environment.

II Related Work

Self-supervised reinforcement learning has been shown to improve the data efficiency of reinforcement learning, particularly in visual domains [DBLP:journals/corr/abs-2004-04136, DBLP:journals/corr/abs-2004-13649, DBLP:journals/corr/abs-2007-05929, DBLP:journals/corr/abs-2106-04152]. Some techniques train encoders by applying auxiliary losses to data-augmented input frames to realize a consistent latent representation [DBLP:journals/corr/abs-2004-04136, DBLP:journals/corr/abs-2004-13649]. Other techniques model forward and reverse dynamics to ensure the latent representation encodes temporal information [DBLP:journals/corr/abs-2007-05929, DBLP:journals/corr/abs-2106-04152]. However, these approaches rely on the availability of multiple source domains for training, and their complexity scales with the number of variations. Instead of directly learning a policy with generalization capability, our work (USRA) focuses on the generalization of state representations.

III Method

III-A Markov Decision Process (MDP)

A Markov Decision Process is described as a 6-tuple $(\mathcal{S}, \mathcal{A}, R, P, \gamma, \rho_0)$. $\mathcal{S}$ and $\mathcal{A}$ represent the state and the action space, respectively. $R : \mathcal{S} \rightarrow \mathbb{R}$ denotes the reward function, meaning the environment provides reward $R(s)$ at state $s$. $P(s' \mid s, a)$ is the probability of transitioning into state $s'$ after taking action $a$ in state $s$. $\gamma \in [0, 1)$ is the temporal discount factor, controlling the trade-off between instantaneous reward and future rewards. $\rho_0$ denotes the initial state probability, and the policy $\pi(a \mid s)$ is the probability of choosing action $a$ given state $s$.

III-B Latent Unified State Representation (LUSR)

The LUSR method transforms the state space of an MDP from the agent’s raw observation space $\mathcal{O}$ to a latent state space $\mathcal{Z}$ through an encoder $E : \mathcal{O} \rightarrow \mathcal{Z}$. LUSR decomposes the latent state space into disjoint domain-specific and domain-general features, $z = (z_s, z_g)$, where $z_s$ is domain-specific and $z_g$ is domain-general. Intuitively, a domain-general feature is one that is useful across similar domains (like the agent’s position on the screen), while a domain-specific feature is particular to one domain (like the background color).

LUSR employs a Cycle-Consistent VAE [DBLP:journals/corr/abs-1804-10469] to disentangle the domain-general and domain-specific features. In the forward cycle of the Cycle-Consistent VAE, for two observations from the same domain, $x_1$ and $x_2$, LUSR swaps the domain-specific embeddings and reconstructs the images using the decoder $D$ such that $\hat{x}_1 = D(z_g^{(1)}, z_s^{(2)})$ and $\hat{x}_2 = D(z_g^{(2)}, z_s^{(1)})$. The loss, with encoder $E$ and decoder $D$, described in Equation 1, ensures that the domain-general encoding contains sufficient information to reconstruct the original input observation:

$$\mathcal{L}_{fwd} = \mathbb{E}\left[ \lVert \hat{x}_1 - x_1 \rVert^2 + \lVert \hat{x}_2 - x_2 \rVert^2 \right] \qquad (1)$$
In the reverse cycle of the VAE, a randomly sampled domain-general embedding $z_g$ is decoded with two domain-specific embeddings, $z_s^{(1)}$ and $z_s^{(2)}$, and re-encoded to recover the latent domain-general embeddings $z_g^{(1)} = E_g(D(z_g, z_s^{(1)}))$ and $z_g^{(2)} = E_g(D(z_g, z_s^{(2)}))$. The loss, described in Equation 2, seeks to enforce $z_g^{(1)} \approx z_g^{(2)}$ to encourage a domain-general embedding that is invariant across domains:

$$\mathcal{L}_{rev} = \mathbb{E}\left[ \lVert E_g(D(z_g, z_s^{(1)})) - E_g(D(z_g, z_s^{(2)})) \rVert_1 \right] \qquad (2)$$
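The two cycle losses above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the tiny linear encoder/decoder, the latent dimensions, and all function names are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class CycleAE(nn.Module):
    """Toy cycle-consistent autoencoder mapping observations to a
    (domain-general z_g, domain-specific z_s) decomposition."""
    def __init__(self, obs_dim=64, g_dim=8, s_dim=4):
        super().__init__()
        self.enc = nn.Linear(obs_dim, g_dim + s_dim)
        self.dec = nn.Linear(g_dim + s_dim, obs_dim)
        self.g_dim = g_dim

    def encode(self, x):
        z = self.enc(x)
        return z[:, :self.g_dim], z[:, self.g_dim:]  # (z_g, z_s)

    def decode(self, z_g, z_s):
        return self.dec(torch.cat([z_g, z_s], dim=-1))

def forward_cycle_loss(model, x1, x2):
    # Equation 1: swap the domain-specific codes of two same-domain
    # observations and require faithful reconstruction.
    z_g1, z_s1 = model.encode(x1)
    z_g2, z_s2 = model.encode(x2)
    x1_hat = model.decode(z_g1, z_s2)
    x2_hat = model.decode(z_g2, z_s1)
    return ((x1_hat - x1) ** 2).mean() + ((x2_hat - x2) ** 2).mean()

def reverse_cycle_loss(model, x1, x2, z_g):
    # Equation 2: decode one sampled z_g with two specific codes; the
    # re-encoded domain-general embeddings should coincide.
    _, z_s1 = model.encode(x1)
    _, z_s2 = model.encode(x2)
    z_g1, _ = model.encode(model.decode(z_g, z_s1))
    z_g2, _ = model.encode(model.decode(z_g, z_s2))
    return (z_g1 - z_g2).abs().mean()
```

In practice both losses would be averaged over minibatches of observation pairs drawn from the same domain, with convolutional networks in place of the linear layers.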
III-C Stabilized Q-Value Estimation under Augmentation (SVEA)

SVEA estimates the $Q$ function of the MDP using an encoder $f_\theta$, where the predicted Q-value is defined as $Q_\theta(f_\theta(o), a)$. The target state-action value function is $Q_\psi(f_\psi(o), a)$, where $\psi$ is the exponential moving average of $\theta$ defined in Equation 3:

$$\psi_{n+1} = (1 - \zeta)\,\psi_n + \zeta\,\theta_n \qquad (3)$$
for iteration step $n$ and momentum coefficient $\zeta$. SVEA performs stable Q-value estimation by updating the state-action value function $Q_\theta$ according to a temporal difference objective defined in Equation 4:

$$\mathcal{L}_{TD}(\theta) = \mathbb{E}\left[ \left\lVert Q_\theta(f_\theta(o_t), a_t) - q_t^{tgt} \right\rVert^2 \right], \qquad q_t^{tgt} = r_t + \gamma\, \mathbb{E}_{a' \sim \pi}\!\left[ Q_\psi(f_\psi(o_{t+1}), a') \right] \qquad (4)$$
SVEA leverages a collection of random image augmentations $\tau \sim \mathcal{T}$ to transform an observation $o_t$. The original images and the augmented images $\tau(o_t)$ are used in the $Q$-value estimation loss to encourage the estimated $Q$-value of both types of images to align with the target $Q$-value, as described in Equation 5:

$$\mathcal{L}_{SVEA}(\theta) = \mathbb{E}\left[ \alpha \left\lVert Q_\theta(f_\theta(o_t), a_t) - q_t^{tgt} \right\rVert^2 + \beta \left\lVert Q_\theta(f_\theta(\tau(o_t)), a_t) - q_t^{tgt} \right\rVert^2 \right] \qquad (5)$$
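A compact sketch of the EMA target update and the SVEA loss, under simplifying assumptions: the encoder is folded into the Q-network, actions are discrete for brevity (SVEA is normally paired with a continuous-control actor-critic), and the TD target uses a greedy max instead of an expectation over the policy. All names and hyperparameter values here are illustrative.

```python
import copy
import torch
import torch.nn as nn

def ema_update(target, online, zeta=0.05):
    # Equation 3: psi <- (1 - zeta) * psi + zeta * theta
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(1 - zeta).add_(zeta * p_o)

def svea_loss(q_net, q_target, obs, act, rew, next_obs, augment,
              gamma=0.99, alpha=0.5, beta=0.5):
    # The TD target is computed once, from the *non-augmented* next frame.
    with torch.no_grad():
        q_next = q_target(next_obs).max(dim=1, keepdim=True).values
        q_tgt = rew + gamma * q_next
    q_clean = q_net(obs).gather(1, act)          # Q of the original frame
    q_aug = q_net(augment(obs)).gather(1, act)   # Q of the augmented frame
    # Equation 5: both estimates are regressed toward the same target.
    return (alpha * ((q_clean - q_tgt) ** 2).mean()
            + beta * ((q_aug - q_tgt) ** 2).mean())
```

Note that gradients flow only through `q_net`; the target network is updated exclusively via `ema_update`, which is what stabilizes the regression target under augmentation.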
III-D Unified State Representation Learning under Augmentation (USRA)

USRA learns a unified state representation from the raw observation space $\mathcal{O}$ to a latent embedding space $\mathcal{Z}$. The learned encoder $f_\theta$ identifies a unified state representation through a projection head $g_\theta$, which composes with the encoder to identify a domain-specific and domain-general encoding $(z_s, z_g) = g_\theta(f_\theta(o))$. The auxiliary $Q$-value estimation loss leverages a different projection head $h_\theta$ to obtain the $Q$-value estimate $Q_\theta(h_\theta(f_\theta(o)), a)$. These two networks are trained simultaneously during the first phase, and then $f_\theta$ is fine-tuned during the second phase.
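The shared-encoder/two-head layout described above can be sketched as below; the layer types and sizes are placeholders (the actual encoder is convolutional), chosen only to show how one trunk feeds both the representation head and the Q-value head.

```python
import torch
import torch.nn as nn

class USRANets(nn.Module):
    """Shared encoder f with two heads: g produces the (z_g, z_s)
    decomposition, h produces Q-value estimates. Sizes are illustrative."""
    def __init__(self, obs_dim=64, feat_dim=32, g_dim=8, s_dim=4, n_actions=6):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())
        self.g = nn.Linear(feat_dim, g_dim + s_dim)   # representation head
        self.h = nn.Linear(feat_dim, n_actions)       # Q-value head
        self.g_dim = g_dim

    def representation(self, obs):
        z = self.g(self.f(obs))
        return z[:, :self.g_dim], z[:, self.g_dim:]   # (z_g, z_s)

    def q_values(self, obs):
        return self.h(self.f(obs))
```

Because both heads backpropagate through the same trunk `f`, the cycle-consistency and Q-estimation objectives jointly shape the encoder during pretraining.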

During an initial pre-training phase, with state-action pairs collected using a random policy, USRA leverages a non-negative linear combination of the Cycle-Consistency loss and the $Q$-value estimation objective, as shown in Equation 6,

$$\mathcal{L}_{USRA} = \lambda_1 \left( \mathcal{L}_{fwd} + \mathcal{L}_{rev} \right) + \lambda_2\, \mathcal{L}_{SVEA} \qquad (6)$$

with weights $\lambda_1 \geq 0$ and $\lambda_2 \geq 0$ on each loss. Then, during the agent learning phase, USRA enforces the temporal difference loss on augmented and non-augmented states collected from the policy during training.
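The pretraining objective is just a weighted sum, which a one-line helper makes explicit; the function name and the non-negativity check are our own framing of Equation 6, not the authors' code.

```python
def usra_pretrain_loss(cc_loss, svea_loss, lam1=1.0, lam2=1.0):
    """Equation 6: non-negative linear combination of the cycle-consistency
    loss (forward + reverse cycles) and the SVEA Q-estimation loss."""
    assert lam1 >= 0 and lam2 >= 0, "weights must be non-negative"
    return lam1 * cc_loss + lam2 * svea_loss
```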

        Train   Color    Color    Video    Video
                (easy)   (hard)   (easy)   (hard)
USRA     949     949      948      862      245
SVEA     892     888      871      703      202
LUSR     374     273      150      165       43

TABLE I: Comparison of adaptation performance on Walker domains.

Data collection is an important concern for the training of the Cycle-Consistent VAE, since it requires gathering informative observations from multiple domains without a trained policy. LUSR required a large dataset of observations collected offline, randomly sampled from the source domain and a subset of the target domains (called the seen target domains). USRA overcomes this limitation by leveraging only observations from the source domain and applying augmentations to those observations to imitate target domains without needing additional samples from them.
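The random convolution augmentation used to imitate unseen domains can be sketched as follows; this is a hedged reconstruction (function name, normalization, and the sigmoid squashing are our assumptions), built around the 3x3 random filter the paper describes.

```python
import torch
import torch.nn.functional as F

def random_conv(obs):
    """Apply one random 3x3 convolution to an image batch (N, C, H, W),
    mixing channels with random weights to imitate an unseen visual domain."""
    n, c, h, w = obs.shape
    # One random filter shared across the whole batch, re-sampled per call.
    weight = torch.randn(c, c, 3, 3) / 9.0
    out = F.conv2d(obs, weight, padding=1)   # padding=1 preserves H and W
    return torch.sigmoid(out)                # squash back to a valid pixel range
```

Re-sampling the filter on every call means each minibatch sees a fresh "synthetic domain", which is the property that substitutes for collecting real target-domain observations.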

IV Results

We design experiments to validate that the proposed framework, USRA, can more efficiently learn a well-performing policy that generalizes successfully to unseen target domains. In particular, our goal is to investigate the following questions:

  1. Can USRA learn a better performing policy with fewer samples compared to baseline approaches?

  2. Does USRA generalize better under challenging distribution shifts than baseline approaches?

We evaluate USRA with these questions in mind on the Walker environment from the DMControl Generalization Benchmark. The goal of the Walker task is for a 2D bipedal humanoid to move forward as quickly as possible by learning a stable and effective gait. We trivially set equal weights on the terms of the USRA loss. The augmentation that USRA uses to modify observations for the $Q$-value auxiliary loss is a random convolution applied to each observation. The encoder is pre-trained on 1000 frames collected from a randomly initialized random policy. The learning rate for the encoder, policy, and all projection networks is 0.001. The batch size for the LUSR loss is 16 and for SVEA fine-tuning is 128.

IV-A Comparison of Adaptation Algorithms

We compare USRA with the baseline methods LUSR and SVEA. We train each method for 2,000 episodes with an episode length of 1,000. We take the best performing policy learned from each method and find in Table I that USRA outperforms both baselines.

We find that after training, USRA learns a policy with 6.3% higher returns in the source domain than the next best method, SVEA. More importantly, USRA outperforms the other baselines on the domain adaptation task in the unseen target domains. In the Color easy domain, where the background and platform colors are changed, USRA performs 6.9% better than SVEA. In the Color hard domain, where the Walker agent’s color is additionally modified, USRA achieves 8.8% better generalization performance. In the Video easy domain, where a random video is played in place of the static background, USRA achieves 22.6% higher returns. In the Video hard domain, where the platform is also removed and a video is overlaid on the entire backdrop, USRA outperforms SVEA by 21.2%.

The training and evaluation curves of USRA and the baselines are shown in Figure 2.

Fig. 2: Training curve of average train and color (hard) eval reward for LUSR, SVEA, and USRA.

For the first 500 episodes, USRA has sample efficiency similar to the best baseline, SVEA, and by 1000 episodes USRA's sample efficiency is better. The asymptotic performance of USRA in both the training and evaluation domains exceeds that of the baselines, suggesting that USRA learns an encoder that facilitates the rapid training of a high-performing policy.

IV-B USRA Ablations

We ablated two design choices for USRA. The first ablation concerns the type of augmentation that USRA uses in the encoder pretraining phase to create different domains. We tried two augmentations: random convolution and color jitter. Our random convolution augmentation creates a 3x3 filter with random weights and applies it to an image, while our color jitter augmentation significantly varies the hue and slightly varies the brightness, contrast, and saturation of an image. As can be seen in Figure 3, random convolutions (RandConv) lead to significantly higher sample efficiency and asymptotic performance. We hypothesize that this is because random convolution is a much stronger augmentation than color jittering. An encoder that has learned to extract domain-general features from more strongly augmented frames may adapt to a wider range of domains during the fine-tuning phase.

The second ablation is whether to lower the learning rate of the encoder by 10x after the pretraining step. It is common practice to lower the learning rate of the earlier layers when finetuning a pretrained model to prevent catastrophic forgetting. In Figure 3, static lr refers to the uniform learning rate case, while differential lr refers to the case where the encoder has a lower learning rate after pretraining. We found that lowering the encoder's learning rate significantly improved sample efficiency and final reward for the random convolution variant, but had no measurable effect on the color jitter variant. This could be because the pretrained color jitter encoder is not significantly more useful than a random initialization, so catastrophic forgetting is not an issue for it; by the same reasoning, the pretrained random convolution encoder seems to have learned something more useful.
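The differential learning rate is naturally expressed with optimizer parameter groups; this is a generic PyTorch sketch (the stand-in linear modules and exact group layout are our assumptions), not the authors' training script.

```python
import torch
import torch.nn as nn

# Stand-ins for the pretrained encoder and the heads trained from scratch;
# the real networks are convolutional, but the optimizer setup is identical.
encoder = nn.Linear(64, 32)
heads = nn.Linear(32, 6)

optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-4},  # 10x lower: fine-tune gently
    {"params": heads.parameters(), "lr": 1e-3},    # base learning rate
])
```

Keeping the pretrained trunk on a smaller step size limits how far fine-tuning can drift from the pretrained representation, which is the catastrophic-forgetting safeguard the ablation tests.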

We picked the most successful configuration, USRA with random convolutions and differential learning rate, to compare to the baselines. This is the version of USRA that we are referring to whenever we mention USRA without qualification.

Fig. 3: Training curve of average train and color (hard) eval reward for different variations of USRA.

IV-C Analysis

Our proposed method, USRA, accomplishes both goals we set out to investigate in our DMControl Generalization Walker experiments. USRA demonstrates better sample efficiency than SVEA after 500 episodes. This is because USRA leverages pretraining for the encoder, and thus learns to identify domain-general and domain-specific features of observations. Then, during the finetuning stage, it incorporates new information from policy rollouts to improve the learned representations under the shift in state marginal distribution as the policy explores.

Not only does it show better sample efficiency, but USRA further exceeds expectations by demonstrating better asymptotic performance on the training and evaluation domains compared to the baseline approaches. LUSR shows poor asymptotic performance since its training dataset is limited to observations that are seen by a random policy; therefore, much of the state space is unseen after LUSR freezes the encoder and performs policy training. SVEA, on the other hand, lacks a cycle-consistency loss, so it does not regularize the mapping from observations to the learned embeddings to ensure there exists a reverse transformation that can reconstruct the original image. USRA resolves both of these issues by learning a representation that supports reconstruction while also finetuning that representation along unseen ranges of the state space to maximize performance.

USRA also accomplishes the second challenge of domain adaptation by successfully generalizing under challenging distribution shifts. This suggests that the LUSR objective (in USRA’s pretraining) of identifying domain-general and domain-specific embeddings improves the adaptation capacity, since the encoder is encouraged to identify meaningful features and disregard domain specific features such as the platform coloring or the background.

V Conclusion and Future Work

We present USRA, Unified State Representation Learning under Data Augmentation, which successfully learns generalizable state representations through encoder pretraining and finetuning using data augmentations. USRA demonstrates higher sample efficiency than the baseline methods and succeeds at the problem of zero-shot domain adaptation on unseen domains. USRA builds upon the previous technique, LUSR, which enables zero-shot adaptation by training a cycle-consistent VAE on random observations from the source and seen target domains, using the learned encoder as a latent representation for RL training on the source domain. USRA leverages the stable $Q$-value estimation technique presented by SVEA to fine-tune the learned representations while exploring new parts of the state space during agent training.

Since our method performs zero-shot transfer, for future work we want to study how well USRA performs in out-of-distribution domains, where the action space may be different. We could use our learned encoder for few-shot training of a new policy. Additionally, this method could potentially be effective for few-shot learning in sim-to-real transfer. The recovered encoder can identify robust and generalizable representations that can adapt to the domain shift seen in real-world robot scenarios. The advantage of this approach is that it would require only a few real-world images to deploy on a robot.

We also want to study how stronger augmentations could improve the learned representation. Currently, USRA uses only color jitter or random convolutions. Vision Transformers have achieved impressive results on downstream computer vision tasks and are commonly trained with very strong augmentations like CutMix and MixUp; adopting augmentations of this kind, which are even stronger than random convolution, could encourage better generalization in the learned representations.

Domain randomization is another way of training an agent to be robust to minor domain variations. It would be useful to compare these results to a domain randomization baseline, as well as to investigate whether USRA could be combined with domain randomization in some form. Perhaps USRA could rapidly train a capable agent, and then domain randomization could gradually be introduced in a curriculum learning fashion to increase asymptotic performance on large domain shifts (like the video background domains).