
Structural Similarity for Improved Transfer in Reinforcement Learning

by C. Chace Ashcraft et al.
Johns Hopkins University Applied Physics Laboratory

Transfer learning is an increasingly common approach for developing performant RL agents. However, it is not well understood how to define the relationship between source and target tasks, or how that relationship contributes to successful transfer. We present an algorithm called Structural Similarity for Two MDPs (SS2), which computes a state similarity measure between states of two finite MDPs based on previously developed bisimulation metrics, and we show that the measure satisfies the properties of a distance metric. Then, through empirical results on GridWorld navigation tasks, we provide evidence that this distance measure can be used to improve transfer performance for Q-Learning agents over previous implementations.
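To make the bisimulation-metric idea concrete, the following is a minimal, hedged sketch of a fixed-point iteration that computes a bisimulation-style distance between states of two finite MDPs. It is an illustrative simplification, not the paper's SS2 implementation: the MDPs here are deterministic, so the Wasserstein term of the usual bisimulation update reduces to the (discounted) distance between successor states. All names and the toy tasks below are invented for illustration.

```python
# Illustrative sketch (assumed, not the paper's code): bisimulation-style
# distance between states of two finite *deterministic* MDPs, computed by
# fixed-point iteration of
#   d(s, t) = max_a [ |R1(s, a) - R2(t, a)| + gamma * d(s'_a, t'_a) ]
# where s'_a, t'_a are the deterministic successors under action a.

def bisim_distance(mdp1, mdp2, actions, gamma=0.9, iters=200, tol=1e-8):
    """Each MDP is {state: {action: (reward, next_state)}}.
    Returns a dict d[(s, t)] of pairwise state distances."""
    states1, states2 = list(mdp1), list(mdp2)
    d = {(s, t): 0.0 for s in states1 for t in states2}
    for _ in range(iters):
        new_d = {}
        for s in states1:
            for t in states2:
                vals = []
                for a in actions:
                    r1, s_next = mdp1[s][a]
                    r2, t_next = mdp2[t][a]
                    vals.append(abs(r1 - r2) + gamma * d[(s_next, t_next)])
                new_d[(s, t)] = max(vals)
        converged = max(abs(new_d[k] - d[k]) for k in d) < tol
        d = new_d
        if converged:
            break
    return d

# Two toy 2-state chain tasks that differ only in the goal-state reward:
source = {0: {"go": (0.0, 1)}, 1: {"go": (1.0, 1)}}
target = {0: {"go": (0.0, 1)}, 1: {"go": (0.5, 1)}}
dist = bisim_distance(source, target, actions=["go"])
```

In a transfer setting along these lines, each target state would be initialized from the Q-values of its nearest source state under this distance; states with small pairwise distance behave similarly under every action, which is what makes the metric a plausible guide for transfer.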


Related Research

- Transfer from Multiple MDPs
- A Taxonomy of Similarity Metrics for Markov Decision Processes
- Self-Organizing Maps as a Storage and Transfer Mechanism in Reinforcement Learning
- A partition-based similarity for classification distributions
- Similarity Metrics for Different Market Scenarios in Abides
- Transfer of Temporal Logic Formulas in Reinforcement Learning
- Effective and General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping