Self-Organizing Maps for Storage and Transfer of Knowledge in Reinforcement Learning

The idea of reusing or transferring information from previously learned tasks (source tasks) to learn new tasks (target tasks) has the potential to significantly improve the sample efficiency of a reinforcement learning agent. In this work, we describe a novel approach for reusing previously acquired knowledge by using it to guide the exploration of an agent while it learns new tasks. To do so, we employ a variant of the growing self-organizing map algorithm, which is trained using a measure of similarity defined directly in the space of the vectorized representations of the value functions. In addition to enabling transfer across tasks, the resulting map simultaneously enables the efficient storage of previously acquired task knowledge in an adaptive and scalable manner. We empirically validate our approach in a simulated navigation environment, and also demonstrate its utility through simple experiments using a mobile micro-robotics platform. In addition, we demonstrate the scalability of this approach, and analytically examine its relation to the proposed network growth mechanism. Further, we briefly discuss possible improvements and extensions to this approach, as well as its relevance to real-world scenarios in the context of continual learning.
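To make the storage mechanism concrete, the following is a minimal, illustrative sketch of a growing map whose nodes store flattened value-function vectors: a new task's value function is either merged into the closest existing node or, if it is too dissimilar, triggers the growth of a new node. The class name, the growth threshold, and the learning rate here are assumptions for illustration, not the paper's actual implementation.

```python
import math

def dist(a, b):
    """Euclidean distance between two flattened value-function vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GrowingValueMap:
    """Sketch of a growing self-organizing map over vectorized value functions."""

    def __init__(self, growth_threshold, lr=0.5):
        self.nodes = []                        # stored weight vectors (one per node)
        self.growth_threshold = growth_threshold
        self.lr = lr                           # learning rate for merging updates

    def best_matching_unit(self, v):
        """Index of the node closest to vector v in value-function space."""
        return min(range(len(self.nodes)), key=lambda i: dist(self.nodes[i], v))

    def store(self, value_function):
        """Insert or merge a task's flattened value function; return the node index."""
        if not self.nodes:
            self.nodes.append(list(value_function))
            return 0
        bmu = self.best_matching_unit(value_function)
        if dist(self.nodes[bmu], value_function) > self.growth_threshold:
            # Too dissimilar from all stored knowledge: grow a new node.
            self.nodes.append(list(value_function))
            return len(self.nodes) - 1
        # Similar enough: move the winning node toward the input vector.
        self.nodes[bmu] = [w + self.lr * (x - w)
                           for w, x in zip(self.nodes[bmu], value_function)]
        return bmu

    def retrieve(self, value_function):
        """Return the stored vector closest to a (partially learned) task,
        which could then be used to guide exploration on the new task."""
        return self.nodes[self.best_matching_unit(value_function)]
```

In this sketch the map's size adapts to the diversity of the tasks encountered: dissimilar tasks add nodes, while similar tasks are consolidated into existing ones, which is the sense in which storage remains adaptive and scalable.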



