State representation learning with recurrent capsule networks

12/28/2018
by Louis Annabi, et al.

Unsupervised learning of compact and relevant state representations has proved very useful for solving complex reinforcement learning tasks. In this paper, we propose a recurrent capsule network that learns such representations by predicting future observations along an agent's trajectory.
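The training signal described in the abstract — compress the observation history into a recurrent state that is trained to predict the next observation — can be illustrated with a minimal toy sketch. The code below is an illustrative assumption, not the paper's method: it uses a single scalar recurrent unit with a linear readout and finite-difference gradient descent instead of a capsule network, and all names and hyperparameters are made up for the example.

```python
import math

# Toy sketch (NOT the paper's capsule architecture): one recurrent unit
# compresses the observation history into a scalar state h, and a linear
# readout predicts the next observation. The loss is the next-step
# prediction error, the same kind of signal the abstract describes.

def rollout(params, observations):
    """Run the recurrent predictor over a sequence and return the mean
    squared next-step prediction error."""
    a, b, c = params
    h, loss = 0.0, 0.0
    for t in range(len(observations) - 1):
        h = math.tanh(a * h + b * observations[t])   # recurrent state update
        loss += (c * h - observations[t + 1]) ** 2   # predict the next observation
    return loss / (len(observations) - 1)

def train(observations, steps=1500, lr=0.02, eps=1e-5):
    """Fit (a, b, c) by finite-difference gradient descent on the
    prediction error -- crude, but dependency-free."""
    params = [0.1, 0.1, 0.1]
    for _ in range(steps):
        base = rollout(params, observations)
        grads = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            grads.append((rollout(bumped, observations) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# A periodic observation stream: predicting it well requires the state h
# to summarize the recent past, i.e. to act as a learned state representation.
obs = [math.sin(0.5 * t) for t in range(60)]
trained = train(obs)
print("error before:", rollout([0.1, 0.1, 0.1], obs),
      "after:", rollout(trained, obs))
```

After training, the prediction error drops well below its initial value, which is the sense in which predicting future observations forces the hidden state to carry useful information about the trajectory.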
