Generalization in Transfer Learning

09/03/2019
by Suzan Ece Ada, et al.

Agents trained with deep reinforcement learning algorithms are capable of performing highly complex tasks, including locomotion in continuous environments. To attain human-level performance, research should next investigate the ability to transfer the learning acquired in one task to a different set of tasks. Concerns about generalization and overfitting in deep reinforcement learning are rarely addressed in current transfer learning research. This omission leads to underperforming benchmarks and inaccurate algorithm comparisons due to rudimentary assessments. In this study, we propose regularization techniques for deep reinforcement learning in continuous control through the application of sample elimination and early stopping. First, we emphasize the importance of including the training iteration among the hyperparameters of deep transfer learning problems. Because source task performance is not indicative of an algorithm's generalization capacity, we propose transfer learning evaluation methods that acknowledge the training iteration as a hyperparameter. In line with this, we introduce an additional step of reverting to earlier snapshots of the policy parameters, depending on the target task, to counter overfitting to the source task. Then, to generate robust policies, we discard the samples that lead to overfitting via strict clipping. Furthermore, we increase the generalization capacity on widely used transfer learning benchmarks by using an entropy bonus, different critic methods, and curriculum learning in an adversarial setup. Finally, we evaluate the robustness of these techniques and algorithms on simulated robots in target environments where the morphology of the robot, the gravity, and the tangential friction of the environment are altered from the source environment.
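To make the idea of sample elimination via strict clipping concrete, the sketch below combines a standard PPO-style clipped surrogate with a tighter "strict" band: samples whose probability ratio falls outside the strict band are dropped from the loss entirely rather than merely clipped. This is a minimal illustrative reading, not the paper's implementation; the function name, band widths, and entropy coefficient here are our own assumptions.

```python
def ppo_loss_with_sample_elimination(ratios, advantages,
                                     clip_eps=0.2, strict_eps=0.1,
                                     entropy=0.0, ent_coef=0.01):
    """Illustrative PPO-style surrogate loss with sample elimination.

    ratios:     per-sample probability ratios pi_new(a|s) / pi_old(a|s)
    advantages: per-sample advantage estimates
    clip_eps:   ordinary PPO clipping range (hypothetical value)
    strict_eps: tighter band; samples outside it are discarded
    """
    surrogates = []
    for r, a in zip(ratios, advantages):
        # Strict clipping as elimination: skip samples whose ratio has
        # drifted outside (1 - strict_eps, 1 + strict_eps) entirely.
        if not (1.0 - strict_eps < r < 1.0 + strict_eps):
            continue
        # Standard PPO clipped surrogate term (to be maximized).
        clipped_r = max(1.0 - clip_eps, min(1.0 + clip_eps, r))
        surrogates.append(min(r * a, clipped_r * a))
    policy_term = sum(surrogates) / len(surrogates) if surrogates else 0.0
    # Entropy bonus encourages stochastic policies, which tend to
    # generalize better across perturbed target environments.
    return -(policy_term + ent_coef * entropy)
```

With `strict_eps=0.1`, a sample whose ratio is 1.5 contributes nothing to the gradient, whereas under ordinary clipping it would still contribute through the clipped term; the design intuition is that such far-off-policy samples are exactly the ones that drive overfitting to the source task.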


