
Robust Dual View Deep Agent

by Ibrahim M. Sobh, et al.

Motivated by recent advances in machine learning using Deep Reinforcement Learning, this paper proposes a modified architecture that produces more robust agents and speeds up the training process. Our architecture is based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, where the total input dimensionality is halved by dividing the input into two independent streams. We use ViZDoom, a 3D world software platform based on the classical first-person shooter video game Doom, as a test case. The experiments show that, in comparison to single-input agents, the proposed architecture matches their playing performance, exhibits more robust behavior, and achieves a significant reduction of almost 30% in the number of training parameters.
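The parameter saving described above comes from splitting the input into two independent half-dimensional streams before the dense layers. A minimal back-of-the-envelope sketch, assuming illustrative layer sizes (the feature-map width of 2592 and output sizes below are hypothetical, not the paper's exact configuration, and the overall ~30% figure also depends on the shared convolutional layers):

```python
# Hedged sketch: compare dense-layer parameter counts for a single-view
# agent vs. a dual-view agent whose input is split into two independent
# streams. All sizes here are illustrative assumptions.

def fc_params(n_in, n_out):
    """Parameters of a fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

# Single-view agent: one stream sees the full flattened feature map.
single = fc_params(n_in=2592, n_out=256)

# Dual-view agent: two independent streams, each with half the input
# dimensionality and half the hidden units, so the concatenated output
# keeps the same total width (256) for the actor/critic heads.
dual = 2 * fc_params(n_in=2592 // 2, n_out=128)

print(single, dual, round(1 - dual / single, 3))
```

Because a dense layer's weight count scales with the product of input and output widths, halving both in each stream roughly quarters that stream's weights; two such streams together land near half the original dense-layer cost, and the blended effect across the whole network is what yields the paper's reported reduction.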



