
Robust Dual View Deep Agent

by   Ibrahim M. Sobh, et al.

Motivated by recent advances in machine learning using Deep Reinforcement Learning, this paper proposes a modified architecture that produces more robust agents and speeds up the training process. Our architecture is based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, with the total input dimensionality halved by dividing the input into two independent streams. We use ViZDoom, a 3D world platform based on the classical first-person shooter video game Doom, as a test case. The experiments show that, in comparison to single-input agents, the proposed architecture matches their playing performance, behaves more robustly, and reduces the number of training parameters by almost 30%.
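The parameter savings come mostly from the fully connected layer that follows the convolutional stack: halving one spatial dimension of each stream shrinks the flattened feature vector feeding that layer. A back-of-the-envelope sketch, assuming a hypothetical DQN-style stack (16 filters 8x8 stride 4, 32 filters 4x4 stride 2, then FC-256; not necessarily the exact architecture used in the paper), illustrates how two independent half-width streams can still cut total parameters by roughly 30%:

```python
def conv_out(size, kernel, stride):
    # Spatial output size of a valid (no-padding) convolution.
    return (size - kernel) // stride + 1

def conv_params(c_in, c_out, k):
    # Weights plus one bias per output channel.
    return c_out * (c_in * k * k + 1)

def fc_params(n_in, n_out):
    return n_out * (n_in + 1)

def stream_params(h, w, c_in=1):
    # Assumed DQN-style stack: conv 16@8x8/4, conv 32@4x4/2, FC-256.
    p = conv_params(c_in, 16, 8) + conv_params(16, 32, 4)
    h = conv_out(conv_out(h, 8, 4), 4, 2)
    w = conv_out(conv_out(w, 8, 4), 4, 2)
    return p + fc_params(32 * h * w, 256)

single = stream_params(84, 84)      # one stream over the full 84x84 frame
dual = 2 * stream_params(84, 42)    # two independent half-width streams
reduction = 1 - dual / single
print(single, dual, f"{reduction:.1%}")
```

Under these assumptions the single-stream network has 673,072 parameters against 461,408 for the dual-stream one, a reduction of about 31%: the duplicated convolutional weights are cheap compared with the shrunken fully connected layer.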


