Visual Sensor Network Reconfiguration with Deep Reinforcement Learning

08/13/2018
by Paul Jasek, et al.

We present an approach to reconfiguring dynamic visual sensor networks with deep reinforcement learning (RL). Our RL agent uses a modified asynchronous advantage actor-critic (A3C) framework with the recently proposed Relational Network module at the foundation of its network architecture. To address the sample inefficiency of current model-free reinforcement learning methods, we train the system in an abstract simulation environment that represents inputs from a dynamic scene. We validate the system on a real-world scenario using preexisting object detection and tracking algorithms.
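As a rough illustration (not the authors' implementation), the core Relational Network operation underlying such an architecture applies a learned pairwise function g to every ordered pair of entity embeddings (e.g. per-sensor or per-object features), sums the results, and passes them through a readout function f. A minimal NumPy sketch, with hypothetical dimensions and random weights standing in for trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relational_module(objects, Wg, Wf):
    """Relation Network: RN(O) = f(sum_{i,j} g(o_i, o_j)).

    objects: (n, d) array of entity embeddings.
    Wg, Wf: weights for the pairwise function g and readout f
            (single linear+ReLU layers here; real models use MLPs).
    """
    n, d = objects.shape
    # Build all ordered pairs (o_i, o_j) -> shape (n*n, 2d)
    left = np.repeat(objects, n, axis=0)
    right = np.tile(objects, (n, 1))
    pairs = np.concatenate([left, right], axis=1)
    # Pairwise relation function g, summed over all pairs
    relations = relu(pairs @ Wg)    # (n*n, h)
    pooled = relations.sum(axis=0)  # (h,)
    # Readout f produces the final relational feature vector
    return relu(pooled @ Wf)        # (out,)

# Toy usage with random weights (illustrative only)
rng = np.random.default_rng(0)
objs = rng.standard_normal((5, 8))        # 5 entities, 8-dim embeddings
Wg = rng.standard_normal((16, 32)) * 0.1  # g: 2d=16 -> hidden=32
Wf = rng.standard_normal((32, 4)) * 0.1   # f: hidden=32 -> out=4
feat = relational_module(objs, Wg, Wf)
print(feat.shape)  # (4,)
```

In an actor-critic agent, a feature vector like `feat` would be fed to the policy and value heads; the permutation-invariant sum over pairs is what lets the module reason about relations between a varying number of scene entities.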
