Visual Tracking by means of Deep Reinforcement Learning and an Expert Demonstrator

09/18/2019
by Matteo Dunnhofer, et al.

In the last decade, many different algorithms have been proposed to track a generic object in videos. Executing them on recent large-scale video datasets yields a large and diverse set of tracking behaviours. Recent trends in Reinforcement Learning have shown that demonstrations from an expert agent can be used efficiently to speed up policy learning. Taking inspiration from such works and from recent applications of Reinforcement Learning to visual tracking, we propose two novel trackers: A3CT, which exploits demonstrations of a state-of-the-art tracker to learn an effective tracking policy, and A3CTD, which takes advantage of the same expert tracker to correct its behaviour during tracking. Through an extensive experimental validation on the GOT-10k, OTB-100, LaSOT, UAV123 and VOT benchmarks, we show that the proposed trackers achieve state-of-the-art performance while running in real time.
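
To make the idea concrete, below is a minimal, hypothetical PyTorch-style sketch of how an expert demonstrator could drive policy learning and correct the learner at test time. It is not the authors' implementation: the network architecture, the loss weighting, and every name used here (PolicyNet, demonstration_loss, track_step, the confidence threshold) are illustrative assumptions. The training part imitates the expert's bounding-box shifts while regressing a value estimate; the inference part falls back to the expert's box when that value estimate is low, in the spirit of the A3CT/A3CTD split described above.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Predicts a relative bounding-box shift (dx, dy, dw, dh) and a state value."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.actor = nn.Linear(64, 4)   # bounding-box shift
        self.critic = nn.Linear(64, 1)  # value estimate, used to gate the expert

    def forward(self, feats):
        h = self.backbone(feats)
        return self.actor(h), self.critic(h)

def demonstration_loss(policy, feats, expert_shift, returns):
    """Imitate the expert's shift (actor) and regress the return (critic)."""
    pred_shift, value = policy(feats)
    actor_loss = nn.functional.smooth_l1_loss(pred_shift, expert_shift)
    critic_loss = nn.functional.mse_loss(value.squeeze(-1), returns)
    return actor_loss + 0.5 * critic_loss

def track_step(policy, feats, prev_box, expert_box, conf_threshold=0.5):
    """At test time, keep the learner's box only if the critic is confident;
    otherwise fall back to the expert tracker's prediction."""
    with torch.no_grad():
        shift, value = policy(feats)
    learner_box = prev_box + shift
    return learner_box if value.item() > conf_threshold else expert_box

# Toy usage with random tensors standing in for per-frame appearance features
# and for the expert tracker's outputs.
policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

feats = torch.randn(8, 256)        # batch of frame features
expert_shift = torch.randn(8, 4)   # shifts produced by the expert tracker
returns = torch.rand(8)            # e.g. discounted overlap with the ground truth
loss = demonstration_loss(policy, feats, expert_shift, returns)
opt.zero_grad(); loss.backward(); opt.step()

box = track_step(policy, torch.randn(1, 256),
                 prev_box=torch.zeros(1, 4),
                 expert_box=torch.tensor([[10.0, 10.0, 50.0, 50.0]]))
```

Presumably the A3CT name refers to an A3C-style (asynchronous advantage actor-critic) training scheme; the sketch above only isolates the demonstration-imitation and expert-fallback ideas, not the asynchronous training machinery.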
