ROAM: Recurrently Optimizing Tracking Model

07/28/2019
by Tianyu Yang, et al.

Online updating of a tracking model to adapt to object appearance variations is challenging. For SGD-based model optimization, a large learning rate may help the model converge faster but risks letting the loss fluctuate wildly. Traditional optimization methods therefore choose a relatively small learning rate and iterate for more steps to converge the model, which is time-consuming. In this paper, we propose to train, offline and in a meta-learning setting, a recurrent neural optimizer that predicts an adaptive learning rate for model updating, so that the model converges within a few gradient steps. This substantially improves the convergence speed of updating the tracking model while achieving better performance. Moreover, we propose a simple yet effective training trick called Random Filter Scaling to prevent overfitting, which greatly boosts performance. Finally, we extensively evaluate our tracker, ROAM, on the OTB, VOT, GOT-10K, TrackingNet and LaSOT benchmarks, and our method performs favorably against state-of-the-art algorithms.
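The core idea, predicting per-parameter learning rates with a recurrent network so that one or a few adaptive gradient steps suffice to update the tracking model, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the RecurrentLRPredictor module, the random_filter_scaling helper, the bounded step size, and the toy regression loss below are illustrative assumptions, and ROAM's actual optimizer architecture, inputs, and meta-training loop differ in detail.

```python
# Minimal sketch of a recurrent neural optimizer for one-step model updating.
# Illustrative only; not the ROAM code.
import torch
import torch.nn as nn

class RecurrentLRPredictor(nn.Module):
    """Predicts an element-wise learning rate from the current gradient."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.rnn = nn.GRUCell(1, hidden_size)   # coordinate-wise recurrence
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, grad, hidden):
        # Treat every parameter coordinate as an independent sequence element.
        g = grad.reshape(-1, 1)
        hidden = self.rnn(g, hidden)
        lr = torch.sigmoid(self.head(hidden)) * 0.1  # bounded, positive step size (assumed)
        return lr.reshape(grad.shape), hidden

def random_filter_scaling(filters, scale_range=(0.1, 10.0)):
    """Randomly rescale each filter so the optimizer sees diverse gradient magnitudes."""
    lo, hi = scale_range
    scales = torch.empty(filters.shape[0], 1, 1, 1).uniform_(lo, hi)
    return filters * scales

# --- one "inner" adaptive update on a toy stand-in for the tracking model ---
conv = nn.Conv2d(3, 8, 3, padding=1)
with torch.no_grad():
    conv.weight.copy_(random_filter_scaling(conv.weight))  # augmentation during offline training

optimizer_net = RecurrentLRPredictor()
hidden = torch.zeros(conv.weight.numel(), 16)

x, target = torch.randn(1, 3, 32, 32), torch.randn(1, 8, 32, 32)
loss = nn.functional.mse_loss(conv(x), target)
grad, = torch.autograd.grad(loss, conv.weight)

lr, hidden = optimizer_net(grad, hidden)
updated_weight = conv.weight - lr * grad   # one gradient step with predicted learning rates
```

In the full method, the recurrent optimizer itself is trained offline by backpropagating the tracking loss on future frames through such updates; the sketch above shows only a single inner update.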

Related research

12/24/2019  CProp: Adaptive Learning Rate Scaling from Past Gradient Conformity
Most optimizers including stochastic gradient descent (SGD) and its adap...

03/20/2020  Event-Based Control for Online Training of Neural Networks
Convolutional Neural Network (CNN) has become the most used method for i...

11/30/2019  Learning Rate Dropout
The performance of a deep neural network is highly dependent on its trai...

11/25/2019  Real-Time Object Tracking via Meta-Learning: Efficient Model Adaptation and One-Shot Channel Pruning
We propose a novel meta-learning framework for real-time object tracking...

02/17/2021  POLA: Online Time Series Prediction by Adaptive Learning Rates
Online prediction for streaming time series data has practical use for m...

03/06/2018  Understanding Short-Horizon Bias in Stochastic Meta-Optimization
Careful tuning of the learning rate, or even schedules thereof, can be c...

11/01/2019  Does Adam optimizer keep close to the optimal point?
The adaptive optimizer for training neural networks has continually evol...