RNN Training along Locally Optimal Trajectories via Frank-Wolfe Algorithm

10/12/2020
by Yun Yue, et al.

We propose a novel and efficient training method for RNNs that iteratively seeks a local minimum of the loss surface within a small region around the current iterate and uses the resulting direction for the outer-loop update. We propose to use the Frank-Wolfe (FW) algorithm for this inner search. Although FW implicitly involves normalized gradients, which can lead to a slow convergence rate, we develop a training method for which, despite the additional per-iteration cost, the overall training cost is empirically observed to be lower than that of back-propagation. Our method amounts to a new Frank-Wolfe variant that is, in essence, an SGD algorithm with a restart scheme. We prove that under certain conditions our algorithm has a sublinear convergence rate of O(1/ϵ) for error tolerance ϵ. We then conduct experiments on several benchmark datasets, including ones that exhibit long-term dependencies, and show significant performance improvements. We also experiment with deep RNN architectures and show efficient training performance. Finally, we demonstrate that our training method is robust to noisy data.
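The outer/inner structure described above, a Frank-Wolfe inner loop over a small region around the current weights whose result restarts the outer iterate, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the choice of an L∞ ball as the constraint set, the radius, the 2/(k+2) step size, and all function and parameter names are assumptions made for the example.

```python
import itertools
import torch

def fw_inner_step(params, center, grads, radius, gamma):
    """One Frank-Wolfe step over the ball {w : ||w - center||_inf <= radius}.

    The linear minimization oracle (LMO) for this ball is
    s = center - radius * sign(g), so the search direction depends only on
    the sign of the gradient.
    """
    with torch.no_grad():
        for p, c, g in zip(params, center, grads):
            s = c - radius * torch.sign(g)   # LMO solution on the L-inf ball
            p.add_(gamma * (s - p))          # convex step toward s

def train_fw(model, loss_fn, data_loader, outer_steps=100, inner_steps=5,
             radius=1e-2):
    """Outer loop: re-center a small region at the current weights
    (the "restart"), then run a few FW steps inside it."""
    params = list(model.parameters())
    batches = itertools.cycle(data_loader)
    for _ in range(outer_steps):
        center = [p.detach().clone() for p in params]   # restart point
        for k in range(inner_steps):
            x, y = next(batches)
            model.zero_grad()
            loss_fn(model(x), y).backward()
            grads = [p.grad for p in params]
            gamma = 2.0 / (k + 2)                       # standard FW step size
            fw_inner_step(params, center, grads, radius, gamma)
    return model
```

Note how the LMO over the L∞ ball depends only on the sign of the gradient, which is one concrete way FW ends up working with normalized gradients; the actual constraint set, step-size schedule, and restart criterion used in the paper may differ from this sketch.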
