An Ode to an ODE
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d). This nested system of two flows, in which the parameter flow is constrained to lie on the compact manifold, provides stable and effective training and provably solves the gradient vanishing/explosion problem that is intrinsic to training deep neural network architectures such as Neural ODEs. Consequently, it leads to better downstream models, as we show by training reinforcement learning policies with evolution strategies and, in the supervised learning setting, by comparing against previous SOTA baselines. We provide strong convergence results for the proposed mechanism that are independent of the depth of the network, supporting our empirical studies. Our results reveal an intriguing connection between the theory of deep neural networks and the field of matrix flows on compact manifolds.
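To make the nested-flow idea concrete, the sketch below shows a minimal Euler discretization of two coupled flows: a hidden state x driven by a weight matrix W(t), where W(t) itself evolves on the orthogonal group O(d). This is an illustrative toy, not the authors' implementation: the function names, the tanh nonlinearity, the fixed skew-symmetric generator, and the Cayley-transform step are all assumptions chosen here because the Cayley transform of a skew-symmetric matrix is exactly orthogonal, so W stays on O(d) at every step.

```python
# Hypothetical sketch of the ODEtoODE nesting; all names and design choices
# here are illustrative assumptions, not the paper's actual construction.
import numpy as np

def cayley_step(W, A, h):
    """One discretized step of the matrix flow W' = W A with skew-symmetric A.

    (I - h/2 A)^{-1} (I + h/2 A) is orthogonal whenever A is skew-symmetric,
    so the update keeps W on O(d) exactly (up to floating point).
    """
    I = np.eye(A.shape[0])
    return W @ np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

def ode_to_ode(x0, B, steps=10, h=0.1):
    """Euler-discretized nested flows: x' = tanh(W(t) x), W' = W * skew(B)."""
    A = B - B.T                      # skew-symmetric generator (fixed, for simplicity)
    d = x0.shape[0]
    W = np.eye(d)                    # initialize on the orthogonal group
    x = x0
    for _ in range(steps):
        x = x + h * np.tanh(W @ x)   # main-flow step on the hidden state
        W = cayley_step(W, A, h)     # parameter-flow step on O(d)
    return x, W

rng = np.random.default_rng(0)
d = 4
x, W = ode_to_ode(rng.standard_normal(d), rng.standard_normal((d, d)))
print(np.max(np.abs(W.T @ W - np.eye(d))))  # ~1e-16: W remains orthogonal
```

Because W(t) is orthogonal throughout, the Jacobian contribution of the linear map has all singular values equal to one, which is the informal reason constraining the parameter flow to the compact manifold mitigates vanishing and exploding gradients.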