Benny Avelin



  • Neural ODEs as the Deep Limit of ResNets with constant weights

    In this paper we prove that, in the deep limit, stochastic gradient descent on a ResNet-type deep neural network, in which every layer shares the same weight matrix, converges to stochastic gradient descent for a Neural ODE, and that the corresponding value/loss functions converge as well. In the context of minimization by stochastic gradient descent, this gives a theoretical foundation for viewing Neural ODEs as the deep limit of ResNets. Our proof is based on certain decay estimates for associated Fokker-Planck equations.

    06/28/2019 ∙ by Benny Avelin, et al.

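The following is a minimal NumPy sketch (not the authors' code) of the architecture the abstract refers to: a ResNet whose residual blocks all reuse one weight matrix is an explicit Euler discretization of a Neural ODE, so increasing the depth corresponds to refining the time grid on [0, 1]. All function and variable names below are hypothetical, and the sketch only illustrates the forward pass, not the paper's actual result about SGD dynamics and value functions.

```python
import numpy as np

def f(x, W, b):
    """Shared residual block / ODE right-hand side: tanh(W x + b)."""
    return np.tanh(W @ x + b)

def resnet_forward(x0, W, b, num_layers):
    """Weight-tied ResNet: x_{k+1} = x_k + (1/N) * f(x_k, W, b)."""
    x = x0.copy()
    h = 1.0 / num_layers          # step size shrinks as depth grows
    for _ in range(num_layers):
        x = x + h * f(x, W, b)    # every layer reuses the same (W, b)
    return x

def neural_ode_forward(x0, W, b, num_steps=10_000):
    """Fine Euler solve of dx/dt = f(x, W, b) on [0, 1], standing in for the ODE limit."""
    return resnet_forward(x0, W, b, num_steps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))
    b = rng.normal(size=d)
    x0 = rng.normal(size=d)

    ode_out = neural_ode_forward(x0, W, b)
    for depth in (2, 8, 32, 128):
        gap = np.linalg.norm(resnet_forward(x0, W, b, depth) - ode_out)
        print(f"depth {depth:4d}: distance to ODE solution {gap:.2e}")
```

Running the script shows the distance between the weight-tied ResNet output and the fine ODE solve shrinking as depth grows, which is the forward-pass analogue of the "deep limit". The paper's theorem goes further, establishing convergence of the SGD training dynamics and loss values via Fokker-Planck decay estimates, which this sketch does not attempt to reproduce.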