Mirror descent of Hopfield model

11/29/2022
by Hyungjoon Soh, et al.

Mirror descent is a gradient descent method that operates in a dual space of the parametric model. The idea has been developed extensively in convex optimization but has not yet been widely applied in machine learning. In this study, we show how mirror descent can enable data-driven parameter initialization of neural networks. Adopting the Hopfield model as a prototype of neural networks, we demonstrate that mirror descent can train the model more effectively than the usual gradient descent with random parameter initialization.
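To make the phrase "gradient descent in a dual space" concrete, the sketch below shows a generic mirror descent update: the gradient step is taken on the dual variables grad Phi(x) and then mapped back to the primal space via the inverse mirror map. This is only an illustration with the negative-entropy mirror map on the probability simplex (the exponentiated-gradient update); it is not the authors' construction for the Hopfield model, and the function names and toy objective are hypothetical.

    import numpy as np

    # Generic mirror descent with the negative-entropy mirror map Phi(x) = sum_i x_i log x_i.
    # Update rule: y = grad Phi(x_t) - eta * grad f(x_t)   (step in the dual space)
    #              x_{t+1} = (grad Phi)^{-1}(y)            (map back to the primal space)
    # For this Phi the two steps reduce to the exponentiated-gradient update below.
    def mirror_descent(grad_f, x0, eta=0.1, steps=200):
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            g = grad_f(x)
            x = x * np.exp(-eta * g)   # multiplicative (dual-space) step
            x = x / x.sum()            # renormalize back onto the simplex
        return x

    # Toy objective: f(x) = 0.5 * ||x - target||^2 restricted to the simplex.
    target = np.array([0.7, 0.2, 0.1])
    grad_f = lambda x: x - target

    x_star = mirror_descent(grad_f, x0=np.ones(3) / 3)
    print(x_star)  # approaches the target distribution

In the Hopfield setting described in the abstract, the primal variables would instead be the model parameters and the mirror map a suitable convex potential; the point of the sketch is only the two-step structure of the update.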


