Global Convergence Rate of Deep Equilibrium Models with General Activations

02/11/2023
by   Lan V. Truong, et al.

In a recent paper, Ling et al. investigated the over-parameterized Deep Equilibrium Model (DEQ) with ReLU activation and proved that, for the quadratic loss, gradient descent converges to a globally optimal solution at a linear rate. In this paper, we show that this result still holds for DEQs with any general activation that has bounded first and second derivatives. Since such a general activation is non-linear, we design a general population Gram matrix and develop a new form of dual activation based on a Hermite polynomial expansion.
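To make the setting concrete, the following is a minimal sketch of a DEQ layer: a hidden state defined implicitly as the fixed point z* = phi(W z* + U x), solved by simple fixed-point iteration. Here tanh stands in for "a general activation with bounded first and second derivatives"; all names (`W`, `U`, `solve_equilibrium`) and the contraction-based scaling are illustrative assumptions, not the paper's exact construction or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 8, 5  # hidden width, input dimension

# Scale W so the fixed-point map is a contraction (spectral norm 0.5);
# tanh is 1-Lipschitz, so the iteration below converges. This scaling
# is an assumption made for the sketch, not taken from the paper.
W = rng.standard_normal((d, d))
W *= 0.5 / np.linalg.norm(W, 2)
U = rng.standard_normal((d, p))

def solve_equilibrium(x, tol=1e-10, max_iter=500):
    """Find z satisfying z = tanh(W @ z + U @ x) by fixed-point iteration."""
    z = np.zeros(d)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.standard_normal(p)
z_star = solve_equilibrium(x)
# At equilibrium the residual z - tanh(Wz + Ux) is (numerically) zero.
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x))
print(residual)
```

The key point the paper exploits is that the smooth activation (unlike ReLU) has bounded derivatives, which is what allows the Hermite-expansion analysis of the associated Gram matrix.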

