Deforming the Loss Surface to Affect the Behaviour of the Optimizer

09/14/2020
by Liangming Chen et al.

In deep learning, the optimization process is usually assumed to run on a loss surface of fixed shape. In contrast, this paper proposes the novel concept of a deformation mapping, which reshapes the loss surface in order to affect the behaviour of the optimizer. Vertical deformation mapping (VDM), one type of deformation mapping, can steer the optimizer into a flat region, which often implies better generalization performance. We design a variety of VDMs and analyse the contribution each makes to the loss surface. After defining the local M region, we show theoretically that deforming the loss surface enhances the ability of a gradient descent optimizer to filter out sharp minima. Using visualizations of the loss landscape, we evaluate the flatness of the minima obtained on CIFAR-100 by the original optimizer and by VDM-enhanced optimizers; the results confirm that VDMs do find flatter regions. We also compare popular convolutional neural networks enhanced by VDMs with their original counterparts on ImageNet, CIFAR-10, and CIFAR-100. The results are striking: every model equipped with VDMs improves significantly. For example, the top-1 test accuracy of ResNet-20 on CIFAR-100 increases by 1.46%, with marginal extra computational overhead.
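To make the idea concrete, here is a minimal sketch of how a vertical deformation mapping could be wired into a standard training step. The mapping g(x) = log(1 + x) is a hypothetical illustration chosen for simplicity, not necessarily one of the VDMs designed in the paper. Because a VDM is applied to the scalar loss, the chain rule rescales every parameter gradient by g'(L), changing the surface the optimizer sees while leaving the locations of the minima unchanged (g is monotonically increasing).

```python
import torch
import torch.nn as nn

def vdm(loss: torch.Tensor) -> torch.Tensor:
    """Hypothetical vertical deformation mapping g(x) = log(1 + x).

    g is smooth and monotonically increasing, so g(L) attains its minima at
    exactly the same parameter values as L; only the surface shape changes.
    """
    return torch.log1p(loss)

# Toy training step: optimize the deformed loss g(L) instead of L itself.
# By the chain rule, d g(L)/dw = g'(L) * dL/dw, so the only change to the
# update is a loss-dependent rescaling of every gradient.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

optimizer.zero_grad()
loss = criterion(model(x), y)
vdm(loss).backward()   # backprop through the deformation
optimizer.step()
```

In this sketch the deformation merely rescales the step size as a function of the current loss value; the VDMs in the paper are constructed so that the resulting dynamics favour flat minima over sharp ones.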

Related research

Deforming the Loss Surface (07/24/2020)
In deep learning, it is usually assumed that the shape of the loss surfa...

Gravity Optimizer: a Kinematic Approach on Optimization in Deep Learning (01/22/2021)
We introduce Gravity, another algorithm for gradient-based optimization....

Directional Pruning of Deep Neural Networks (06/16/2020)
In the light of the fact that the stochastic gradient descent (SGD) ofte...

Sharpness-Aware Training for Free (05/27/2022)
Modern deep neural networks (DNNs) have achieved state-of-the-art perfor...

L4: Practical loss-based stepsize adaptation for deep learning (02/14/2018)
We propose a stepsize adaptation scheme for stochastic gradient descent....

Learned Optimizers that Scale and Generalize (03/14/2017)
Learning to learn has emerged as an important direction for achieving ar...

An Alternative Surrogate Loss for PGD-based Adversarial Testing (10/21/2019)
Adversarial testing methods based on Projected Gradient Descent (PGD) ar...
