HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

11/23/2021
by Huanrui Yang, et al.

With the recent demand for deploying neural network models on mobile and edge devices, it is desirable to improve the model's generalizability on unseen testing data, as well as to enhance the model's robustness under fixed-point quantization for efficient deployment. Minimizing the training loss, however, provides few guarantees on the generalization and quantization performance. In this work, we fulfill the need to improve generalization and quantization performance simultaneously by theoretically unifying them under the framework of improving the model's robustness against bounded weight perturbation and minimizing the eigenvalues of the Hessian matrix with respect to model weights. We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving generalization and quantization performance. HERO enables up to a 3.8% gain on test accuracy, up to 30% higher accuracy under 80% training label perturbation, and the best post-training quantization accuracy across a wide range of precisions, including a >10% accuracy improvement over SGD-trained models, for common model architectures on various datasets.
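The abstract describes the mechanism only at a high level, so the PyTorch sketch below illustrates the robust-optimization view it refers to: evaluating the loss at an approximate worst-case point inside a bounded ball around the weights, so that training favors flat minima with small Hessian eigenvalues. This is a minimal first-order illustration in the spirit of the paper's framework, not the authors' implementation; the names model, loss_fn, optimizer, and the perturbation radius epsilon are assumed placeholders.

```python
import torch

def robust_step(model, loss_fn, x, y, optimizer, epsilon=0.05):
    """One robust training step: minimize the loss at the (approximate)
    worst case inside an epsilon-ball around the weights. A small
    worst-case loss implies a flat minimum, i.e. small Hessian
    eigenvalues -- the property the paper's framework targets."""
    optimizer.zero_grad()

    # 1) Gradient of the nominal loss at the current weights.
    loss_fn(model(x), y).backward()

    # 2) First-order inner maximization: move to w + epsilon * g / ||g||.
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    scale = epsilon / (torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(g * scale)

    # 3) The gradient of the perturbed loss drives the actual update.
    optimizer.zero_grad()
    perturbed_loss = loss_fn(model(x), y)
    perturbed_loss.backward()

    # 4) Undo the perturbation, then step with the robust gradient.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(g * scale)
    optimizer.step()
    return perturbed_loss.item()
```

By a second-order Taylor expansion, the perturbed loss is approximately L(w) + epsilon * ||grad L(w)|| + (epsilon^2 / 2) * lambda_max(H), so driving it down also suppresses the largest Hessian eigenvalue; this is the link between the bounded-perturbation and Hessian-eigenvalue views that the abstract unifies.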


Related research

EPTQ: Enhanced Post-Training Quantization via Label-Free Hessian (09/20/2023)
Quantization of deep neural networks (DNN) has become a key element in t...

QuIP: 2-Bit Quantization of Large Language Models With Guarantees (07/25/2023)
This work studies post-training parameter quantization in large language...

HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks (11/10/2019)
Quantization is an effective method for reducing memory footprint and in...

MRQ: Support Multiple Quantization Schemes through Model Re-Quantization (08/01/2023)
Despite the proliferation of diverse hardware accelerators (e.g., NPU, T...

Gradient-Based Post-Training Quantization: Challenging the Status Quo (08/15/2023)
Quantization has become a crucial step for the efficient deployment of d...

Robust Quantization: One Model to Rule Them All (02/18/2020)
Neural network quantization methods often involve simulating the quantiz...

Resource Efficient Neural Networks Using Hessian Based Pruning (06/12/2023)
Neural network pruning is a practical way for reducing the size of train...
