Learning Multimodal Fixed-Point Weights using Gradient Descent

07/16/2019
by Lukas Enderich, et al.

Due to their high computational complexity, deep neural networks are still limited to powerful processing units. To reduce model complexity by means of low-bit fixed-point quantization, we propose a gradient-based optimization strategy that generates a symmetric mixture of Gaussian modes (SGM), where each mode corresponds to a particular quantization stage. We achieve state-of-the-art performance with 2-bit weights and illustrate the model's ability to adapt its weights on its own during training.
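The abstract only sketches the idea, so here is a minimal illustrative example of the general technique it names: a mixture-of-Gaussians penalty whose modes sit on symmetric fixed-point quantization levels, added to the task loss and minimized by gradient descent so the weights drift toward those levels. This is not the authors' implementation; the level set `levels`, the variance `sigma`, the penalty weight `lam`, and the stand-in task loss are hypothetical choices for a 2-bit example.

```python
# Illustrative sketch (PyTorch): pull weights toward symmetric quantization
# levels with a mixture-of-Gaussians negative log-likelihood penalty.
import torch

def sgm_penalty(weights, levels, sigma):
    """Negative log-likelihood of the weights under a symmetric mixture of
    Gaussians whose means are the fixed-point quantization levels."""
    diff = weights.unsqueeze(1) - levels.unsqueeze(0)   # (N, K) distances to each level
    log_comp = -0.5 * (diff / sigma) ** 2               # unnormalized per-mode log densities
    log_mix = torch.logsumexp(log_comp, dim=1)          # equal-weight mixture over modes
    return -log_mix.mean()

# Hypothetical symmetric 2-bit stages (unit step size): {-1, -1/3, 1/3, 1}
levels = torch.tensor([-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0])

w = torch.randn(1000, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
lam, sigma = 0.1, 0.1

for step in range(100):
    task_loss = (w ** 2).mean()                          # stand-in for the real task loss
    loss = task_loss + lam * sgm_penalty(w, levels, sigma)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training with such a penalty, the weights cluster around the chosen levels, so rounding each weight to its nearest level (the final fixed-point assignment) loses little accuracy.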


Related research

02/19/2020 - SYMOG: learning symmetric mixture of Gaussian modes for improved fixed-point quantization
Deep neural networks (DNNs) have been proven to outperform classical met...

08/15/2023 - Gradient-Based Post-Training Quantization: Challenging the Status Quo
Quantization has become a crucial step for the efficient deployment of d...

12/04/2015 - Fixed-Point Performance Analysis of Recurrent Neural Networks
Recurrent neural networks have shown excellent performance in many appli...

11/25/2021 - Joint inference and input optimization in equilibrium networks
Many tasks in deep learning involve optimizing over the inputs to a netw...

10/07/2022 - A Closer Look at Hardware-Friendly Weight Quantization
Quantizing a Deep Neural Network (DNN) model to be used on a custom acce...

02/24/2021 - FIXAR: A Fixed-Point Deep Reinforcement Learning Platform with Quantization-Aware Training and Adaptive Parallelism
In this paper, we present a deep reinforcement learning platform named F...

11/07/2016 - Fixed-point Factorized Networks
In recent years, Deep Neural Networks (DNN) based methods have achieved ...
