On Quantizing Implicit Neural Representations

09/01/2022
by Cameron Gordon, et al.

The role of quantization within implicit/coordinate neural networks is still not fully understood. We note that using a canonical fixed quantization scheme during training produces poor performance at low rates, because the network weight distributions change over the course of training. In this work, we show that non-uniform quantization of neural weights can lead to significant improvements. Specifically, we demonstrate that clustered quantization enables improved reconstruction. Finally, by characterising a trade-off between quantization and network capacity, we demonstrate that it is possible (though memory inefficient) to reconstruct signals using binary neural networks. We demonstrate our findings experimentally on 2D image reconstruction and 3D radiance fields, and show that simple quantization methods and architecture search can compress NeRF to less than 16 kB with minimal loss in performance (323× smaller than the original NeRF).
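To make the idea of clustered (non-uniform) weight quantization concrete, here is a minimal sketch, not the authors' code: it k-means-clusters a weight tensor's values into a small codebook and replaces each weight with its nearest cluster centre. The function name `cluster_quantize`, the use of scikit-learn's KMeans, and the 16-cluster setting are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_quantize(weights: np.ndarray, n_clusters: int = 256, seed: int = 0):
    """Quantize a weight tensor by k-means clustering its values.

    Returns the dequantized tensor, the codebook (cluster centres),
    and the integer code assigned to each weight.
    """
    flat = weights.reshape(-1, 1)                   # treat each weight as a 1-D sample
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(flat)
    codebook = km.cluster_centers_.ravel()          # one float per cluster
    codes = km.labels_.astype(np.uint8 if n_clusters <= 256 else np.uint16)
    dequantized = codebook[codes].reshape(weights.shape)
    return dequantized, codebook, codes

# Example: quantize a toy 64x64 layer to 16 clusters (~4 bits per weight,
# before any bit-packing or entropy coding of the codes).
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_q, codebook, codes = cluster_quantize(w, n_clusters=16)

bits_per_weight = np.ceil(np.log2(len(codebook)))
print(f"reconstruction MSE: {np.mean((w - w_q) ** 2):.6f}")
print(f"~{bits_per_weight:.0f} bits/weight + {codebook.nbytes} B codebook overhead")
```

Taken to the extreme, a two-cluster codebook approximates the binary-network setting described above: per-weight storage drops to one bit, but, per the capacity trade-off the abstract characterises, a larger network is generally needed to reach the same reconstruction quality.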


