Vector Quantization for Machine Vision

03/30/2016
by Vincenzo Liguori, et al.

This paper shows how to reduce the computational cost of a variety of common machine vision tasks by operating directly in the compressed domain, particularly in the context of hardware acceleration. Pyramid Vector Quantization (PVQ) is the compression technique of choice, and its properties are exploited to simplify Support Vector Machines (SVMs), Convolutional Neural Networks (CNNs), Histogram of Oriented Gradients (HOG) features, interest point matching and other algorithms.
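For readers unfamiliar with the underlying codebook, PVQ quantizes a vector to the nearest integer lattice point whose absolute values sum to a fixed pulse count K (a point on the "pyramid", i.e. the L1 sphere of radius K). The sketch below is an illustrative greedy encoder, not the paper's implementation; the function name and the scale-floor-then-add-pulses strategy are the author's own assumptions:

```python
import numpy as np

def pvq_quantize(x, K):
    """Map x to an integer vector y with sum(|y_i|) == K (a PVQ codeword).

    Illustrative greedy sketch: scale |x| onto the L1 sphere of radius K,
    take the floor, then place the remaining pulses where the rounding
    error is largest. Not an optimized or normative implementation.
    """
    ax = np.abs(x).astype(float)
    s = ax.sum()
    if s == 0.0:
        # Degenerate input: put all K pulses on the first coordinate.
        y = np.zeros(len(x), dtype=int)
        y[0] = K
        return y
    target = ax * K / s            # projection onto the L1 sphere
    y = np.floor(target).astype(int)
    for _ in range(K - y.sum()):   # distribute the leftover pulses
        y[np.argmax(target - y)] += 1
    return np.sign(x).astype(int) * y

# Example: quantize a small vector with K = 5 pulses.
y = pvq_quantize(np.array([0.5, -0.3, 0.2]), 5)
print(y)                 # an integer codeword
print(np.abs(y).sum())   # always equals K
```

Because every codeword has the same L1 norm, dot products against PVQ-compressed data reduce to small integer additions and sign flips, which is what makes the compressed-domain SVM and CNN evaluations in the paper cheap in hardware.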

