Implicit Feature Decoupling with Depthwise Quantization

03/15/2022
by Iordanis Fostiropoulos, et al.

Quantization has been applied to multiple domains in Deep Neural Networks (DNNs). We propose Depthwise Quantization (DQ), in which quantization is applied to a decomposed sub-tensor along the feature axis of weak statistical dependence. The feature decomposition leads to an exponential increase in representation capacity at a linear cost in memory and parameters. In addition, DQ can be applied directly to existing encoder-decoder frameworks without modifying the DNN architecture. We use DQ in the context of a Hierarchical Auto-Encoder and train end-to-end on an image feature representation. We provide an analysis of the cross-correlation between spatial and channel features and propose a decomposition of the image feature representation along the channel axis. The improved performance of the depthwise operator is due to the increased representation capacity from implicit feature decoupling. We evaluate DQ on the likelihood estimation task, where it outperforms the previous state of the art on CIFAR-10, ImageNet-32 and ImageNet-64. Trained progressively with increasing image size, a single hierarchical model uses 69% fewer parameters than previous works.
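The core idea, quantizing each channel group of a feature tensor with its own small codebook, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function `depthwise_quantize`, the codebook sizes, and the nearest-neighbor assignment rule are assumptions chosen for clarity. It shows why capacity grows exponentially while memory grows linearly: `G` groups with `K` entries each represent `K**G` joint codes while storing only `G * K` codebook vectors.

```python
import numpy as np

def depthwise_quantize(features, codebooks):
    """Quantize each channel group of `features` with its own codebook.

    features:  array of shape (H, W, C)
    codebooks: list of G arrays, each (K, C // G); group g of the channel
               axis is replaced by its nearest codebook entry (L2 distance).
    """
    groups = np.split(features, len(codebooks), axis=-1)
    quantized = []
    for sub, cb in zip(groups, codebooks):
        # distances from every (C // G)-dim sub-vector to all K entries
        dists = np.linalg.norm(sub[..., None, :] - cb, axis=-1)
        idx = np.argmin(dists, axis=-1)          # nearest-entry index per position
        quantized.append(cb[idx])                # replace sub-vector by codeword
    return np.concatenate(quantized, axis=-1)

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 4, 8))                      # toy H x W x C feature map
codebooks = [rng.normal(size=(16, 4)) for _ in range(2)]   # G=2 groups, K=16 entries
q = depthwise_quantize(features, codebooks)
print(q.shape)  # (4, 4, 8)
```

With a single joint codebook, matching the `16**2 = 256` joint codes of this toy setup would require storing 256 full 8-dimensional vectors; the depthwise decomposition stores only 2 × 16 half-width entries.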


