Picking Up Quantization Steps for Compressed Image Classification

04/21/2023
by Li Ma, et al.

The sensitivity of deep neural networks to compressed images hinders their use in many real-world applications: a classification network may fail simply because a screenshot was saved as a compressed file. In this paper, we argue that the disposable coding parameters stored in compressed files, which are usually neglected, can be picked up to reduce the sensitivity of deep neural networks to compressed images. Specifically, we use one representative parameter, the quantization step, to facilitate image classification. First, based on quantization steps, we propose a novel quantization-aware confidence (QAC), which serves as a per-sample weight to reduce the influence of quantization on network training. Second, we use quantization steps to reduce the variance of feature distributions, proposing a quantization-aware batch normalization (QABN) to replace the batch normalization of classification networks. Extensive experiments show that the proposed method significantly improves the performance of classification networks on CIFAR-10, CIFAR-100, and ImageNet. The code is released at https://github.com/LiMaPKU/QSAM.git
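To make the two ideas concrete, here is a minimal PyTorch sketch. The abstract does not specify how quantization steps map to confidence weights or how QABN conditions on them, so the exponential weighting, the tau parameter, and the bucketed-statistics design below are illustrative assumptions, not the authors' method; their actual implementation is in the linked repository.

# Minimal sketch of QAC-style sample weighting and a QABN-style layer.
# The mapping qac_weights() and the per-bucket BN design are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def qac_weights(q_steps: torch.Tensor, tau: float = 50.0) -> torch.Tensor:
    """Hypothetical QAC: heavier quantization -> lower training weight."""
    return torch.exp(-q_steps / tau)

class QuantizationAwareBN2d(nn.Module):
    """Hypothetical QABN: separate BN statistics per quantization bucket,
    so features of lightly and heavily compressed images are not mixed
    into a single running mean/variance."""
    def __init__(self, num_features: int, num_buckets: int = 4):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_buckets)
        )

    def forward(self, x: torch.Tensor, bucket: int) -> torch.Tensor:
        # Assumes the whole batch shares one compression level; per-sample
        # routing would split the batch by bucket instead.
        return self.bns[bucket](x)

# Usage: weight the per-sample classification loss by QAC.
logits = torch.randn(8, 10)                 # stand-in network outputs
labels = torch.randint(0, 10, (8,))
q = torch.randint(1, 100, (8,)).float()     # per-image quantization steps
per_sample = F.cross_entropy(logits, labels, reduction="none")
loss = (qac_weights(q) * per_sample).mean()

Reading the quantization tables themselves from a JPEG file (e.g., to derive q above) is a decoding step left out here; the point of the sketch is only how such a per-image scalar could enter the loss and the normalization layers.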


