Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions

06/24/2018
by   Junru Wu, et al.

The current trend of pushing CNNs deeper with convolutions has created a pressing demand for higher compression gains on models in which convolutions dominate both the computation and the parameter count (e.g., GoogLeNet, ResNet, and Wide ResNet). Moreover, the high energy consumption of convolutions limits their deployment on mobile devices. To this end, we propose a simple yet effective scheme for compressing convolutions by applying k-means clustering to the weights: compression is achieved through weight sharing, recording only K cluster centers and per-weight assignment indexes. We then introduce a novel spectrally relaxed k-means regularization, which encourages hard assignments of convolutional-layer weights to K learned cluster centers during re-training. We additionally propose an improved set of metrics for estimating the energy consumption of CNN hardware implementations, whose estimates are verified to be consistent with a previously proposed energy estimation tool extrapolated from actual hardware measurements. Finally, we evaluate Deep k-Means across several CNN models in terms of both compression ratio and energy consumption reduction, observing promising results without incurring accuracy loss. The code is available at https://github.com/Sandbox3aster/Deep-K-Means
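As a concrete illustration of the weight-sharing scheme the abstract describes, the sketch below clusters a convolutional layer's weights with k-means and stores only the K centers plus one small index per weight. This is a minimal sketch, not the paper's implementation: the layer shape, the choice K = 16, and the helper names (compress_conv_weights, decompress_conv_weights) are illustrative assumptions, and the re-training step with the spectrally relaxed regularizer is omitted.

```python
# Minimal sketch of k-means weight sharing for a conv layer (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def compress_conv_weights(weights, k=16, seed=0):
    """Cluster the flattened weights into k centers; return (centers, indexes)."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(flat)
    centers = km.cluster_centers_.ravel()   # K shared weight values
    indexes = km.labels_.astype(np.uint8)   # one small index per weight
    return centers, indexes

def decompress_conv_weights(centers, indexes, shape):
    """Rebuild the weight tensor by looking up each index's cluster center."""
    return centers[indexes].reshape(shape)

# Hypothetical 3x3 conv layer with 64 input and 64 output channels.
w = np.random.randn(64, 64, 3, 3).astype(np.float32)
centers, idx = compress_conv_weights(w, k=16)
w_hat = decompress_conv_weights(centers, idx, w.shape)
print("mean abs quantization error:", np.abs(w - w_hat).mean())
```

With K = 16 centers, each stored index needs only log2(16) = 4 bits instead of a 32-bit float (plus the K centers themselves), which is where the compression gain comes from; the re-training with hard cluster assignments described in the abstract is what limits the resulting accuracy loss.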


Related research:

11/16/2016 · Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
Deep convolutional neural networks (CNNs) are indispensable to state-of-...

06/08/2020 · EDCompress: Energy-Aware Model Compression with Dataflow
Edge devices demand low energy consumption, cost and small form factor. ...

06/17/2020 · Optimizing Grouped Convolutions on Edge Devices
When deploying a deep neural network on constrained hardware, it is poss...

10/01/2018 · Extended Bit-Plane Compression for Convolutional Neural Network Accelerators
After the tremendous success of convolutional neural networks in image c...

11/01/2017 · Minimum Energy Quantized Neural Networks
This work targets the automated minimum-energy optimization of Quantized...

07/23/2019 · RRNet: Repetition-Reduction Network for Energy Efficient Decoder of Depth Estimation
We introduce Repetition-Reduction network (RRNet) for resource-constrain...

07/31/2022 · A Local-Ratio-Based Power Control Approach for Capacitated Access Points in Mobile Edge Computing
Terminal devices (TDs) connect to networks through access points (APs) i...
