A Channel-Pruned and Weight-Binarized Convolutional Neural Network for Keyword Spotting

09/12/2019
by Jiancheng Lyu, et al.

We study channel number reduction in combination with weight binarization (1-bit weight precision) to trim a convolutional neural network for a keyword spotting (classification) task. We adopt a group-wise splitting method based on the group Lasso penalty to achieve over 50% channel sparsity while keeping the network performance within 0.25% of the baseline accuracy, using a three-stage procedure to balance accuracy and sparsity in network training.
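The two ingredients the abstract combines, a group Lasso penalty that drives whole output channels to zero and 1-bit weight binarization, can be sketched compactly. Below is a minimal PyTorch illustration, not the authors' code: the paper trains with a group-wise splitting method built on the group Lasso penalty, whereas this sketch takes the simpler route of adding the plain group Lasso penalty directly to the task loss. The layer sizes, the 12 keyword classes, the input shape, and the penalty weight lam are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(w). Backward: straight-through (clipped identity)."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients only where the latent float weight is in [-1, 1].
        return grad_out * (w.abs() <= 1).float()


class BinaryConv2d(nn.Conv2d):
    """Convolution whose weights are binarized to +/-1 (times a scale)."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        alpha = self.weight.abs().mean()  # per-layer scale, XNOR-Net style
        return F.conv2d(x, alpha * w_bin, self.bias,
                        self.stride, self.padding)


def group_lasso_penalty(conv):
    """Sum of L2 norms of the per-output-channel filter groups.

    Treating each output channel's (C_in x k x k) filter as one group makes
    the penalty shrink whole channels toward zero, enabling channel pruning
    rather than unstructured weight sparsity.
    """
    w = conv.weight  # shape (C_out, C_in, k, k)
    return w.view(w.size(0), -1).norm(dim=1).sum()


# Sparse-training step: task loss plus the group Lasso penalty on the
# latent float weights of every conv layer (values below are assumed).
model = nn.Sequential(
    BinaryConv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 12),
)
lam = 1e-4
x = torch.randn(8, 1, 40, 101)   # batch of MFCC-like feature maps (assumed)
y = torch.randint(0, 12, (8,))
loss = F.cross_entropy(model(x), y)
loss = loss + lam * sum(group_lasso_penalty(m)
                        for m in model.modules()
                        if isinstance(m, nn.Conv2d))
loss.backward()
```

A plausible reading of the three-stage procedure on top of this sketch: train with the penalty so that some channels' latent group norms collapse, threshold and remove those channels to obtain a slimmer architecture, then retrain the pruned binarized network to recover accuracy.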

Related research:

01/24/2019
AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks
ShuffleNet is a state-of-the-art lightweight convolutional neural netwo...

08/13/2020
Weight Equalizing Shift Scaler-Coupled Post-training Quantization
Post-training, layer-wise quantization is preferable because it is free ...

03/01/2021
SWIS – Shared Weight bIt Sparsity for Efficient Neural Network Acceleration
Quantization is spearheading the increase in performance and efficiency ...

04/09/2022
Channel Pruning In Quantization-aware Training: An Adaptive Projection-gradient Descent-shrinkage-splitting Method
We propose an adaptive projection-gradient descent-shrinkage-splitting m...

12/17/2019
ℓ_0 Regularized Structured Sparsity Convolutional Neural Networks
Deepening and widening convolutional neural networks (CNNs) significantl...

10/30/2018
JavaScript Convolutional Neural Networks for Keyword Spotting in the Browser: An Experimental Analysis
Used for simple commands recognition on devices from smart routers to mo...

11/07/2018
Median Binary-Connect Method and a Binary Convolutional Neural Network for Word Recognition
We propose and study a new projection formula for training binary weight...
