Empirical Evaluation of Deep Learning Model Compression Techniques on the WaveNet Vocoder

11/20/2020
by Sam Davis et al.

WaveNet is a state-of-the-art text-to-speech vocoder that remains challenging to deploy due to its autoregressive loop. In this work we focus on accelerating the original WaveNet architecture directly, rather than modifying it, so that the model can be deployed as part of a scalable text-to-speech system. We survey a wide variety of model compression techniques that are amenable to deployment on a range of hardware platforms. In particular, we compare different model sparsity methods and levels, as well as seven widely used precisions as targets for quantization, and achieve models with a compression ratio of up to 13.84 without loss in audio fidelity compared to a dense, single-precision floating-point baseline. All techniques are implemented using existing open-source deep learning frameworks and libraries to encourage their wider adoption.
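As a rough illustration of the two compression families the abstract compares, weight sparsity and reduced-precision quantization, the sketch below applies unstructured magnitude pruning and post-training dynamic quantization using standard PyTorch utilities. It is not the paper's implementation: the layer shapes, the 90% sparsity level, and the int8 target are illustrative assumptions, whereas the paper evaluates several sparsity methods and seven precisions.

# Sketch only: magnitude pruning plus post-training quantization in PyTorch.
# Layer sizes, the 0.9 sparsity level, and the int8 target are illustrative
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a single dilated convolution of the kind used in WaveNet's
# residual blocks (channel counts are hypothetical).
conv = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=2, dilation=2)

# Unstructured magnitude pruning: zero the 90% smallest-magnitude weights,
# then bake the mask into the weight tensor.
prune.l1_unstructured(conv, name="weight", amount=0.9)
prune.remove(conv, "weight")
sparsity = (conv.weight == 0).float().mean().item()
print(f"conv weight sparsity: {sparsity:.1%}")

# Post-training dynamic quantization to int8 weights, applied to a small
# stand-in network (PyTorch's dynamic quantization targets nn.Linear layers).
net = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
quantized_net = torch.quantization.quantize_dynamic(net, {nn.Linear}, dtype=torch.qint8)
print(quantized_net)

A compression ratio in the spirit of the abstract's 13.84 figure would come from combining such sparsity with a lower-precision storage format, measured against the dense single-precision baseline.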

Related research

01/17/2020 · FRaZ: A Generic High-Fidelity Fixed-Ratio Lossy Compression Framework for Scientific Floating-point Data
With ever-increasing volumes of scientific floating-point data being pro...

05/12/2017 · CLBlast: A Tuned OpenCL BLAS Library
This work demonstrates how to accelerate dense linear algebra computatio...

04/20/2020 · Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation
Quantization techniques can reduce the size of Deep Neural Networks and ...

09/16/2020 · MSP: An FPGA-Specific Mixed-Scheme, Multi-Precision Deep Neural Network Quantization Framework
With the tremendous success of deep learning, there exists imminent need...

04/30/2021 · Stealthy Backdoors as Compression Artifacts
In a backdoor attack on a machine learning model, an adversary produces ...

10/11/2022 · Deep learning model compression using network sensitivity and gradients
Deep learning model compression is an improving and important field for ...