Compressing audio CNNs with graph centrality based filter pruning

05/05/2023
by   James A King, et al.

Convolutional neural networks (CNNs) are commonplace in high-performing solutions to many real-world problems, such as audio classification. CNNs have many parameters and filters, some of which have a larger impact on performance than others. Networks may therefore contain many unnecessary filters that increase a CNN's computation and memory requirements while providing limited performance benefits. To make CNNs more efficient, we propose a pruning framework that eliminates filters with the highest "commonality", measured using the graph-theoretic concept of "centrality". We hypothesise that a filter with high centrality should be eliminated: because it represents commonality, it can be replaced by other filters without much effect on the network's performance. An experimental evaluation of the proposed framework is performed on acoustic scene classification and audio tagging. On the DCASE 2021 Task 1A baseline network, our proposed method reduces computations per inference by 71% with 50% fewer parameters, at less than a two percentage point drop in accuracy compared to the original network. For large-scale CNNs such as PANNs, designed for audio tagging, our method reduces computations per inference by 24% with 41% fewer parameters, while slightly improving performance.
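The core idea can be sketched in a few lines of NumPy. This is an illustrative approximation, not the paper's exact procedure: here each filter is a graph node, edge weights are pairwise cosine similarities between flattened filter weights, and centrality is taken as weighted degree (the sum of a node's edge weights); the similarity measure, centrality variant, and pruning ratio are all assumptions for the sketch.

```python
import numpy as np

def prune_by_centrality(filters, prune_ratio=0.5):
    """Return indices of conv filters to keep after centrality pruning.

    Sketch only: nodes are filters, edge weights are |cosine| similarity
    between flattened weights, and a filter's centrality is its weighted
    degree. The most "central" (i.e. most common) filters are pruned.
    """
    n = filters.shape[0]
    flat = filters.reshape(n, -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    sim = np.abs(unit @ unit.T)     # pairwise |cosine| similarity matrix
    np.fill_diagonal(sim, 0.0)      # no self-loops
    centrality = sim.sum(axis=1)    # weighted degree centrality per filter
    n_prune = int(round(prune_ratio * n))
    order = np.argsort(centrality)  # ascending: least central first
    return np.sort(order[: n - n_prune])

# Example: 8 random 3x3 filters over 4 input channels, prune half.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4, 3, 3))
kept = prune_by_centrality(w, prune_ratio=0.5)
print(kept)  # indices of the 4 retained (least central) filters
```

In practice the kept indices would be used to slice the convolution layer's weight tensor (and the matching input channels of the next layer), after which the smaller network is typically fine-tuned to recover accuracy.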


