Learning Activation Functions for Sparse Neural Networks

05/18/2023
by Mohammad Loni, et al.

Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses show that the accuracy drop can additionally be attributed to (i) uniformly using ReLU as the default choice of activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. We therefore focus on learning a novel way to tune activation functions for sparse networks and combine this with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on the MNIST, CIFAR-10, and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS for short), yields up to a 15.53% accuracy improvement for LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially at high pruning ratios. Our code can be found at https://github.com/automl/SAFS.
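The abstract does not include implementation details, but the core idea of pairing a pruned network with its own activation function (rather than defaulting to ReLU) and its own fine-tuning recipe can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors' SAFS algorithm: it assumes global magnitude pruning, an illustrative candidate set of activations, and a hypothetical `evaluate` routine that fine-tunes the sparse model with separately tuned hyperparameters and returns validation accuracy.

import copy
import torch.nn as nn
import torch.nn.utils.prune as prune


def magnitude_prune(model: nn.Module, amount: float) -> nn.Module:
    """Globally prune the smallest-magnitude weights in Linear/Conv2d layers."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Linear, nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    return model


def replace_activations(model: nn.Module, act_factory) -> nn.Module:
    """Swap every nn.ReLU module for a new activation produced by act_factory."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, act_factory())
        else:
            replace_activations(child, act_factory)
    return model


# Illustrative candidate activations (an assumption, not the paper's search space).
CANDIDATES = {
    "relu": nn.ReLU,
    "swish": nn.SiLU,
    "gelu": nn.GELU,
    "elu": nn.ELU,
}


def search_activation(dense_model: nn.Module, pruning_ratio: float, evaluate):
    """Return the best (activation name, fine-tuned sparse model) pair.

    `evaluate(model)` is assumed to fine-tune the sparse model with its own
    HPO-tuned hyperparameters and return validation accuracy.
    """
    best_name, best_acc, best_model = None, -1.0, None
    for name, act in CANDIDATES.items():
        sparse = magnitude_prune(copy.deepcopy(dense_model), pruning_ratio)
        sparse = replace_activations(sparse, act)
        acc = evaluate(sparse)
        if acc > best_acc:
            best_name, best_acc, best_model = name, acc, sparse
    return best_name, best_model

In this sketch, the per-candidate loop stands in for the paper's search over activation functions, and the assumed `evaluate` callback stands in for fine-tuning the sparse network under a hyperparameter regime tuned separately from the dense baseline.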

Related research

02/17/2020
Evolutionary Optimization of Deep Learning Activation Functions
The choice of activation function can have a large effect on the perform...

02/05/2022
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
Random pruning is arguably the most naive way to attain sparsity in neur...

10/16/2017
Searching for Activation Functions
The choice of activation functions in deep networks has a significant ef...

04/08/2021
Learning specialized activation functions with the Piecewise Linear Unit
The choice of activation functions is crucial for modern deep neural net...

12/18/2020
ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on
Virtual try-on has garnered interest as a neural rendering benchmark tas...

12/09/2022
AP: Selective Activation for De-sparsifying Pruned Neural Networks
The rectified linear unit (ReLU) is a highly successful activation funct...

02/02/2021
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
Training sparse networks to converge to the same performance as dense ne...
