Augmentations: An Insight into their Effectiveness on Convolution Neural Networks

05/09/2022
by Sabeesh Ethiraj, et al.

Augmentations are a key factor in determining the performance of any neural network, as they give a model a critical edge in boosting its performance. Their ability to improve a model's robustness depends on two factors, namely the model architecture and the type of augmentations. Augmentations are highly dataset-specific, and not every kind of augmentation will necessarily have a positive effect on a model's performance. Hence there is a need to identify augmentations that perform consistently well across a variety of datasets and that remain invariant to the type of architecture, the convolutions, and the number of parameters used. This paper evaluates the effect of parameter count, using 3x3 and depth-wise separable convolutions, on different augmentation techniques on the MNIST, FMNIST, and CIFAR10 datasets. Statistical evidence shows that techniques such as Cutout and random horizontal flip were consistent on both parametrically low and high architectures. Depth-wise separable convolutions outperformed 3x3 convolutions at higher parameter counts due to their ability to create deeper networks. Augmentations narrowed the accuracy gap between the 3x3 and depth-wise separable convolutions, thus establishing their role in model generalization. At higher parameter counts, augmentations did not produce a significant change in performance. The synergistic effect of multiple augmentations at higher parameter counts, and their antagonistic effect at lower parameter counts, was also evaluated. The work shows that a delicate balance between architectural supremacy and augmentations must be struck to enhance a model's performance in any given deep learning task.
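The paper itself publishes no code, but the two ideas the abstract highlights are easy to sketch. The snippet below is a minimal illustration assuming PyTorch and torchvision (all names here are illustrative, not the authors' code): it builds the flip-plus-Cutout augmentation pipeline, using torchvision's built-in RandomErasing as a stand-in for Cutout, and contrasts a plain 3x3 convolution with a depth-wise separable block.

import torch
import torch.nn as nn
from torchvision import transforms

# Augmentations the paper found consistent: random horizontal flip
# plus a Cutout-style occlusion. torchvision has no Cutout transform,
# so RandomErasing (applied after ToTensor) is the closest built-in.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.1)),
])

# A standard dense 3x3 convolution, for comparison.
def conv3x3(in_ch, out_ch):
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

# Depth-wise separable convolution: a per-channel 3x3 filter
# (groups=in_ch) followed by a 1x1 pointwise mix. For 32 -> 64
# channels this needs 2,432 parameters versus 18,496 for the dense
# 3x3, which is why deeper networks fit the same parameter budget.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 28, 28)                  # e.g. a 28x28 feature map
print(DepthwiseSeparableConv(32, 64)(x).shape)  # -> (1, 64, 28, 28)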

