Enhancing Audio Augmentation Methods with Consistency Learning

02/09/2021
by Turab Iqbal, et al.

Data augmentation is an inexpensive way to increase training data diversity and is commonly achieved via transformations of existing data. For tasks such as classification, there is a good case for learning representations of the data that are invariant to such transformations, yet this is not explicitly enforced by classification losses such as the cross-entropy loss. This paper investigates the use of training objectives that explicitly impose this consistency constraint and how it can impact downstream audio classification tasks. In the context of deep convolutional neural networks in the supervised setting, we show empirically that certain measures of consistency are not implicitly captured by the cross-entropy loss and that incorporating such measures into the loss function can improve the performance of audio classification systems. Put another way, we demonstrate how existing augmentation methods can further improve learning by enforcing consistency.
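The idea above — pairing the usual cross-entropy loss with a term that penalises disagreement between a model's predictions on an example and on its augmented version — can be sketched as follows. This is a minimal illustration in NumPy, not the paper's implementation; the choice of mean squared error between class distributions and the weighting `lam` are assumptions for the sketch (other consistency measures, such as KL divergence, fit the same template).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true classes.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def consistency_mse(logits_a, logits_b):
    # Disagreement between the class distributions predicted
    # for two views (e.g. clean and augmented) of the same input.
    return np.mean((softmax(logits_a) - softmax(logits_b)) ** 2)

def combined_loss(logits_clean, logits_aug, labels, lam=1.0):
    # Supervised term on the clean view plus an explicit
    # consistency penalty, weighted by the hyperparameter lam.
    return cross_entropy(logits_clean, labels) \
        + lam * consistency_mse(logits_clean, logits_aug)
```

When the two views yield identical predictions the consistency term vanishes and the objective reduces to plain cross-entropy; otherwise the extra term pushes the network toward representations that are invariant to the augmentation, which the classification loss alone does not enforce.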


