Complex Clipping for Improved Generalization in Machine Learning

02/27/2023
by Les Atlas, et al.

For many machine learning applications, a spectrogram is a common input representation. The spectrogram is derived from the short-time Fourier transform (STFT), which produces complex values; the spectrogram retains only their magnitude, a commonly used detector. Modern machine learning systems are typically overparameterized, and the resulting ill-conditioning is ameliorated by regularization. The rectified linear unit (ReLU) activation function commonly used between layers of a deep network has been shown to aid this regularization and improve performance. We extend the idea of ReLU activation to detection for the complex STFT, yielding a simple-to-compute, regularized modification of the spectrogram that can lead to better-behaved training. We confirm the benefit of this approach on a noisy acoustic data set used in a real-world application, where generalization performance improved substantially. The approach may benefit other applications that use time-frequency mappings, in acoustics, audio, and beyond.
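The abstract does not give the exact definition of the clipping operation, so the following is only a minimal sketch of the general idea: compute a complex STFT, apply a ReLU-style nonlinearity to the complex values before the magnitude detector, and compare against the ordinary spectrogram. The interpretation used here, zeroing the negative real and imaginary parts independently, is a hypothetical stand-in for the paper's actual method, and the `stft`, `clipped_spectrogram`, and window/hop parameters are illustrative choices, not from the paper.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Naive short-time Fourier transform with a Hann window.

    Returns a complex array of shape (num_frames, n_fft // 2 + 1).
    """
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=-1)

def clipped_spectrogram(x, n_fft=256, hop=128):
    """Spectrogram with a ReLU-style clip applied to the complex STFT.

    Hypothetical reading of 'complex clipping': zero the negative real
    and imaginary parts of each STFT value, then take the magnitude.
    """
    Z = stft(x, n_fft, hop)
    Z_clipped = np.maximum(Z.real, 0.0) + 1j * np.maximum(Z.imag, 0.0)
    return np.abs(Z_clipped)

# Compare against the ordinary magnitude spectrogram on a noisy tone.
rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0                      # 0.512 s at 8 kHz
x = np.sin(2 * np.pi * 440.0 * t) + 0.5 * rng.standard_normal(t.size)

S_plain = np.abs(stft(x))          # standard spectrogram
S_clip = clipped_spectrogram(x)    # clipped variant, same shape
```

Because clipping can only shrink the real and imaginary parts, each clipped magnitude is bounded above by the corresponding plain spectrogram value, so the operation acts as a per-bin attenuation rather than a rescaling.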

