
A Heaviside Function Approximation for Neural Network Binary Classification
Neural network binary classifiers are often evaluated on metrics like accuracy and F1 score, which are based on confusion matrix values (True Positives, False Positives, False Negatives, and True Negatives). However, these classifiers are commonly trained with a different loss, e.g. log loss. While it is preferable to train on the same loss as the evaluation metric, this is difficult for confusion matrix based metrics because set membership is a step function, which has no derivative useful for backpropagation. To address this challenge, we propose an approximation of the step function that adheres to the properties necessary for effective training of binary networks using confusion matrix based metrics. This approach allows for end-to-end training of binary deep neural classifiers via batch gradient descent. We demonstrate the flexibility of this approach in several applications with varying levels of class imbalance. We also demonstrate how the approximation allows balancing between precision and recall in the appropriate ratio for the task at hand.
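The idea above can be sketched in a few lines: replace the hard step (set-membership) function with a smooth surrogate, compute "soft" confusion-matrix entries from it, and minimize one minus the resulting soft F1 score. This is a minimal NumPy sketch assuming a scaled sigmoid as the step approximation; the paper proposes its own approximation with specific properties, so the form and the temperature parameter `k` here are illustrative assumptions.

```python
import numpy as np

def soft_heaviside(z, k=10.0):
    # Scaled sigmoid as a differentiable stand-in for the Heaviside step.
    # (Assumed form for illustration; the paper's approximation differs.)
    return 1.0 / (1.0 + np.exp(-k * z))

def soft_f1_loss(scores, labels, k=10.0):
    # Soft confusion-matrix entries: each example contributes
    # fractionally to TP/FP/FN according to its soft membership.
    p = soft_heaviside(scores, k)      # soft "predicted positive"
    tp = np.sum(p * labels)
    fp = np.sum(p * (1.0 - labels))
    fn = np.sum((1.0 - p) * labels)
    f1 = 2.0 * tp / (2.0 * tp + fp + fn + 1e-8)
    return 1.0 - f1                    # differentiable, so trainable by SGD

scores = np.array([2.0, -1.5, 0.3, -0.2])   # raw classifier logits
labels = np.array([1.0, 0.0, 1.0, 0.0])
loss = soft_f1_loss(scores, labels)
```

Because every term is differentiable in `scores`, the same construction works as a loss inside an autodiff framework. Weighting the `fp` and `fn` terms differently would trade precision against recall, in the spirit of the balancing the abstract describes.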