
Optimizing Shallow Networks for Binary Classification

by Kalliopi Basioti, et al.

Data-driven classification with neural networks relies on optimization criteria that involve some form of distance between the network output and the desired label. Using the same mathematical analysis, we can show for a multitude of such measures that their optimum solution matches the ideal likelihood ratio test classifier. In this work we introduce a different family of optimization problems that is not covered by the existing approaches and therefore opens possibilities for new training algorithms for neural network based classification. We give examples that lead to algorithms that are simple to implement, exhibit stable convergence characteristics, and are competitive with the most popular existing techniques.
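To make the baseline concrete, here is a minimal sketch (not the paper's new family of criteria) of the standard setup the abstract refers to: a one-hidden-layer network trained with cross-entropy on a toy 1D Gaussian mixture. The cross-entropy minimizer approximates the posterior P(y=1|x), a monotone function of the likelihood ratio, so thresholding the output at 0.5 mimics the likelihood ratio test. All names, sizes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1).
n = 2000
x0 = rng.normal(-1.0, 1.0, n)
x1 = rng.normal(+1.0, 1.0, n)
X = np.concatenate([x0, x1])[:, None]          # shape (2n, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])  # binary labels

# Shallow network: 1 -> 8 -> 1, ReLU hidden units, sigmoid output.
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)           # ReLU hidden layer
    z = np.clip((h @ W2 + b2).ravel(), -30, 30)
    return h, 1.0 / (1.0 + np.exp(-z))         # sigmoid output in (0, 1)

lr = 0.5
for _ in range(1500):
    h, p = forward(X)
    # Gradient of mean cross-entropy w.r.t. the logit is (p - y) / N.
    g = (p - y)[:, None] / len(y)
    gh = (g @ W2.T) * (h > 0)                  # backprop through the ReLU
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

_, p = forward(X)
acc = np.mean((p > 0.5) == (y == 1))
# Bayes-optimal accuracy for this mixture is Phi(1), about 0.84.
print(f"training accuracy: {acc:.3f}")
```

With enough training, the sigmoid output tracks the posterior, so the 0.5 threshold recovers the likelihood ratio decision rule; the paper's point is that an entire additional family of criteria, beyond distances like this one, shares that optimum.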



