Loss Functions for Behavioral Game Theory

06/07/2023
by Greg d'Eon, et al.

Behavioral game theorists all use experimental data to evaluate predictive models of human behavior. However, they differ greatly in their choice of loss function for these evaluations, with error rate, negative log-likelihood, cross-entropy, Brier score, and L2 error all being common choices. We attempt to offer a principled answer to the question of which loss functions make sense for this task, formalizing desiderata that we argue loss functions should satisfy. We construct a family of loss functions, which we dub "diagonal bounded Bregman divergences", that satisfy all of these axioms and includes the squared L2 error. In fact, the squared L2 error is the only acceptable loss that is relatively commonly used in practice; we thus recommend its continued use to behavioral game theorists.
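To make the abstract's two key objects concrete, here is a minimal sketch of the squared L2 error and of a generic Bregman divergence, showing that the squared L2 error arises as the Bregman divergence generated by F(p) = ||p||^2. The function names and the simple vector representation of distributions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def squared_l2_loss(pred, obs):
    """Squared L2 error between a predicted distribution over actions
    and the observed empirical action frequencies (both as 1-D arrays)."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return float(np.sum((pred - obs) ** 2))

def bregman_divergence(F, grad_F, p, q):
    """Generic Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>
    for a convex generator F with gradient grad_F."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(F(p) - F(q) - np.dot(grad_F(q), p - q))

# With the generator F(p) = ||p||^2, the Bregman divergence reduces to
# the squared L2 error: D_F(p, q) = ||p - q||^2.
```

For example, with F = lambda x: float(np.sum(x ** 2)) and grad_F = lambda x: 2 * x, bregman_divergence agrees with squared_l2_loss on any pair of distributions.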


