Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning

03/05/2021
by Michel Denuit, et al.

Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. In practice, however, there are often lengthy debates about the right loss function for training the machine learning model, as well as about the appropriate metric for assessing the performance of competing models. Moreover, the sum of fitted values can depart substantially from the observed totals, which often confuses actuarial analysts. The lack of balance inherent in training models by minimizing deviance outside the familiar GLM-with-canonical-link setting was empirically documented by Wüthrich (2019, 2020), who attributes it to the early stopping rule used in gradient descent methods for model fitting. The present paper studies this phenomenon further when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy: this new bias-correction method adds an extra local GLM step to the analysis. Theoretically, it is shown to implement the autocalibration concept in pure premium calculation and to ensure that balance holds not only at the portfolio level, as with existing bias-correction techniques, but also on a local scale. The convex order appears to be the natural tool for comparing competing models, shedding new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).
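
For concreteness, the Tweedie unit deviance minimized during training is, for power parameter $p \notin \{1, 2\}$,

$$ d_p(y, \mu) = 2\left( \frac{y^{2-p}}{(1-p)(2-p)} - \frac{y\,\mu^{1-p}}{1-p} + \frac{\mu^{2-p}}{2-p} \right), $$

with the Poisson ($p = 1$) and Gamma ($p = 2$) deviances recovered as limiting cases.

The autocalibration remedy can also be sketched in code. A predictor $\pi$ is autocalibrated when $E[Y \mid \pi(X)] = \pi(X)$, so a simple empirical correction replaces each fitted premium by the average observed response among policies with similar fitted premiums. The snippet below is a minimal sketch of that idea using quantile binning as a crude stand-in for the paper's local GLM step; all names and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def autocalibrate(preds, y, n_bins=20):
    """Crude empirical autocalibration: map each fitted premium to the
    mean observed response in its quantile bin, an estimate of E[Y | pi(X)].
    A binned stand-in for the paper's local GLM correction step."""
    # Bin policies by quantiles of the fitted premiums (roughly equal-sized bins).
    edges = np.quantile(preds, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, preds, side="right") - 1, 0, n_bins - 1)
    # Within each bin, corrected premiums equal the observed mean, so balance
    # holds locally (per bin) and hence globally, by construction.
    bin_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return bin_means[idx]

# Toy usage with a deliberately biased model (fitted = 0.8 * true mean).
rng = np.random.default_rng(0)
mu = rng.gamma(shape=2.0, scale=1.0, size=10_000)  # hypothetical true pure premiums
y = rng.poisson(mu)                                # observed claim counts
preds = 0.8 * mu                                   # biased fitted values
corrected = autocalibrate(preds, y)
print(round(preds.sum()), round(corrected.sum()), y.sum())
```

In the toy run, the corrected premiums restore the global balance that the biased model lost, and enforce it within each bin of similar fitted premiums as well, illustrating the local (not merely portfolio-level) balance property discussed in the abstract.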
