Model Interpretability: A Survey of Methods and an Application to Insurance

07/25/2020
by Dimitri Delcaillau, et al.

Since May 2018, the General Data Protection Regulation (GDPR) has imposed new obligations on industry. By setting a legal framework, it notably requires strong transparency in the use of personal data: individuals must be informed of how their data are used and must consent to that use. Data is the raw material of many models that today improve the quality and performance of digital services. Transparency about the use of data therefore also requires a good understanding of how it flows through the models that consume it. The use of a model, however efficient, must be accompanied by an understanding of every stage of the process that transforms the data (both upstream and downstream of the model), making it possible to trace the relationship between an individual's data and the choice an algorithm makes from them, such as the recommendation of a product or promotional offer, or an insurance rate representative of the risk. Model users must ensure that models do not discriminate and that their results can be explained. The widening panel of predictive algorithms, made possible by the growth of computing capacity, leads scientists to be vigilant about the use of models and to develop new tools to better understand the decisions derived from them. The research community has been particularly active on model transparency recently, with a marked intensification of publications over the past three years. The increasingly frequent use of complex, high-performing algorithms (deep learning, XGBoost, etc.) is undoubtedly one cause of this interest. This article therefore presents a survey of model interpretability methods and their uses in an insurance context.
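To make the idea concrete, here is a minimal sketch of one model-agnostic interpretability method of the kind such a survey covers: permutation feature importance, which measures how much a model's error grows when one feature is shuffled. The toy "insurance premium" model and the feature names (age, vehicle power, region) are illustrative assumptions, not taken from the article.

```python
import random

random.seed(0)

# Toy data set: age and vehicle power drive the premium; region is noise.
n = 200
X = [[random.uniform(18, 80),    # age
      random.uniform(40, 200),   # vehicle power
      random.uniform(0, 1)]      # region code (ignored by the model)
     for _ in range(n)]
y = [2.0 * age + 1.5 * power + random.gauss(0, 5) for age, power, _ in X]

def model(row):
    """Stand-in for a fitted black-box model (here, the true formula)."""
    age, power, _ = row
    return 2.0 * age + 1.5 * power

def mse(X, y):
    """Mean squared error of the model on a data set."""
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Increase in error when one feature column is randomly shuffled."""
    base = mse(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return mse(X_perm, y) - base

importances = [permutation_importance(X, y, j) for j in range(3)]
print(importances)  # age and power matter; region barely moves the error
```

Because the method only needs predictions, it applies unchanged to an XGBoost model or a neural network: replace `model` with the fitted predictor and the importances quantify which policyholder attributes drive the rate.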
