
Regularization and False Alarms Quantification: Two Sides of the Explainability Coin

12/02/2020
by Nima Safaei, et al.

Regularization is a well-established technique in machine learning (ML) for achieving an optimal bias-variance trade-off, which in turn reduces model complexity and enhances explainability. To this end, some hyper-parameters must be tuned so that the ML model accurately fits unseen data as well as the seen data. In this article, the authors argue that the regularization of hyper-parameters and the quantification of the costs and risks of false alarms are in reality two sides of the same coin: explainability. An incorrect or non-existent estimate of either quantity undermines the measurability of the economic value of using ML, to the extent that it may become practically useless.
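The two quantities the abstract pairs up can both be sketched numerically: a regularization penalty tuned against held-out data (the bias-variance side), and a decision threshold chosen by the monetary cost of false alarms versus misses (the economic side). The following is a minimal illustration, not taken from the paper; all data points, penalty values, and dollar costs are hypothetical.

```python
import random

random.seed(0)

# --- 1. Regularization: pick the ridge penalty that best fits unseen data ---
# 1-D ridge regression has the closed form w = sum(x*y) / (sum(x*x) + lam).
train = [(x, 2.0 * x + random.gauss(0, 0.5)) for x in range(20)]
valid = [(x + 0.5, 2.0 * (x + 0.5) + random.gauss(0, 0.5)) for x in range(20)]

def ridge_fit(data, lam):
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / (sxx + lam)

def mse(w, data):
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

# Hyper-parameter tuning: choose lam by validation error, not training error.
best_lam = min([0.0, 0.1, 1.0, 10.0, 100.0],
               key=lambda lam: mse(ridge_fit(train, lam), valid))

# --- 2. False alarms: the same trade-off expressed in money terms ---
# Hypothetical costs: each false alarm wastes a $50 inspection, each
# missed event causes $500 of unplanned downtime.
COST_FP, COST_FN = 50.0, 500.0
scores_labels = [(0.9, 1), (0.8, 1), (0.7, 0), (0.4, 1), (0.3, 0), (0.1, 0)]

def expected_cost(threshold):
    fp = sum(1 for s, y in scores_labels if s >= threshold and y == 0)
    fn = sum(1 for s, y in scores_labels if s < threshold and y == 1)
    return COST_FP * fp + COST_FN * fn

# Threshold tuning: choose the operating point with the lowest total cost.
best_t = min([0.2, 0.5, 0.8], key=expected_cost)
print(best_lam, best_t, expected_cost(best_t))
```

Both steps are the same act, model selection against an externally defined loss; if the false-alarm costs are unknown or wrong, the second minimization (and hence the claimed economic value of the model) is meaningless, which is the abstract's point.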
