Interpreting Complex Regression Models

02/26/2018
by Noa Avigdor-Elgrabli, et al.

Interpretation of machine-learning-induced models is critical for feature engineering, debugging, and, arguably, compliance. Yet best-of-breed machine learning models tend to be very complex. This paper presents a method for model interpretation whose main benefit is that the simple interpretations it provides are always grounded in actual sets of learning examples. The method is validated on the task of interpreting a complex regression model in the context of both an academic problem, predicting the year in which a song was recorded, and an industrial one, predicting mail user churn.
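To make the idea of example-grounded interpretation concrete, the sketch below is a generic illustration, not the paper's algorithm: a complex regressor is fit to synthetic data, the training examples are partitioned into clusters, and a simple surrogate is fit to the black-box predictions within each cluster, so every interpretation maps back to an explicit set of learning examples. The dataset (make_regression), model choices (GradientBoostingRegressor, KMeans, LinearRegression), and cluster count are all illustrative assumptions.

```python
# Hypothetical sketch of example-grounded interpretation; NOT the paper's method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

# Complex "black-box" regression model on a synthetic task.
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partition the training examples; each cluster is the grounding set
# for one simple interpretation.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

for c in range(5):
    mask = clusters == c
    # Simple surrogate fitted to the black-box predictions on this example set.
    surrogate = LinearRegression().fit(X[mask], black_box.predict(X[mask]))
    top = np.argsort(-np.abs(surrogate.coef_))[:3]
    print(f"cluster {c}: {mask.sum()} examples, top features {top.tolist()}")
```

Each printed line ties a simple, local explanation (the surrogate's dominant features) to the concrete subset of training examples it was derived from, which is the property the abstract highlights.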


