ILMART: Interpretable Ranking with Constrained LambdaMART

06/01/2022
by   Claudio Lucchese, et al.

Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming at developing intelligible and accurate predictive models. While most previous research efforts focus on creating post-hoc explanations, in this paper we investigate how to train effective and intrinsically-interpretable ranking models. Developing these models is particularly challenging, as it requires finding a trade-off between ranking quality and model complexity. State-of-the-art rankers, made of either large ensembles of trees or several neural layers, in fact exploit an unlimited number of feature interactions, making them black boxes. Previous approaches to intrinsically-interpretable ranking models address this issue by avoiding interactions between features, thus incurring a significant performance drop with respect to full-complexity models. Conversely, ILMART, our novel and interpretable LtR solution based on LambdaMART, trains effective and intelligible models by exploiting a limited and controlled number of pairwise feature interactions. Exhaustive and reproducible experiments conducted on three publicly-available LtR datasets show that ILMART outperforms the current state-of-the-art solution for interpretable ranking by a large margin, with an nDCG gain of up to 8%.
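The core idea in the abstract — boosting trees whose splits are confined first to single features (main effects) and then to a small set of allowed feature pairs (limited pairwise interactions) — can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation: it uses plain squared-error gradient boosting instead of LambdaMART's listwise gradients, and the function names and training schedule are our own assumptions.

```python
# Hypothetical sketch of ILMART-style interaction-constrained boosting.
# Stage 1 grows depth-1 trees on single features (main effects); stage 2
# grows depth-2 trees each restricted to one allowed feature pair, so the
# ensemble never models more than pairwise feature interactions.

def fit_tree(X, resid, feats, depth):
    """Greedy regression tree on the residuals, restricted to `feats`."""
    mean = sum(resid) / len(resid)
    if depth == 0:
        return lambda row: mean
    best = None  # (sse, feature, threshold, left_idx, right_idx)
    for f in feats:
        for t in sorted({row[f] for row in X})[:-1]:
            li = [i for i, row in enumerate(X) if row[f] <= t]
            ri = [i for i, row in enumerate(X) if row[f] > t]
            lm = sum(resid[i] for i in li) / len(li)
            rm = sum(resid[i] for i in ri) / len(ri)
            sse = (sum((resid[i] - lm) ** 2 for i in li)
                   + sum((resid[i] - rm) ** 2 for i in ri))
            if best is None or sse < best[0]:
                best = (sse, f, t, li, ri)
    if best is None:  # no valid split within the allowed features
        return lambda row: mean
    _, f, t, li, ri = best
    left = fit_tree([X[i] for i in li], [resid[i] for i in li], feats, depth - 1)
    right = fit_tree([X[i] for i in ri], [resid[i] for i in ri], feats, depth - 1)
    return lambda row: left(row) if row[f] <= t else right(row)

def fit_constrained(X, y, main_feats, pairs, rounds=3, lr=0.5):
    """Boost main-effect stumps first, then depth-2 trees on allowed pairs."""
    trees, pred = [], [0.0] * len(y)
    schedule = ([([f], 1) for f in main_feats] * rounds
                + [(list(p), 2) for p in pairs] * rounds)
    for feats, depth in schedule:
        resid = [yi - pi for yi, pi in zip(y, pred)]
        tree = fit_tree(X, resid, feats, depth)
        trees.append(tree)
        pred = [pi + lr * tree(row) for pi, row in zip(pred, X)]
    return lambda row: sum(lr * t(row) for t in trees)
```

Because every tree sees at most two features, each one can be plotted as a 1-D curve or a 2-D heatmap, which is what makes the model intelligible while still capturing a controlled number of interactions.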


Related research:

- 05/06/2020 · Interpretable Learning-to-Rank with Generalized Additive Models — Interpretability of learning-to-rank models is a crucial yet relatively ...
- 07/01/2023 · The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations — Explainable Artificial Intelligence (XAI) plays a crucial role in enabli...
- 06/29/2018 · Posthoc Interpretability of Learning to Rank Models using Secondary Training Data — Predictive models are omnipresent in automated and assisted decision mak...
- 12/15/2020 · Explainable Recommendation Systems by Generalized Additive Models with Manifest and Latent Interactions — In recent years, the field of recommendation systems has attracted incre...
- 02/28/2019 · SAFE ML: Surrogate Assisted Feature Extraction for Model Learning — Complex black-box predictive models may have high accuracy, but opacity ...
- 05/13/2022 · Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study — An important limitation to the development of AI-based solutions for In ...
- 03/25/2021 · Interpretable Approximation of High-Dimensional Data — In this paper we apply the previously introduced approximation method ba...
