Comparing Feature Importance and Rule Extraction for Interpretability on Text Data

07/04/2022
by Gianluigi Lopardo et al.

Complex machine learning algorithms are increasingly used in critical tasks involving text data, driving the development of interpretability methods. Among local methods, two families have emerged: those that compute an importance score for each feature and those that extract simple logical rules. In this paper, we show that different methods can produce unexpectedly different explanations, even when applied to simple models for which we would expect the explanations to qualitatively coincide. To quantify this effect, we propose a new approach for comparing the explanations produced by different methods.
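To make the contrast between the two families concrete, here is a minimal sketch comparing a feature-importance explanation with a rule-based one on a simple text classifier. It assumes the `lime` and `scikit-learn` packages; the rule is hard-coded as a stand-in for an Anchors-style explainer, and the word-overlap score is an illustrative measure, not the comparison approach proposed in the paper.

```python
# A minimal sketch, not the paper's method: train a simple linear text
# classifier, explain one document with LIME (feature-importance family),
# and compare its top words against a hand-supplied rule standing in for
# an Anchors-style explanation (rule-extraction family).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Simple model: TF-IDF + logistic regression on two newsgroups.
cats = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train.data, train.target)

# Feature-importance explanation: top-weighted words for one test document.
doc = fetch_20newsgroups(subset="test", categories=cats).data[0]
lime_exp = LimeTextExplainer(class_names=cats).explain_instance(
    doc, model.predict_proba, num_features=6
)
importance_words = {word for word, weight in lime_exp.as_list()}

# Rule explanation ("IF these words are present THEN predict the class"):
# hard-coded here for illustration; in practice it would come from a rule
# extractor such as Anchors.
rule_words = {"orbit", "launch"}

# Naive agreement score: fraction of rule words among the top LIME features.
# A low score on a simple model illustrates the kind of disagreement the
# paper studies with a more principled metric.
agreement = len(rule_words & importance_words) / len(rule_words)
print("Top LIME words:", sorted(importance_words))
print("Rule words:", sorted(rule_words))
print(f"Overlap agreement: {agreement:.2f}")
```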
