VAE-LIME: Deep Generative Model Based Approach for Local Data-Driven Model Interpretability Applied to the Ironmaking Industry

07/15/2020
by Cedric Schockaert, et al.

Machine learning models generated from data lack transparency, leading process engineers to lose confidence in relying on model predictions to optimize their industrial processes. Bringing industrial processes to a certain level of autonomy using data-driven models is particularly challenging because the first users of those models are the process experts, often with decades of experience. It is therefore necessary to expose to the process engineer not only the model predictions, but also their interpretability. Several approaches have been proposed in the literature to that end. The Local Interpretable Model-agnostic Explanations (LIME) method has recently gained considerable interest from the research community. Its principle is to train a linear model that locally approximates the black-box model, using artificial data points generated randomly in the neighborhood of the input sample. Model-agnostic local interpretability solutions based on LIME have recently emerged to improve on the original method. In this paper we present a novel approach, VAE-LIME, for local interpretability of data-driven models forecasting the temperature of the hot metal produced by a blast furnace. Such ironmaking process data consists of multivariate time series with high inter-correlation, reflecting the underlying process in the blast furnace. Our contribution is to use a Variational Autoencoder (VAE) to learn the complex characteristics of the blast furnace process from the data. The VAE generates optimal artificial samples for training a local interpretable model that better represents the black-box model in the neighborhood of the input sample being explained. In comparison with LIME, VAE-LIME shows significantly improved local fidelity of the local interpretable linear model to the black-box model, resulting in robust model interpretability.
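The LIME principle described above — perturb the input, query the black-box model, and fit a proximity-weighted linear surrogate — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `black_box` function is a hypothetical stand-in for the hot-metal temperature predictor, and the perturbations are drawn independently per feature, which is exactly the limitation (ignored inter-correlations) that VAE-LIME addresses by sampling from a learned latent space instead.

```python
import numpy as np

def black_box(X):
    # Hypothetical nonlinear model standing in for the temperature predictor.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(x0, n_samples=500, sigma=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME-style).

    Perturbations are independent Gaussians per feature; VAE-LIME would
    instead draw samples from a VAE trained on the process data so that
    inter-correlated features stay physically plausible.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the input locally.
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    # 2. Query the black-box model on the perturbed samples.
    y = black_box(X)
    # 3. Weight samples by proximity to x0 (closer points count more).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    # 4. Weighted least squares via the sqrt-weight trick.
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1], coef[-1]  # local feature attributions, intercept

x0 = np.array([0.3, 1.0])
attributions, intercept = lime_explain(x0)
```

The attributions approximate the local gradient of the black-box model at `x0`; local fidelity can be measured as the weighted error between the surrogate and the black-box predictions on the perturbed samples, which is the metric on which VAE-LIME improves over LIME.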


