Explainable Machine Learning for Hydrocarbon Prospect Risking

12/15/2022
by Ahmad Mustafa, et al.

Hydrocarbon prospect risking is a critical application in geophysics that predicts well outcomes from a variety of data sources, including geological, geophysical, and other information modalities. Traditional workflows require interpreters to go through a lengthy process to arrive at the probability of success of specific outcomes. AI has the capability to automate this process, but its adoption has been limited thus far owing to a lack of transparency in the way complicated, black-box models generate decisions. We demonstrate how LIME, a model-agnostic explanation technique, can be used to instill trust in model decisions by uncovering the model's reasoning process for individual predictions. LIME generates these explanations by fitting interpretable models in the local neighborhood of the specific data points being queried. On a dataset of well outcomes and corresponding geophysical attribute data, we show how LIME can build trust in a model's decisions by revealing a decision-making process aligned with domain knowledge. Further, it has the potential to debug mispredictions caused by anomalous patterns in the data or faulty training datasets.
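The core LIME procedure described above (perturb a query point, get the black-box model's predictions on the perturbed samples, weight them by proximity, and fit an interpretable surrogate whose coefficients serve as the explanation) can be sketched as follows. This is a minimal, from-scratch illustration on synthetic data, not the paper's actual dataset or the `lime` library's implementation; the two "geophysical attributes" and the kernel settings are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for a prospect-risking dataset: two hypothetical
# geophysical attributes, with the well outcome driven mostly by the first.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_explain(model, x, n_samples=1000, kernel_width=0.75):
    """LIME-style local explanation: fit a weighted linear surrogate
    to the black box's predictions in the neighborhood of x."""
    # 1. Perturb the query point to sample its local neighborhood.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box for class-1 (success) probabilities.
    p = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

coefs = lime_explain(black_box, np.array([0.2, -0.3]))
print(coefs)
```

Because the synthetic label depends almost entirely on the first attribute, the surrogate's first coefficient should dominate, which is exactly the kind of alignment with domain knowledge the abstract describes checking for.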

