Explainable, Interpretable Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life

01/17/2023
by Kazuma Kobayashi, et al.

Machine learning (ML) and artificial intelligence (AI) are increasingly used in energy and engineering systems, but these models must be fair, unbiased, and explainable: users need confidence in their trustworthiness. ML techniques have proven useful for predicting important parameters and improving model performance, yet before such predictions can inform decisions they must be auditable, accountable, and easy to understand. Explainable AI (XAI) and interpretable machine learning (IML) are therefore crucial for accurate prognostics, such as remaining useful life (RUL), within an intelligent digital twin system: they ensure that the AI model is transparent in its decision-making and that its predictions can be understood and trusted by users. With explainable, interpretable, and trustworthy AI, intelligent digital twin systems can produce more reliable RUL predictions, leading to better maintenance and repair planning and, ultimately, improved system performance. The objective of this paper is to introduce the ideas of XAI and IML and to justify the important role of ML/AI within the digital twin framework and its components, where XAI is needed to better understand the predictions. The paper explains the importance of XAI and IML at both the local and global levels to ensure trustworthy ML/AI applications for RUL prediction. The RUL prediction task is used as the case study for the XAI and IML analyses, carried out with the integrated Python toolbox for interpretable machine learning (PiML).
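To make the local versus global distinction concrete, the sketch below contrasts the two views on a toy RUL regressor. It is not the paper's PiML workflow: the synthetic features (op_setting, sensor_temp, sensor_vib, cycles), the gradient-boosting model, and the perturbation-based local sensitivity are illustrative assumptions, built only from standard scikit-learn calls.

```python
# Minimal, hypothetical sketch of global vs. local interpretability for an RUL
# regressor. It does NOT reproduce the paper's PiML workflow; the synthetic
# features and the model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic degradation data: RUL shrinks with accumulated cycles and vibration.
X = np.column_stack([
    rng.uniform(0, 1, n),       # op_setting
    rng.normal(600, 15, n),     # sensor_temp
    rng.gamma(2.0, 1.0, n),     # sensor_vib
    rng.uniform(0, 300, n),     # cycles
])
y = 300 - X[:, 3] - 20 * X[:, 2] + rng.normal(0, 5, n)  # remaining useful life

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Global view: permutation importance ranks features by their effect on test error.
names = ["op_setting", "sensor_temp", "sensor_vib", "cycles"]
glob = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(names, glob.importances_mean), key=lambda t: -t[1]):
    print(f"global importance  {name:12s} {imp:8.3f}")

# Local view: one-at-a-time perturbation of a single unit's snapshot shows which
# feature most changes that unit's predicted RUL.
x0 = X_te[0].copy()
base = model.predict(x0.reshape(1, -1))[0]
for j, name in enumerate(names):
    x_pert = x0.copy()
    x_pert[j] += X_tr[:, j].std()           # shift by one training std dev
    delta = model.predict(x_pert.reshape(1, -1))[0] - base
    print(f"local sensitivity  {name:12s} {delta:8.3f}")
```

In a digital twin setting, the same two views would be computed on the trained prognostics model and surfaced to operators alongside each RUL estimate, so that both an individual unit's prediction and the model's overall behavior can be audited.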


Related research

An Explainable Regression Framework for Predicting Remaining Useful Life of Machines (04/28/2022)
Prediction of a machine's Remaining Useful Life (RUL) is one of the key ...

Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology (12/18/2017)
Digital pathology is not only one of the most promising fields of diagno...

Explainable AI for Psychological Profiling from Digital Footprints: A Case Study of Big Five Personality Predictions from Spending Data (11/12/2021)
Every step we take in the digital world leaves behind a record of our be...

Explainable AI via Learning to Optimize (04/29/2022)
Indecipherable black boxes are common in machine learning (ML), but appl...

Leveraging Rationales to Improve Human Task Performance (02/11/2020)
Machine learning (ML) systems across many application areas are increasi...

Uncertainty-aware Remaining Useful Life predictor (04/08/2021)
Remaining Useful Life (RUL) estimation is the problem of inferring how l...

Interpreting Machine Learning Models for Room Temperature Prediction in Non-domestic Buildings (11/23/2021)
An ensuing challenge in Artificial Intelligence (AI) is the perceived di...
