Knowledge-intensive Language Understanding for Explainable AI

08/02/2021
by Amit Sheth, et al.

AI systems have seen significant adoption in many domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm humans. Besides concerns about fairness and privacy, transparency and explainability are key to developing trust in AI systems. As one description of trustworthy AI puts it, "Trust comes through understanding. How AI-led decisions are made and what determining factors were included are crucial to understand." The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained, including biases in the data, a lack of data points in a particular region of the example space, the fairness of the data-gathering process, feature importance, etc. Beyond these, however, it is critical to have human-centered explanations that are directly related to decision-making, similar to how a domain expert makes decisions based on "domain knowledge" that includes well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classifications, recommendations, and predictions), and thereby develop trust in the system, it is necessary to involve explicit domain knowledge that humans understand and use.

