Is explainable AI a race against model complexity?

05/17/2022
by Advait Sarkar, et al.

Explaining the behaviour of intelligent systems will become increasingly, and perhaps intractably, challenging as models grow in size and complexity. We may not be able to expect an explanation for every prediction made by a brain-scale model, nor can we expect explanations to remain objective or apolitical. Our functionalist understanding of these models is of less advantage than we might assume. Models precede explanations, and can be useful even when both model and explanation are incorrect. Explainability may never win the race against complexity, but this is less problematic than it seems.


