Seven challenges for harmonizing explainability requirements

08/11/2021
by Jiahao Chen, et al.

Regulators have signalled an interest in adopting explainable AI (XAI) techniques to address the diverse needs for model governance, operational servicing, and compliance in the financial services industry. In this short overview, we review the recent technical literature on XAI and argue that, based on our current understanding of the field, the use of XAI techniques in practice necessitates a highly contextualized approach that considers the specific needs of stakeholders for particular business applications.


Related research

09/12/2018: Fair lending needs explainable models for responsible recommendation
The financial services industry has unique explainability and fairness c...

06/01/2023: Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
The recent development of generative and large language models (LLMs) po...

04/08/2021: Question-Driven Design Process for Explainable AI User Experiences
A pervasive design issue of AI systems is their explainability–how to pr...

08/11/2021: Ontology drift is a challenge for explainable data governance
We introduce the needs for explainable AI that arise from Standard No. 2...

09/29/2019: Explainable Clustering and Application to Wealth Management Compliance
Many applications from the financial industry successfully leverage clus...

08/21/2023: A Modular and Adaptive System for Business Email Compromise Detection
The growing sophistication of Business Email Compromise (BEC) and spear ...

10/09/2020: Towards Self-Regulating AI: Challenges and Opportunities of AI Model Governance in Financial Services
AI systems have found a wide range of application areas in financial ser...
