MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence

07/29/2021
by Stanisław Giziński, et al.

The growing number of AI applications, including for high-stakes decisions, increases the interest in Explainable and Interpretable Machine Learning (XI-ML). This trend can be seen both in the increasing number of regulations and strategies for developing trustworthy AI and in the growing number of scientific papers dedicated to this topic. To ensure the sustainable development of AI, it is essential to understand the dynamics of the impact of regulation on research papers as well as the impact of scientific discourse on AI-related policies. This paper introduces a novel framework for joint analysis of AI-related policy documents and eXplainable Artificial Intelligence (XAI) research papers. The collected documents are enriched with metadata and interconnections, using various NLP methods combined with a methodology inspired by Institutional Grammar. Based on the information extracted from the collected documents, we showcase a series of analyses that help understand interactions, similarities, and differences between documents at different stages of institutionalization. To the best of our knowledge, this is the first work to use automatic language analysis tools to understand the dynamics between XI-ML methods and regulations. We believe that such a system contributes to better cooperation between XAI researchers and AI policymakers.
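
The exact MAIR pipeline is not reproduced on this page; as a rough illustration of the kind of cross-corpus linking the abstract describes (connecting policy documents to research papers), a minimal sketch using TF-IDF cosine similarity could look like the code below. The corpora, document names, and threshold are hypothetical, and the real framework combines several NLP methods with an Institutional Grammar-inspired annotation scheme rather than plain TF-IDF.

    # Minimal sketch: link policy documents to research papers by text similarity.
    # Illustrative assumption only, NOT the authors' actual MAIR pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy stand-ins for the two corpora mined by the framework.
    policy_docs = {
        "EU-AI-strategy": "Trustworthy AI requires transparency and human oversight ...",
        "US-AI-guidance": "Agencies should assess risks of automated decision systems ...",
    }
    research_papers = {
        "paper-xai-survey": "We survey explainability methods for machine learning models ...",
        "paper-shap": "A unified approach to interpreting model predictions ...",
    }

    # Fit one TF-IDF space over both corpora, then score every
    # (policy document, research paper) pair.
    texts = list(policy_docs.values()) + list(research_papers.values())
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(texts)

    n_policy = len(policy_docs)
    similarities = cosine_similarity(matrix[:n_policy], matrix[n_policy:])

    # Keep pairs above a (hypothetical) threshold as candidate links.
    THRESHOLD = 0.1
    for i, p_name in enumerate(policy_docs):
        for j, r_name in enumerate(research_papers):
            if similarities[i, j] >= THRESHOLD:
                print(f"{p_name} <-> {r_name}: similarity {similarities[i, j]:.2f}")

Such pairwise links are one simple way to surface candidate interactions between regulations and research papers; the paper's analyses additionally use document metadata and the stage of institutionalization of each document.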

Related research

AI Federalism: Shaping AI Policy within States in Germany (10/28/2021)
Recent AI governance research has focused heavily on the analysis of str...

Artificial intelligence to advance Earth observation: a perspective (05/15/2023)
Earth observation (EO) is a prime instrument for monitoring land and oce...

On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report) (08/02/2021)
With the availability of large datasets and ever-increasing computing po...

Methods Matter: A Trading Agent with No Intelligence Routinely Outperforms AI-Based Traders (11/29/2020)
There's a long tradition of research using computational intelligence (m...

Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI (06/14/2021)
Many ML models are opaque to humans, producing decisions too complex for...

Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance (06/23/2022)
In the last decade, a great number of organizations have produced docume...
