A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions

03/05/2021
by Iain Barclay, et al.

Increased adoption of artificial intelligence (AI) systems into scientific workflows will result in growing technical debt as the distance widens between the data scientists and engineers who develop AI system components and the scientists, researchers and other users who depend on them. This could quickly become problematic, particularly where guidance or regulations change and once-acceptable best practice becomes outdated, or where data sources are later discredited as biased or inaccurate. This paper presents a novel method for deriving a quantifiable metric that ranks the overall transparency of the process pipelines used to generate AI systems, so that users, auditors and other stakeholders can gain confidence that they will be able to validate and trust the data sources and contributors in the AI systems they rely on. The methodology for calculating the metric, and the types of criteria that could be used to judge the visibility of contributions to systems, are evaluated using models published at ModelHub and PyTorch Hub, popular archives for sharing scientific resources. The approach is found to be helpful in prompting consideration of the contributions made in generating AI systems, and in shaping approaches to effective documentation and improved transparency of machine learning assets shared within scientific communities.
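
The abstract does not spell out how the metric is computed, but the core idea, scoring the visibility of each contribution to a model's pipeline and aggregating the scores into a single ranking figure, can be sketched in a few lines. The sketch below is a minimal illustration under assumed simplifications: the criteria names, the Contribution structure and the equal-weight averaging are hypothetical stand-ins, not the paper's actual formulation.

```python
from dataclasses import dataclass

# Hypothetical visibility criteria; the paper's actual criteria are not
# enumerated in this abstract. Each is a yes/no judgement about how
# visible one contribution to the model's pipeline is.
CRITERIA = [
    "contributor_identified",   # is the author or organisation named?
    "source_accessible",        # can the dataset or component be retrieved?
    "provenance_documented",    # are its origin and licence recorded?
]

@dataclass
class Contribution:
    """One input to the AI system's pipeline, e.g. a training dataset."""
    name: str
    visibility: dict  # criterion name -> bool

def contribution_score(c: Contribution) -> float:
    """Fraction of visibility criteria this contribution satisfies."""
    return sum(bool(c.visibility.get(k)) for k in CRITERIA) / len(CRITERIA)

def pipeline_transparency(contributions: list[Contribution]) -> float:
    """Equal-weight average in [0, 1]; higher means easier to audit."""
    if not contributions:
        return 0.0
    return sum(contribution_score(c) for c in contributions) / len(contributions)

# Example: a shared model whose training data is fully documented but
# whose pretrained backbone has no recorded provenance.
model_pipeline = [
    Contribution("training dataset", {
        "contributor_identified": True,
        "source_accessible": True,
        "provenance_documented": True,
    }),
    Contribution("pretrained backbone", {
        "contributor_identified": True,
        "source_accessible": False,
        "provenance_documented": False,
    }),
]

print(f"transparency = {pipeline_transparency(model_pipeline):.2f}")  # 0.67
```

Under such a scheme, a model whose pretrained components lack recorded provenance ranks below an otherwise identical model with fully documented inputs, which is the kind of stakeholder-facing comparison the metric is intended to support.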

Related research

- Quantifying Transparency of Machine Learning Systems through Analysis of Contributions (07/08/2019): Increased adoption and deployment of machine learning (ML) models into b...
- Providing Assurance and Scrutability on Shared Data and Machine Learning Models with Verifiable Credentials (05/13/2021): Adopting shared data resources requires scientists to place trust in the...
- Science Communications for Explainable Artificial Intelligence (08/31/2023): Artificial Intelligence (AI) has a communication problem. XAI methods ha...
- Validation and Transparency in AI systems for pharmacovigilance: a case study applied to the medical literature monitoring of adverse events (12/21/2021): Recent advances in artificial intelligence applied to biomedical text ar...
- AutoAIViz: Opening the Blackbox of Automated Artificial Intelligence with Conditional Parallel Coordinates (12/13/2019): Artificial Intelligence (AI) can now automate the algorithm selection, f...
- A Systematic Mapping Study in AIOps (12/15/2020): IT systems of today are becoming larger and more complex, rendering thei...
- Improving Scientific Article Visibility by Neural Title Simplification (04/05/2019): The rapidly growing amount of data that scientific content providers sho...
