Quantifying Transparency of Machine Learning Systems through Analysis of Contributions

07/08/2019
by Iain Barclay, et al.

Increased adoption and deployment of machine learning (ML) models into business, healthcare and other organisational processes will result in a growing disconnect between the engineers and researchers who developed the models and the models' users and other stakeholders, such as regulators or auditors. This disconnect is inevitable: as models are used over a number of years, or are shared among third parties through user communities or commercial marketplaces, it will become increasingly difficult for users to maintain ongoing insight into the suitability of the parties who created the model, or of the data that was used to train it. This could become problematic, particularly where regulations change and once-acceptable standards become outdated, or where data sources are discredited, perhaps judged to be biased or corrupted, whether deliberately or unwittingly. In this paper we present a method for arriving at a quantifiable metric capable of ranking the transparency of the process pipelines used to generate ML models and other data assets, so that users, auditors and other stakeholders can gain confidence that they will be able to validate and trust the data sources and human contributors in the systems they rely on for their business operations. The methodology for calculating the transparency metric, and the types of criteria that could be used to judge the visibility of contributions to systems, are explained and illustrated through an example scenario.
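The abstract describes a quantifiable metric computed over the visibility of contributions to an ML pipeline. As a purely illustrative sketch, one might aggregate per-contribution visibility ratings into a pipeline-level score; the rating scale, weights, and weighted-average formula below are assumptions for illustration, not the paper's actual method:

```python
from dataclasses import dataclass

# Hypothetical visibility ratings; the paper's actual criteria
# and scale are not reproduced here.
VISIBLE = 1.0  # contributor or data source fully identifiable and auditable
PARTIAL = 0.5  # some provenance information available
OPAQUE = 0.0   # no information about the contribution

@dataclass
class Contribution:
    name: str
    visibility: float   # rating in [0, 1]
    weight: float = 1.0  # relative importance of this contribution

def transparency_score(contributions):
    """Weighted average of component visibility ratings (illustrative only)."""
    total_weight = sum(c.weight for c in contributions)
    if total_weight == 0:
        return 0.0
    return sum(c.visibility * c.weight for c in contributions) / total_weight

# Example pipeline: the training data source is weighted more heavily
# than the other contributions.
pipeline = [
    Contribution("training data source", VISIBLE, weight=2.0),
    Contribution("data labelling team", PARTIAL),
    Contribution("model training code", OPAQUE),
]

score = transparency_score(pipeline)  # (2*1.0 + 0.5 + 0.0) / 4.0 = 0.625
```

A score like this could then be used to rank alternative pipelines, or to flag a pipeline whose score drops when a data source is later discredited and its rating downgraded.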

Related research

03/05/2021
A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions
Increased adoption of artificial intelligence (AI) systems into scientif...

04/05/2019
A Conceptual Architecture for Contractual Data Sharing in a Decentralised Environment
Machine Learning systems rely on data for training, input and ongoing fe...

02/24/2022
XAutoML: A Visual Analytics Tool for Establishing Trust in Automated Machine Learning
In the last ten years, various automated machine learning (AutoML) syste...

05/13/2021
Providing Assurance and Scrutability on Shared Data and Machine Learning Models with Verifiable Credentials
Adopting shared data resources requires scientists to place trust in the...

04/27/2022
Prescriptive and Descriptive Approaches to Machine-Learning Transparency
Specialized documentation techniques have been developed to communicate ...

11/06/2018
"I had a solid theory before but it's falling apart": Polarizing Effects of Algorithmic Transparency
The rise of machine learning has brought closer scrutiny to intelligent ...

06/02/2021
Ember: No-Code Context Enrichment via Similarity-Based Keyless Joins
Structured data, or data that adheres to a pre-defined schema, can suffe...
