A Model for Evaluating Algorithmic Systems Accountability

07/12/2018
by Yiannis Kanellopoulos, et al.

Algorithmic systems make decisions that have a great impact on our lives. As our dependence on them grows, so does the need for transparency and for holding them accountable. This paper presents a model for evaluating how transparent these systems are, focusing both on their algorithmic part and on the maturity of the organizations that utilize them. We applied this model to a classification algorithm created and used by a large financial institution. The results of our analysis indicated that the organization was only partially in control of its algorithm and lacked the necessary benchmark to interpret the produced results and assess the validity of its inferences.


