
A Seven-Layer Model for Standardising AI Fairness Assessment

by Avinash Agarwal et al.

Problem statement: Standardising AI fairness rules and benchmarks is challenging because AI fairness and other ethical requirements depend on multiple factors, such as context, use case, and type of AI system. In this paper, we show that an AI system is prone to biases at every stage of its lifecycle, from inception to usage, and that all stages require due attention for mitigating AI bias. Hence, a standardised approach is needed to handle AI fairness at every stage.

Gap analysis: While AI fairness is an active research topic, a holistic strategy for AI fairness is generally missing. Most researchers focus on only a few facets of AI model-building. Peer review shows an excessive focus on dataset biases, fairness metrics, and algorithmic bias; in the process, other aspects affecting AI fairness are ignored.

The solution proposed: We propose a comprehensive approach in the form of a novel seven-layer model, inspired by the Open Systems Interconnection (OSI) model, to standardise AI fairness handling. Despite differences in other respects, most AI systems share similar model-building stages. The proposed model splits the AI system lifecycle into seven abstraction layers, each corresponding to a well-defined model-building or usage stage. We also provide checklists for each layer, and discuss potential sources of bias in each layer and their mitigation methodologies. This work will facilitate layer-wise standardisation of AI fairness rules and benchmarking parameters.
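The layer-wise checklist idea can be sketched as a small data structure: each lifecycle layer carries its own checklist, and the system passes a fairness gate only if every layer's checklist is answered and satisfied. This is a minimal illustrative sketch, not the paper's implementation; in particular, the seven layer names below are hypothetical placeholders, since the abstract does not enumerate them.

```python
from dataclasses import dataclass, field

# Hypothetical layer names for illustration only -- the paper defines
# its own seven layers, which are not listed in the abstract.
LAYERS = [
    "problem-formulation",
    "data-collection",
    "data-preparation",
    "model-selection",
    "training",
    "evaluation",
    "deployment-and-usage",
]

@dataclass
class LayerChecklist:
    """Checklist of bias-related questions for one lifecycle layer."""
    layer: str
    items: list = field(default_factory=list)   # checklist questions
    answers: dict = field(default_factory=dict) # item -> bool (passed?)

    def answer(self, item: str, passed: bool) -> None:
        self.answers[item] = passed

    def is_complete(self) -> bool:
        # Every checklist item must have been answered.
        return set(self.answers) == set(self.items)

    def passed(self) -> bool:
        return self.is_complete() and all(self.answers.values())

def assess(checklists: list) -> bool:
    """Layer-wise fairness gate: all layers must pass their checklists."""
    return all(c.passed() for c in checklists)
```

For example, a system whose data-collection checklist passes but whose training checklist is still unanswered would fail the overall gate, reflecting the abstract's point that every stage, not just the dataset or the algorithm, requires attention.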


