Performance, Opaqueness, Consequences, and Assumptions: Simple questions for responsible planning of machine learning solutions

08/21/2022
by Przemyslaw Biecek, et al.

The data revolution has generated a huge demand for data-driven solutions. This demand propels a growing number of easy-to-use tools and training programs for aspiring data scientists that enable the rapid building of predictive models. Today, weapons of math destruction can be easily built and deployed without detailed planning and validation. This rapidly extends the list of AI failures, i.e., deployments that lead to financial losses or even violate democratic values such as equality, freedom, and justice. The lack of planning, rules, and standards around model development leads to the "anarchisation of AI". This problem is reported under different names, such as validation debt, the reproducibility crisis, and the lack of explainability. Post-mortem analyses of AI failures often reveal mistakes made in the early phases of model development or data acquisition. Thus, instead of curing the consequences of deploying harmful models, we should prevent them as early as possible by paying more attention to the initial planning stage. In this paper, we propose a quick and simple framework to support the planning of AI solutions. The POCA framework is based on four pillars: Performance, Opaqueness, Consequences, and Assumptions. It helps to set expectations and plan constraints for an AI solution before any model is built and any data are collected. With the POCA method, preliminary requirements can be defined for the model-building process so that costly model misspecification errors can be identified as early as possible or even avoided. AI researchers, product owners, and business analysts can use this framework in the initial stages of building AI solutions.
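To make the four-pillar idea concrete, the sketch below models a POCA planning checklist as a small data structure whose pillars must each be answered before modeling begins. This is purely illustrative: the class name, the example questions, and the `open_questions` helper are assumptions for the sake of the sketch, not an API from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a POCA planning checklist. Pillar names follow
# the paper (Performance, Opaqueness, Consequences, Assumptions); the
# example questions and helper method are illustrative inventions.
@dataclass
class POCAChecklist:
    performance: list = field(default_factory=list)   # e.g. target metrics, acceptable error rates
    opaqueness: list = field(default_factory=list)    # e.g. required level of explainability
    consequences: list = field(default_factory=list)  # e.g. harms of wrong predictions
    assumptions: list = field(default_factory=list)   # e.g. data representativeness, stability over time

    def open_questions(self):
        """Return the pillars that have no answers yet, i.e. planning gaps
        that should be closed before any model is built."""
        return [name for name in ("performance", "opaqueness",
                                  "consequences", "assumptions")
                if not getattr(self, name)]

# Example: a plan that has addressed only two of the four pillars.
plan = POCAChecklist(
    performance=["What accuracy is good enough for deployment?"],
    assumptions=["Is the training data representative of production traffic?"],
)
print(plan.open_questions())  # -> ['opaqueness', 'consequences']
```

The point of the sketch is the workflow, not the data structure: a non-empty `open_questions()` result signals that expectations and constraints are still undefined, which is exactly the misspecification risk the framework aims to catch before data collection starts.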

