Measuring AI Systems Beyond Accuracy

04/07/2022
by Violet Turri, et al.

Current test and evaluation (T&E) methods for assessing machine learning (ML) system performance often rely on incomplete metrics. Testing is also frequently siloed from the other phases of the ML system lifecycle. Research investigating cross-domain approaches to ML T&E is needed to drive the state of the art forward and to build an Artificial Intelligence (AI) engineering discipline. This paper advocates for a robust, integrated approach to testing by outlining six key questions for guiding a holistic T&E strategy.
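As a minimal sketch of why a single headline metric is incomplete (this example is not from the paper; it uses synthetic data and standard scikit-learn metrics), consider an imbalanced test set where a trivial majority-class predictor scores high accuracy while failing on every other measure:

```python
# Sketch: accuracy alone can mask total failure on the minority class.
# Assumptions: synthetic labels with a 5% positive rate and a "model"
# that always predicts the majority (negative) class.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    brier_score_loss,
)

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)  # ~5% positive class

y_pred = np.zeros_like(y_true)          # always predict negative
y_prob = np.zeros(len(y_true))          # predicted P(positive) = 0 everywhere

print("accuracy :", accuracy_score(y_true, y_pred))                    # ~0.95
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred))                      # 0.0
print("brier    :", brier_score_loss(y_true, y_prob))                  # calibration error
```

The ~0.95 accuracy looks strong, yet recall shows the model never detects a positive case, illustrating the paper's point that a holistic T&E strategy must measure systems along multiple dimensions rather than a single score.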
