AI and the Everything in the Whole Wide World Benchmark

11/26/2021
by Inioluwa Deborah Raji, et al.

There is a tendency across different subfields in AI to valorize a small collection of influential benchmarks. These benchmarks operate as stand-ins for a range of anointed common problems that are frequently framed as foundational milestones on the path towards flexible and generalizable AI systems. State-of-the-art performance on these benchmarks is widely understood as indicative of progress towards these long-term goals. In this position paper, we explore the limits of such benchmarks in order to reveal the construct validity issues that undermine their framing as the functionally "general", broad measures of progress they are set up to be.

Related research

Understanding in Artificial Intelligence (01/17/2021)
Current Artificial Intelligence (AI) methods, most based on deep learnin...

Trustworthy AI (02/14/2020)
The promise of AI is huge. AI systems have already achieved good enough ...

The Limitations of Standardized Science Tests as Benchmarks for Artificial Intelligence Research: Position Paper (11/06/2014)
In this position paper, I argue that standardized tests for elementary s...

DACBench: A Benchmark Library for Dynamic Algorithm Configuration (05/18/2021)
Dynamic Algorithm Configuration (DAC) aims to dynamically control a targ...

The societal and ethical relevance of computational creativity (07/23/2020)
In this paper, we provide a philosophical account of the value of creati...

What are the Goals of Distributional Semantics? (05/06/2020)
Distributional semantic models have become a mainstay in NLP, providing ...

Analysing Results from AI Benchmarks: Key Indicators and How to Obtain Them (11/20/2018)
Item response theory (IRT) can be applied to the analysis of the evaluat...