
AI and the Everything in the Whole Wide World Benchmark

11/26/2021
by Inioluwa Deborah Raji, et al.

There is a tendency across different subfields in AI to valorize a small collection of influential benchmarks. These benchmarks operate as stand-ins for a range of anointed common problems that are frequently framed as foundational milestones on the path towards flexible and generalizable AI systems. State-of-the-art performance on these benchmarks is widely understood as indicative of progress towards these long-term goals. In this position paper, we explore the limits of such benchmarks in order to reveal the construct validity issues in their framing as the functionally "general" broad measures of progress they are set up to be.

Related research:

03/09/2022 · Mapping global dynamics of benchmark creation and saturation in artificial intelligence
Benchmarks are crucial to measuring and steering progress in artificial ...

02/09/2023 · Benchmarks for Automated Commonsense Reasoning: A Survey
More than one hundred benchmarks have been developed to test the commons...

02/14/2020 · Trustworthy AI
The promise of AI is huge. AI systems have already achieved good enough ...

07/04/2023 · Exploring Non-Verbal Predicates in Semantic Role Labeling: Challenges and Opportunities
Although we have witnessed impressive progress in Semantic Role Labeling...

05/06/2020 · What are the Goals of Distributional Semantics?
Distributional semantic models have become a mainstay in NLP, providing ...

11/20/2018 · Analysing Results from AI Benchmarks: Key Indicators and How to Obtain Them
Item response theory (IRT) can be applied to the analysis of the evaluat...

11/06/2014 · The Limitations of Standardized Science Tests as Benchmarks for Artificial Intelligence Research: Position Paper
In this position paper, I argue that standardized tests for elementary s...