
AI and the Everything in the Whole Wide World Benchmark

by Inioluwa Deborah Raji, et al.

There is a tendency across different subfields in AI to valorize a small collection of influential benchmarks. These benchmarks operate as stand-ins for a range of anointed common problems that are frequently framed as foundational milestones on the path towards flexible and generalizable AI systems. State-of-the-art performance on these benchmarks is widely understood as indicative of progress towards these long-term goals. In this position paper, we explore the limits of such benchmarks in order to reveal the construct validity issues in their framing as the functionally "general" broad measures of progress they are set up to be.
