
Leveling the Playing Field - Fairness in AI Versus Human Game Benchmarks

03/17/2019
by Rodrigo Canaan, et al.
New York University

From the beginning of the history of AI, there has been interest in games as a platform for research. As the field developed, human-level competence in complex games became a target researchers worked to reach. Only relatively recently has this target been met for traditional tabletop games such as Backgammon, Chess and Go. Current research focus has shifted to electronic games, which provide unique challenges. As is often the case with AI research, these results are liable to be exaggerated or misrepresented by either authors or third parties. The extent to which these game benchmarks consist of fair competition between human and AI is also a matter of debate. In this work, we review the statements made by authors and third parties in the general media and academic circles about these game benchmark results and discuss factors that can impact the perception of fairness in the contest between humans and machines.


Related research

The Text-Based Adventure AI Competition (08/03/2018)
In 2016 and 2017 at the IEEE Conference on Computational Intelligence in...

The Many AI Challenges of Hearthstone (07/15/2019)
Games have benchmarked AI methods since the inception of the field, with...

AI in Games: Techniques, Challenges and Opportunities (11/15/2021)
With breakthrough of AlphaGo, AI in human-computer game has become a ver...

AI and Wargaming (09/18/2020)
Recent progress in Game AI has demonstrated that given enough data from ...

Towards Game-Playing AI Benchmarks via Performance Reporting Standards (07/06/2020)
While games have been used extensively as milestones to evaluate game-pl...

Cross-Platform Games in Kotlin (08/10/2020)
This demo paper describes a simple and practical approach to writing cro...

Ludii as a Competition Platform (06/29/2019)
Ludii is a general game system being developed as part of the ERC-funded...