Bias Amplification in Artificial Intelligence Systems

09/20/2018
by Kirsten Lloyd et al.

As Artificial Intelligence (AI) technologies proliferate, public concern has centered on long-term dangers such as job loss or machines causing harm to humans. This concern, however, distracts from the more pertinent threats that AI already poses today: its ability to amplify bias present in training datasets and to affect marginalized populations swiftly and at scale. Government and public sector institutions have a responsibility to their citizens to establish a dialogue with technology developers and to release thoughtful policy around data standards, ensuring diverse representation in datasets, preventing bias amplification, and building AI systems with inclusion in mind.
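The "bias amplification" the abstract warns about can be made concrete with a minimal sketch. The data, the degenerate "model," and the metric below are all hypothetical illustrations (not the paper's method): amplification is measured here as the gap between a group's rate in the training labels and its rate in the model's predictions, in the spirit of common amplification metrics.

```python
# Minimal hypothetical sketch: a model can turn a skew in its training
# data into an even larger skew in its outputs.

from collections import Counter

# Hypothetical training labels: group "A" is over-represented (70/30).
train_labels = ["A"] * 70 + ["B"] * 30

# A degenerate "model" that always predicts the majority class -- an
# extreme case, but it shows the failure mode: a 70/30 skew in the
# data becomes a 100/0 skew in the predictions.
majority = Counter(train_labels).most_common(1)[0][0]
predictions = [majority for _ in range(100)]

def rate(labels, group):
    """Fraction of labels belonging to `group`."""
    return labels.count(group) / len(labels)

train_rate = rate(train_labels, "A")    # 0.70 in the data
pred_rate = rate(predictions, "A")      # 1.00 in the predictions
amplification = pred_rate - train_rate  # positive gap = amplified bias
print(f"train rate: {train_rate:.2f}, prediction rate: {pred_rate:.2f}, "
      f"amplification: {amplification:+.2f}")
```

Real classifiers are rarely this degenerate, but the same effect appears in softer form whenever a model leans on a correlation that is over-represented in its training set.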

