Heterogeneity of AI-Induced Societal Harms and the Failure of Omnibus AI Laws

03/20/2023
by Sangchul Park, et al.

AI-induced societal harms mirror existing problems in the domains where AI replaces or complements traditional methodologies. However, trustworthy AI discourses postulate the homogeneity of AI, seek common causes for the harms AI systems generate, and demand uniform human interventions. Such AI monism has spurred legislation for omnibus AI laws that require any high-risk AI system to comply with a full, uniform package of rules on fairness, transparency, accountability, human oversight, accuracy, robustness, and security, as exemplified by the EU AI Regulation and the U.S. draft Algorithmic Accountability Act. Yet it is irrational to require high-risk or critical AI systems to comply with the entire set of safety, fairness, accountability, and privacy regulations when the systems that pose safety risks, biases, infringements, and privacy problems can be distinguished from one another. Legislators should instead gradually adapt existing regulations by categorizing AI systems according to the types of societal harms they induce. Accordingly, this paper proposes the following categorization, subject to ongoing empirical reassessment. First, for intelligent agents, safety regulations must be adapted to address the incremental accident risks arising from autonomous behavior. Second, for discriminative models, the law must focus on mitigating allocative harms and disclosing the marginal effects of immutable features. Third, for generative models, the law should optimize developer liability for data mining and content generation, balancing the potential social harms of infringing content against the negative impact of excessive filtering, and identify the cases in which a model's non-human identity should be disclosed. Lastly, for cognitive models, data protection law should be adapted to effectively address privacy, surveillance, and security problems, and to facilitate governance built on public-private partnerships.


Related research

Legible Normativity for AI Alignment: The Value of Silly Rules (11/03/2018)
AI Deception: A Survey of Examples, Risks, and Potential Solutions (08/28/2023)
Regulating ChatGPT and other Large Generative AI Models (02/05/2023)
Acceptable risks in Europe's proposed AI Act: Reasonableness and other principles for deciding how much risk management is enough (07/26/2023)
Synergy between 6G and AI: Open Future Horizons and Impending Security Risks (03/20/2022)
Designing AI for Online-to-Offline Safety Risks with Young Women: The Context of Social Matching (04/01/2022)
Is the U.S. Legal System Ready for AI's Challenges to Human Values? (08/30/2023)
