Speciesist bias in AI – How AI applications perpetuate discrimination and unfair outcomes against animals

by Thilo Hagendorff et al.

Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by high-profile cases in which biased algorithmic decision-making caused harm to women, people of color, and other minorities. However, the field of AI fairness still has a blind spot: its insensitivity to discrimination against animals. This paper is the first to describe 'speciesist bias' and to investigate it in several different AI systems. Speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail. Such patterns can be found in image recognition systems, large language models, and recommender systems. AI technologies therefore currently play a significant role in perpetuating and normalizing violence against animals. This can only change when AI fairness frameworks widen their scope and include mitigation measures for speciesist biases. This paper addresses the AI community in this regard and stresses the influence AI systems can have on either increasing or reducing the violence inflicted on animals, especially farmed animals.
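To make the claim about learned speciesist patterns concrete, the sketch below shows how a WEAT-style association test (a standard bias-measurement technique, not the paper's own method) could surface such a bias in word embeddings. The embedding vectors and word sets here are hypothetical toy values for illustration; a real audit would load vectors from a trained model such as word2vec or GloVe.

```python
from math import sqrt

# Toy 3-d "embeddings" (hypothetical values, for illustration only).
# In a real audit these would come from a trained embedding model.
EMB = {
    "dog":     [0.9, 0.1, 0.2],
    "pig":     [0.2, 0.8, 0.3],
    "friend":  [0.8, 0.2, 0.1],
    "family":  [0.9, 0.0, 0.3],
    "meat":    [0.1, 0.9, 0.2],
    "product": [0.2, 0.7, 0.4],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attrs_a, attrs_b):
    """WEAT-style score: mean similarity to attribute set A
    minus mean similarity to attribute set B."""
    sim_a = sum(cosine(EMB[word], EMB[a]) for a in attrs_a) / len(attrs_a)
    sim_b = sum(cosine(EMB[word], EMB[b]) for b in attrs_b) / len(attrs_b)
    return sim_a - sim_b

companion = ["friend", "family"]
commodity = ["meat", "product"]

# A positive score means the animal word leans toward "companion"
# attributes; a negative score, toward "commodity" attributes.
print(f"dog: {association('dog', companion, commodity):+.3f}")
print(f"pig: {association('pig', companion, commodity):+.3f}")
```

With these toy vectors, "dog" scores positive (companion-leaning) and "pig" scores negative (commodity-leaning), mirroring the kind of asymmetric treatment of companion versus farmed animals the paper reports in real models.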



