A Multidimensional Analysis of Social Biases in Vision Transformers

08/03/2023
by Jannik Brinkmann, et al.

The embedding spaces of image models have been shown to encode a range of social biases, such as racism and sexism. Here, we investigate the factors that contribute to the emergence of these biases in Vision Transformers (ViTs). To this end, we measure the impact of training data, model architecture, and training objective on social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that models trained with discriminative objectives are less biased than those trained with generative objectives. In addition, we observe inconsistencies in the learned social biases: to our surprise, ViTs can exhibit opposite biases when trained on the same dataset with different self-supervised objectives. Our findings give insight into the factors that contribute to the emergence of social biases and suggest that substantial fairness improvements could be achieved through model design choices.
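Bias in embedding spaces is commonly quantified with an embedding association test, which compares how strongly two target groups associate with two attribute sets via cosine similarity. The sketch below is a minimal, hypothetical illustration of such a test (in the spirit of WEAT-style measures applied to image embeddings); the random arrays stand in for ViT feature vectors, and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between one embedding `a` and each row of matrix `b`
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus attribute set B
    return cosine(w, A).mean() - cosine(w, B).mean()

def effect_size(X, Y, A, B):
    # WEAT-style effect size (Cohen's d) between target sets X and Y:
    # positive values mean X associates more with A than Y does
    sx = np.array([association(x, A, B) for x in X])
    sy = np.array([association(y, A, B) for y in Y])
    joint = np.concatenate([sx, sy])
    return (sx.mean() - sy.mean()) / joint.std(ddof=1)

# toy example: random vectors standing in for ViT embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))   # target images, group 1
Y = rng.normal(size=(8, 16))   # target images, group 2
A = rng.normal(size=(8, 16))   # attribute images, e.g. "pleasant"
B = rng.normal(size=(8, 16))   # attribute images, e.g. "unpleasant"
print(effect_size(X, Y, A, B))
```

By construction the measure is antisymmetric in the two target groups, so swapping X and Y flips the sign of the effect size.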

Related research

03/03/2022
A study on the distribution of social biases in self-supervised learning visual models
Deep neural networks are efficient at learning the data distribution if ...

04/04/2020
Measuring Social Biases of Crowd Workers using Counterfactual Queries
Social biases based on gender, race, etc. have been shown to pollute mac...

05/15/2023
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
Large language models (LMs) are pretrained on diverse data sources: news...

05/02/2020
Social Biases in NLP Models as Barriers for Persons with Disabilities
Building equitable and inclusive NLP technologies demands consideration ...

05/05/2022
Optimising Equal Opportunity Fairness in Model Training
Real-world datasets often encode stereotypes and societal biases. Such b...

12/22/2019
Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed
Today's machine learning models for computer vision are typically traine...
