
BiaScope: Visual Unfairness Diagnosis for Graph Embeddings

10/12/2022
by Agapi Rissaki, Tina Eliassi-Rad, et al., Northeastern University

The issue of bias (i.e., systematic unfairness) in machine learning models has recently attracted the attention of both researchers and practitioners. For the graph mining community in particular, an important goal toward algorithmic fairness is to detect and mitigate bias incorporated into graph embeddings since they are commonly used in human-centered applications, e.g., social-media recommendations. However, simple analytical methods for detecting bias typically involve aggregate statistics which do not reveal the sources of unfairness. Instead, visual methods can provide a holistic fairness characterization of graph embeddings and help uncover the causes of observed bias. In this work, we present BiaScope, an interactive visualization tool that supports end-to-end visual unfairness diagnosis for graph embeddings. The tool is the product of a design study in collaboration with domain experts. It allows the user to (i) visually compare two embeddings with respect to fairness, (ii) locate nodes or graph communities that are unfairly embedded, and (iii) understand the source of bias by interactively linking the relevant embedding subspace with the corresponding graph topology. Experts' feedback confirms that our tool is effective at detecting and diagnosing unfairness. Thus, we envision our tool both as a companion for researchers in designing their algorithms as well as a guide for practitioners who use off-the-shelf graph embeddings.
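The abstract's point that aggregate statistics "do not reveal the sources of unfairness" can be made concrete with a toy sketch. The metric below is an illustrative example, not BiaScope's actual method: it reduces group separation in an embedding space to a single number (mean same-group similarity minus mean cross-group similarity), which flags that bias exists but cannot say which nodes or communities drive it.

```python
import random

def embedding_bias_gap(embeddings, groups):
    """Illustrative aggregate unfairness statistic (not the paper's metric):
    mean dot-product similarity of same-group node pairs minus that of
    cross-group pairs. A large gap suggests the embedding separates the
    sensitive groups, but this single number cannot localize the bias --
    the limitation the paper's visual approach is designed to address."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    same, cross = [], []
    nodes = list(embeddings)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            sim = dot(embeddings[u], embeddings[v])
            (same if groups[u] == groups[v] else cross).append(sim)
    return sum(same) / len(same) - sum(cross) / len(cross)

# Toy example: two groups embedded in well-separated regions of 2-D space.
random.seed(0)
emb = {f"a{i}": [1 + random.gauss(0, 0.1), 0.0] for i in range(5)}
emb |= {f"b{i}": [-1 + random.gauss(0, 0.1), 0.0] for i in range(5)}
grp = {n: n[0] for n in emb}  # group label from the node-name prefix
print(round(embedding_bias_gap(emb, grp), 3))  # clearly positive gap
```

A real audit would compute such statistics over learned embeddings (e.g., node2vec outputs) and observed sensitive attributes; the visual approach in the paper instead links the offending embedding subspace back to the graph topology.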


12/10/2021

A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions

In a world of daily emerging scientific inquisition and discovery, the p...
06/25/2022

Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases

As machine learning (ML) systems become increasingly widespread, it is n...
11/16/2016

Embedding Projector: Interactive Visualization and Interpretation of Embeddings

Embeddings are ubiquitous in machine learning, appearing in recommender ...
04/06/2021

VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations

Word vector embeddings have been shown to contain and amplify biases in ...
01/07/2020

Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples

AI algorithms are not immune to biases. Traditionally, non-experts have ...
10/26/2021

Managing Bias in Human-Annotated Data: Moving Beyond Bias Removal

Due to the widespread use of data-powered systems in our everyday lives,...