VisCUIT: Visual Auditor for Bias in CNN Image Classifier

04/12/2022
by Seongmin Lee, et al.

CNN image classifiers are widely used, thanks to their efficiency and accuracy. However, they can suffer from biases that impede their practical applications. Most existing bias investigation techniques are either inapplicable to general image classification tasks or require significant user effort to peruse all data subgroups and manually specify which data attributes to inspect. We present VisCUIT, an interactive visualization system that reveals how and why a CNN classifier is biased. VisCUIT visually summarizes the subgroups on which the classifier underperforms and helps users discover and characterize the causes of these underperformances by revealing the image concepts responsible for activating neurons that contribute to misclassifications. VisCUIT runs in modern browsers and is open-source, allowing people to easily access and extend the tool to other model architectures and datasets. VisCUIT is available at the following public demo link: https://poloclub.github.io/VisCUIT. A video demo is available at https://youtu.be/eNDbSyM4R_4.
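To make the two analyses described above more concrete, here is a minimal, hypothetical sketch (not VisCUIT's actual implementation): first, compute per-subgroup accuracy to surface underperforming subgroups; second, rank neurons (channels) in a chosen CNN layer by how strongly they activate on that subgroup's misclassified images. The pretrained ResNet-50, the `layer4` choice, and the subgroup labels are placeholder assumptions for illustration only.

```python
# Hypothetical sketch of subgroup-underperformance auditing and neuron attribution.
# Not VisCUIT's code; model, layer, and subgroup labels are illustrative assumptions.
from collections import defaultdict

import torch
import torchvision.models as models


def subgroup_accuracies(preds, labels, subgroups):
    """Return accuracy per subgroup; preds/labels are 1-D tensors, subgroups a list of keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for p, y, g in zip(preds.tolist(), labels.tolist(), subgroups):
        total[g] += 1
        correct[g] += int(p == y)
    return {g: correct[g] / total[g] for g in total}


def top_neurons_for_images(model, layer, images, k=5):
    """Rank channels of `layer` by mean activation over `images` (N x 3 x H x W)."""
    acts = []
    handle = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    with torch.no_grad():
        model(images)
    handle.remove()
    # Average over the batch and spatial dimensions -> one score per channel (neuron).
    channel_scores = acts[0].mean(dim=(0, 2, 3))
    return torch.topk(channel_scores, k).indices.tolist()


# Example usage with a pretrained ResNet-50 and random placeholder data.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
images = torch.randn(8, 3, 224, 224)   # placeholder image batch
labels = torch.randint(0, 1000, (8,))  # placeholder ground-truth labels
groups = ["indoor", "outdoor"] * 4     # placeholder subgroup attribute

with torch.no_grad():
    preds = model(images).argmax(dim=1)

accs = subgroup_accuracies(preds, labels, groups)
worst = min(accs, key=accs.get)  # subgroup with the lowest accuracy
mask = (preds != labels) & torch.tensor([g == worst for g in groups])
misclassified = images[mask]
if len(misclassified):
    print(worst, top_neurons_for_images(model, model.layer4, misclassified))
```

In a real audit, the subgroup attribute would come from dataset metadata or learned concepts rather than a hand-written list, and the flagged neurons would be inspected against the images that activate them most strongly.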
