Interpretable ML for Imbalanced Data

12/15/2022
by Damien A. Dablain, et al.

Deep learning models are increasingly applied to imbalanced data in high-stakes fields such as medicine, autonomous driving, and intelligence analysis. Imbalanced data compounds the black-box nature of deep networks because the relationships between classes may be highly skewed and unclear. This can reduce users' trust in a model and hamper the progress of developers of imbalanced learning algorithms. Existing methods that investigate imbalanced data complexity are geared toward binary classification, shallow learning models, and low-dimensional data. In addition, current eXplainable Artificial Intelligence (XAI) techniques mainly focus on converting opaque deep learning models into simpler models (e.g., decision trees) or on mapping predictions for specific instances to inputs, rather than examining global data properties and complexities. There is therefore a need for a framework that is tailored to modern deep networks, that handles large, high-dimensional, multi-class datasets, and that uncovers data complexities commonly found in imbalanced data (e.g., class overlap, sub-concepts, and outlier instances). We propose a set of techniques that can be used by deep learning model users to identify, visualize, and understand class prototypes, sub-concepts, and outlier instances, and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance. Our framework also identifies instances that reside on the border of class decision boundaries, which can carry highly discriminative information. Unlike many existing XAI techniques, which map model decisions to gray-scale pixel locations, we use saliency through back-propagation to identify and aggregate image color bands across entire classes. Our framework is publicly available at <https://github.com/dd1github/XAI_for_Imbalanced_Learning>
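The class-level color-band saliency mentioned above can be illustrated with back-propagated input gradients. The snippet below is a minimal sketch only, assuming a trained PyTorch image classifier and a data loader restricted to a single class; the function name `class_color_saliency` and the aggregation choice (mean absolute gradient per channel) are illustrative assumptions, not code from the authors' repository.

```python
# Minimal sketch (not the authors' implementation): class-level color-band
# saliency via back-propagation, assuming a trained PyTorch image classifier
# and a DataLoader yielding (image, label) batches for one class of RGB images.
import torch

def class_color_saliency(model, loader, class_idx, device="cpu"):
    """Aggregate absolute input gradients per color channel (R, G, B)
    over all images of one class; names and aggregation are illustrative."""
    model.eval().to(device)
    channel_totals = torch.zeros(3, device=device)
    n_images = 0
    for images, _ in loader:
        images = images.to(device).requires_grad_(True)
        scores = model(images)[:, class_idx]        # logit of the target class
        scores.sum().backward()                     # back-propagate to the inputs
        grads = images.grad.detach().abs()          # saliency magnitude, (N, 3, H, W)
        channel_totals += grads.sum(dim=(0, 2, 3))  # collapse batch and spatial dims
        n_images += images.size(0)
    return channel_totals / n_images                # mean saliency per color band
```

Averaging gradient magnitudes over an entire class yields one saliency score per color band, which can then be compared across majority and minority classes rather than inspected one gray-scale pixel map at a time.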

Related research

- 05/05/2021 · DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
- 10/24/2018 · G-SMOTE: A GMM-based synthetic minority oversampling technique for imbalanced learning
- 05/24/2022 · Deep Reinforcement Learning for Multi-class Imbalanced Training
- 01/06/2020 · Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning
- 08/29/2023 · From SMOTE to Mixup for Deep Imbalanced Classification
- 04/26/2020 · Climate Adaptation: Reliably Predicting from Imbalanced Satellite Data
- 03/01/2022 · Understanding the Challenges When 3D Semantic Segmentation Faces Class Imbalanced and OOD Data
