Automatic Concept Extraction for Concept Bottleneck-based Video Classification

06/21/2022
by Jeya Vikranth Jeyakumar, et al.

Recent efforts in interpretable deep learning have shown that concept-based explanation methods achieve competitive accuracy with standard end-to-end models while enabling reasoning about, and intervention on, high-level visual concepts extracted from images, e.g., identifying wing color and beak length for bird-species classification. However, these concept bottleneck models rely on a necessary and sufficient set of predefined concepts, which is intractable for complex tasks such as video classification. In complex tasks, the labels and the relationships between visual elements span many frames, e.g., identifying a bird flying or catching prey, and therefore require concepts at various levels of abstraction. To this end, we present CoDEx, an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification. CoDEx identifies a rich set of complex concept abstractions from natural language explanations of videos, obviating the need to predefine the amorphous set of concepts. To demonstrate our method's viability, we construct two new public datasets that combine existing complex video classification datasets with short, crowd-sourced natural language explanations for their labels. Our method elicits inherent complex concept abstractions in natural language to generalize concept bottleneck methods to complex tasks.
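For context, a concept bottleneck model routes all predictive signal through an intermediate layer of human-interpretable concepts, which is what makes test-time inspection and intervention possible. The sketch below is a minimal, illustrative PyTorch version of that idea, not the paper's CoDEx pipeline; the class name, layer sizes, and pooled-feature input are assumptions made for this example.

    # Minimal concept-bottleneck sketch (illustrative; not the CoDEx module).
    # All label information must flow through the concept layer, so predicted
    # concepts can be inspected and edited at test time.
    import torch
    import torch.nn as nn

    class ConceptBottleneck(nn.Module):
        def __init__(self, feat_dim=512, n_concepts=32, n_classes=10):
            super().__init__()
            self.concept_net = nn.Linear(feat_dim, n_concepts)  # features -> concepts
            self.label_net = nn.Linear(n_concepts, n_classes)   # concepts -> label

        def forward(self, feats):
            concept_logits = self.concept_net(feats)   # trained against concept labels
            concepts = torch.sigmoid(concept_logits)   # each unit = one named concept
            return concepts, self.label_net(concepts)  # label head sees only concepts

    model = ConceptBottleneck()
    feats = torch.randn(4, 512)                 # e.g., temporally pooled video features
    concepts, class_logits = model(feats)

    # Intervention: overwrite a mispredicted concept and re-run the label head.
    edited = concepts.detach().clone()
    edited[:, 0] = 1.0                          # force concept 0 "on"
    corrected_logits = model.label_net(edited)

Because the label head consumes only the concept vector, correcting a single concept directly changes the downstream prediction; this is the intervention behavior the abstract refers to.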

Related research:

08/12/2020
We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos
Identifying common patterns among events is a key ability in human and m...

07/09/2020
Concept Bottleneck Models
We seek to learn models that we can interact with using high-level conce...

08/23/2023
Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification
Interpretability is a crucial factor in building reliable models for var...

07/04/2023
Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
Classifiers tend to learn a false causal relationship between an over-re...

02/28/2023
A Closer Look at the Intervention Procedure of Concept Bottleneck Models
Concept bottleneck models (CBMs) are a class of interpretable neural net...

06/13/2017
The "something something" video database for learning and evaluating visual common sense
Neural networks trained on datasets such as ImageNet have led to major a...

11/21/2022
Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions
The Concept Bottleneck Models (CBMs) of Koh et al. [2020] provide a mean...
