Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification

11/21/2022
by   Yue Yang, et al.

Concept Bottleneck Models (CBMs) are inherently interpretable models that factor model decisions into human-readable concepts. They allow people to easily understand why a model is failing, a critical feature for high-stakes applications. However, CBMs require manually specified concepts and often under-perform their black box counterparts, preventing their broad adoption. We address these shortcomings and are the first to show how to construct high-performance CBMs, without manual concept specification, that reach accuracy similar to black box models. Our approach, Language Guided Bottlenecks (LaBo), leverages a language model, GPT-3, to define a large space of possible bottlenecks. Given a problem domain, LaBo uses GPT-3 to produce factual sentences about categories to form candidate concepts. LaBo efficiently searches the space of possible bottlenecks through a novel submodular utility that promotes the selection of discriminative and diverse information. Ultimately, GPT-3's sentential concepts can be aligned to images using CLIP to form a bottleneck layer. Experiments demonstrate that LaBo is a highly effective prior for concepts important to visual recognition. In evaluation on 11 diverse datasets, LaBo bottlenecks excel at few-shot classification: they are 11.7% more accurate than black box linear probes at 1 shot and comparable with more data. Overall, LaBo demonstrates that inherently interpretable models can be widely applied at similar, or better, performance than black box approaches.
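To make the final step of the abstract's pipeline concrete, below is a minimal sketch, not the authors' released implementation: GPT-3-generated concept sentences are embedded with CLIP, each image's concept activations are its cosine similarities to those sentences, and a single linear layer over the activations predicts the class. Names such as ConceptBottleneck and concept_sentences are illustrative assumptions; only the CLIP calls (clip.load, clip.tokenize, encode_image, encode_text) come from the public OpenAI CLIP package.

```python
# Illustrative sketch of a CLIP-based concept bottleneck (not the LaBo code).
# Concept activations are image-text cosine similarities; a linear layer over
# those activations is the only trained component.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git


class ConceptBottleneck(torch.nn.Module):
    def __init__(self, concept_sentences, num_classes, device="cpu"):
        super().__init__()
        self.clip_model, self.preprocess = clip.load("ViT-B/32", device=device)
        with torch.no_grad():
            tokens = clip.tokenize(concept_sentences).to(device)
            text_feats = self.clip_model.encode_text(tokens).float()
        # Frozen, L2-normalized concept embeddings, one row per concept sentence.
        self.register_buffer(
            "concept_feats", text_feats / text_feats.norm(dim=-1, keepdim=True)
        )
        # The bottleneck classifier: a linear map from concept space to classes.
        self.classifier = torch.nn.Linear(len(concept_sentences), num_classes)

    def forward(self, images):
        with torch.no_grad():
            img_feats = self.clip_model.encode_image(images).float()
            img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        # Concept activation = cosine similarity between image and each sentence.
        activations = img_feats @ self.concept_feats.t()
        return self.classifier(activations), activations
```

Because every input feature of the classifier corresponds to a human-readable sentence, the per-concept weight-times-activation products show which concepts drove a prediction, which is the interpretability benefit the abstract emphasizes.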


Related research

02/27/2022 · Interpretable Concept-based Prototypical Networks for Few-Shot Learning
Few-shot learning aims at recognizing new instances from classes with li...

11/26/2018 · Please Stop Explaining Black Box Models for High Stakes Decisions
There are black box models now being used for high stakes decision-makin...

08/21/2023 · Sparse Linear Concept Discovery Models
The recent mass adoption of DNNs, even in safety-critical scenarios, has...

11/16/2022 · Interpretable Few-shot Learning with Online Attribute Selection
Few-shot learning (FSL) is a challenging learning problem in which only ...

08/31/2021 · PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs
Deep CNNs, though have achieved the state of the art performance in imag...

08/23/2023 · Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification
Interpretability is a crucial factor in building reliable models for var...

03/30/2022 · FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic Descriptions, and Conceptual Relations
We present a meta-learning framework for learning new visual concepts qu...
