Interpretable Graph Capsule Networks for Object Recognition

12/03/2020
by Jindong Gu, et al.

Capsule Networks (CapsNets) have been proposed as alternatives to Convolutional Neural Networks (CNNs) for recognizing objects in images. The current literature demonstrates many advantages of CapsNets over CNNs. However, how to create explanations for individual classifications of CapsNets has not been well explored. The widely used saliency methods are mainly designed for explaining CNN-based classifications; they create saliency-map explanations by combining activation values with the corresponding gradients, e.g., Grad-CAM. These saliency methods require a specific architecture of the underlying classifier and cannot be trivially applied to CapsNets due to their iterative routing mechanism. To overcome this lack of interpretability, we can either propose new post-hoc interpretation methods for CapsNets or modify the model to have built-in explanations. In this work, we explore the latter. Specifically, we propose interpretable Graph Capsule Networks (GraCapsNets), in which the routing part is replaced with a multi-head attention-based graph pooling approach. In the proposed model, explanations for individual classifications can be created effectively and efficiently. Our model also demonstrates some unexpected benefits, even though it replaces a fundamental part of CapsNets: GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness than CapsNets. In addition, GraCapsNets retain other advantages of CapsNets, namely disentangled representations and robustness to affine transformations.
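Since the abstract only sketches how attention-based pooling replaces routing, the PyTorch snippet below gives a minimal illustration of the idea: primary capsules are treated as graph nodes, each attention head scores the nodes, the scores weight per-capsule votes into class capsules, and the per-head attention maps serve as the built-in explanations. The module name, layer shapes, and the shared vote transformation are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class MultiHeadAttentionGraphPooling(nn.Module):
    """Sketch of multi-head attention-based graph pooling over primary capsules.

    Primary capsules are treated as graph nodes. Each head scores the nodes,
    the scores weight per-capsule votes, and the weighted votes are pooled
    into class capsules. The per-head attention maps can be read out as
    explanations for an individual classification.
    """

    def __init__(self, in_dim, num_classes, out_dim, num_heads=4):
        super().__init__()
        # One scorer per head: maps each primary capsule (node) to a scalar.
        self.heads = nn.ModuleList(nn.Linear(in_dim, 1) for _ in range(num_heads))
        # Votes from each primary capsule to every class capsule (shared across heads).
        self.vote = nn.Linear(in_dim, num_classes * out_dim)
        self.num_classes = num_classes
        self.out_dim = out_dim

    def forward(self, primary_caps):
        # primary_caps: (batch, num_capsules, in_dim)
        batch, num_caps, _ = primary_caps.shape
        votes = self.vote(primary_caps).view(batch, num_caps, self.num_classes, self.out_dim)

        pooled, attention_maps = [], []
        for head in self.heads:
            # Normalize scores over the capsule (node) axis.
            scores = torch.softmax(head(primary_caps), dim=1)         # (B, N, 1)
            attention_maps.append(scores.squeeze(-1))                 # explanation per head
            pooled.append((scores.unsqueeze(-1) * votes).sum(dim=1))  # (B, C, D)

        class_caps = torch.stack(pooled).mean(dim=0)                  # average over heads
        logits = class_caps.norm(dim=-1)                              # capsule length as class score
        return logits, attention_maps


if __name__ == "__main__":
    x = torch.randn(2, 32, 8)  # 2 images, 32 primary capsules of dimension 8
    pool = MultiHeadAttentionGraphPooling(in_dim=8, num_classes=10, out_dim=16)
    logits, attn = pool(x)
    print(logits.shape, attn[0].shape)  # torch.Size([2, 10]) torch.Size([2, 32])
```

Unlike iterative routing, this pooling needs no per-example inner loop, and the attention maps over primary capsules can be projected back onto the input to visualize which regions supported a given class.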


Related research

07/03/2019
Attention routing between capsules
In this paper, we propose a new capsule network architecture called Atte...

11/18/2019
Improving the Robustness of Capsule Networks to Image Affine Transformations
Convolutional neural networks (CNNs) achieve translational invariance us...

05/04/2023
Evaluating Post-hoc Interpretability with Intrinsic Interpretability
Despite Convolutional Neural Networks having reached human-level perform...

05/27/2019
Analyzing the Interpretability Robustness of Self-Explaining Models
Recently, interpretable models called self-explaining models (SEMs) have...

12/18/2019
P-CapsNets: a General Form of Convolutional Neural Networks
We propose Pure CapsNets (P-CapsNets) which is a generation of normal CN...

04/11/2021
Deformable Capsules for Object Detection
Capsule networks promise significant benefits over convolutional network...

03/29/2021
Capsule Network is Not More Robust than Convolutional Network
The Capsule Network is widely believed to be more robust than Convolutio...
