Sparse Unsupervised Capsules Generalize Better

04/17/2018
by David Rawlinson et al.

We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can't be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to Sabour et al. (2017) on MNIST, and then test classification accuracy on affNIST using an SVM layer. Accuracy is improved from benchmark 79% to 90%.
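The core change can be stated concisely: instead of masking the latent capsule layer with the ground-truth label before reconstruction, the most active capsules are kept and the rest are zeroed, so no labels are needed. Below is a minimal sketch of that contrast, not the authors' code; representing capsule activity as vector length follows Sabour et al. (2017), while the function names and the choice k=1 are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation) contrasting
# supervised masking, as in Sabour et al. (2017), with label-free top-k
# sparsening of capsule activity. Capsule activities are a
# (num_capsules, capsule_dim) array; the decoder input is the flattened,
# masked activity vector.
import numpy as np

def supervised_mask(capsules, true_class):
    """Zero all capsule vectors except the one for the labelled class."""
    masked = np.zeros_like(capsules)
    masked[true_class] = capsules[true_class]
    return masked

def unsupervised_sparse_mask(capsules, k=1):
    """Keep the k capsules with the longest activity vectors; no labels used."""
    lengths = np.linalg.norm(capsules, axis=1)  # capsule activation = vector length
    keep = np.argsort(lengths)[-k:]             # indices of the k most active capsules
    masked = np.zeros_like(capsules)
    masked[keep] = capsules[keep]
    return masked

# Example: 10 class capsules of dimension 16, as in the CapsNet geometry.
caps = np.random.rand(10, 16)
decoder_in_supervised = supervised_mask(caps, true_class=3).ravel()
decoder_in_unsupervised = unsupervised_sparse_mask(caps, k=1).ravel()
```

Because the sparse mask depends only on the activities themselves, the same rule can be applied at any depth, which is what makes deeper unsupervised capsule stacks plausible.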

