Evaluating Generalization Ability of Convolutional Neural Networks and Capsule Networks for Image Classification via Top-2 Classification

01/29/2019
by   Hao Ren, et al.

Image classification is a challenging problem that aims to identify the category of the object in an image. In recent years, deep Convolutional Neural Networks (CNNs) have been applied to this task and have achieved impressive improvements. However, research has shown that the output of CNNs can be easily altered by adding relatively small perturbations to the input image, such as modifying a few pixels. Recently, Capsule Networks (CapsNets) were proposed, which can help mitigate this limitation. Experiments on the MNIST dataset revealed that capsules can characterize the features of objects better than CNNs can. However, it is hard to find a suitable quantitative method to compare the generalization ability of CNNs and CapsNets. In this paper, we propose a new image classification task, called Top-2 classification, to evaluate the generalization ability of CNNs and CapsNets. The models are trained on single-label image samples, as in the traditional image classification task. In the test stage, however, we randomly concatenate two test samples that carry different labels, and then use the trained models to predict the top-2 labels on these unseen, newly created two-label images. This task provides precise quantitative results for comparing the generalization ability of CNNs and CapsNets. Returning to the CapsNet: because it uses a Full Connectivity (FC) mechanism among all capsules, it requires many parameters. To reduce the number of parameters, we introduce a Parameter-Sharing (PS) mechanism between capsules. Experiments on five widely used benchmark image datasets demonstrate that this mechanism significantly reduces the number of parameters without losing feature-extraction effectiveness. Furthermore, on the Top-2 classification task, the proposed PS CapsNets outperform traditional CNNs and FC CapsNets by a large margin.
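To make the evaluation protocol concrete, here is a minimal sketch of Top-2 testing under stated assumptions: the two images are concatenated side by side (the abstract does not specify the orientation), the trained model accepts the widened input and returns per-class logits, and a pair counts as correct only when the model's two highest-scoring classes match the pair's two ground-truth labels as a set. All names below are illustrative, not the authors' code.

```python
# Sketch of the Top-2 evaluation protocol (assumptions noted above).
import random
import torch

def top2_accuracy(model, images, labels, num_pairs=1000, device="cpu"):
    """Pair up test samples with different labels, concatenate them,
    and check whether the model's two highest-scoring classes equal
    the pair's two ground-truth labels (order-independent)."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for _ in range(num_pairs):
            i, j = random.sample(range(len(images)), 2)
            while labels[i] == labels[j]:            # the two labels must differ
                j = random.randrange(len(images))
            # concatenate along the width axis (orientation is an assumption)
            pair = torch.cat([images[i], images[j]], dim=-1).unsqueeze(0)
            logits = model(pair.to(device))
            top2 = set(logits.topk(2, dim=-1).indices.squeeze(0).tolist())
            if top2 == {int(labels[i]), int(labels[j])}:
                correct += 1
    return correct / num_pairs
```

Note that the trained model never sees two-label inputs during training, so this score directly probes how well the learned features generalize beyond the single-object training distribution.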
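The parameter saving from sharing can likewise be sketched. The abstract does not detail the PS scheme, so the snippet below assumes the simplest variant: instead of one transformation matrix per (input capsule, output capsule) pair, as in FC routing, a single matrix per output capsule is shared across all input capsules. All class and shape names are hypothetical.

```python
# Sketch contrasting FC and PS capsule transformations (assumed scheme).
import torch
import torch.nn as nn

class FCCapsuleTransform(nn.Module):
    """One (d_out x d_in) matrix per (input, output) capsule pair:
    n_in * n_out * d_out * d_in parameters."""
    def __init__(self, n_in, d_in, n_out, d_out):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_in, n_out, d_out, d_in))

    def forward(self, u):                  # u: (batch, n_in, d_in)
        # prediction vectors u_hat: (batch, n_in, n_out, d_out)
        return torch.einsum("iodk,bik->biod", self.W, u)

class PSCapsuleTransform(nn.Module):
    """One shared (d_out x d_in) matrix per output capsule:
    n_out * d_out * d_in parameters, an n_in-fold reduction."""
    def __init__(self, d_in, n_out, d_out):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_out, d_out, d_in))

    def forward(self, u):                  # u: (batch, n_in, d_in)
        return torch.einsum("odk,bik->biod", self.W, u)
```

With MNIST-scale shapes (1152 primary capsules of dimension 8, 10 output capsules of dimension 16), the FC transform holds 1152 × 10 × 16 × 8 ≈ 1.47M parameters, while this PS variant needs only 10 × 16 × 8 = 1,280.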

Related research

- Improving the Robustness of Capsule Networks to Image Affine Transformations (11/18/2019): Convolutional neural networks (CNNs) achieve translational invariance us...
- 1D-Convolutional Capsule Network for Hyperspectral Image Classification (03/23/2019): Recently, convolutional neural networks (CNNs) have achieved excellent p...
- What Deep CNNs Benefit from Global Covariance Pooling: An Optimization Perspective (03/25/2020): Recent works have demonstrated that global covariance pooling (GCP) has ...
- Training Group Orthogonal Neural Networks with Privileged Information (01/24/2017): Learning rich and diverse representations is critical for the performanc...
- Empowering Networks With Scale and Rotation Equivariance Using A Similarity Convolution (03/01/2023): The translational equivariant nature of Convolutional Neural Networks (C...
- Dense and Diverse Capsule Networks: Making the Capsules Learn Better (05/10/2018): Past few years have witnessed exponential growth of interest in deep lea...
- OneCAD: One Classifier for All image Datasets using multimodal learning (05/11/2023): Vision-Transformers (ViTs) and Convolutional neural networks (CNNs) are ...
