Training Group Orthogonal Neural Networks with Privileged Information

01/24/2017
by Yunpeng Chen, et al.

Learning rich and diverse representations is critical for the performance of deep convolutional neural networks (CNNs). In this paper, we consider how to use privileged information to promote inherent diversity of a single CNN model so that the model can learn better representations and offer stronger generalization ability. To this end, we propose a novel group orthogonal convolutional neural network (GoCNN) that learns untangled representations within each layer by exploiting the provided privileged information, effectively enhancing representation diversity. We take image classification as an example, where image segmentation annotations are used as privileged information during training. Experiments on two benchmark datasets -- ImageNet and PASCAL VOC -- clearly demonstrate the strong generalization ability of the proposed GoCNN model. On the ImageNet dataset, GoCNN improves the performance of the state-of-the-art ResNet-152 model by an absolute value of 1.2% while using privileged information for only 10% of the training images, confirming the effectiveness of GoCNN in utilizing available privileged knowledge to train better CNNs.
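To illustrate the core idea of group supervision with privileged segmentation masks, here is a minimal, hypothetical sketch (not the authors' implementation): the channels of a convolutional feature map are split into a "foreground" group and a "background" group, and each group is penalized for activating on the region belonging to the other group, as indicated by the segmentation annotation. The function name, the channel-split convention, and the squared-leakage penalty are all illustrative assumptions.

```python
import numpy as np

def group_suppression_loss(features, fg_mask, fg_channels):
    """Hypothetical GoCNN-style group supervision term.

    features:    (C, H, W) convolutional feature maps
    fg_mask:     (H, W) binary foreground segmentation (privileged info)
    fg_channels: number of leading channels assigned to the foreground group
    """
    fg_group = features[:fg_channels]   # intended to respond only on foreground
    bg_group = features[fg_channels:]   # intended to respond only on background
    # Penalize foreground-group activations landing on background pixels,
    # and background-group activations landing on foreground pixels.
    fg_leak = np.sum((fg_group * (1.0 - fg_mask)) ** 2)
    bg_leak = np.sum((bg_group * fg_mask) ** 2)
    return (fg_leak + bg_leak) / features.size
```

In a full training setup this term would be added to the ordinary classification loss for the subset of images that carry segmentation annotations; images without privileged information contribute only the classification loss, which is consistent with the paper's setting of using privileged information for only part of the training set.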
