Dual Pattern Learning Networks by Empirical Dual Prediction Risk Minimization

06/11/2018
by   Haimin Zhang, et al.

Motivated by the observation that humans can learn patterns from two given images at a time, we propose a dual pattern learning network (DPLNet) architecture in this paper. Unlike conventional networks, the proposed architecture has two input branches and two loss functions. Instead of minimizing the empirical risk on a given dataset, dual pattern learning networks are trained by minimizing the empirical dual prediction loss. We show that this improves performance for single-image classification. The architecture forces the network to learn discriminative, class-specific features by analyzing and comparing two input images. In addition, the dual-input structure gives the network access to a considerably larger number of image pairs, which helps address overfitting caused by limited training data. Moreover, we propose to associate each input branch with a random interest value for learning its corresponding image during training. This method can be seen as a stochastic regularization technique and further improves generalization performance. State-of-the-art deep networks can be adapted to dual pattern learning networks without increasing the number of parameters. Extensive experiments on CIFAR-10, CIFAR-100, FI-8, the Google commands dataset, and MNIST demonstrate that our DPLNets outperform the original networks. Experimental results on subsets of CIFAR-10, CIFAR-100, and MNIST show that dual pattern learning networks generalize well on small datasets.
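The abstract describes a dual prediction loss in which each of the two input branches is weighted by a random interest value. The following is a minimal NumPy sketch of that idea, assuming the interest values are sampled uniformly and normalized to sum to one, and that each branch's loss is a standard softmax cross-entropy; the paper's exact sampling scheme and loss formulation may differ.

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def dual_prediction_loss(logits_a, logits_b, label_a, label_b, rng):
    # Sample a random "interest" value for each branch and normalize
    # so the two weights sum to 1 (an assumption for illustration).
    w = rng.uniform(size=2)
    w = w / w.sum()
    # Weighted sum of the two branch losses.
    return (w[0] * cross_entropy(logits_a, label_a)
            + w[1] * cross_entropy(logits_b, label_b))

rng = np.random.default_rng(0)
loss = dual_prediction_loss(
    np.array([2.0, 0.5, -1.0]), np.array([0.1, 1.5, 0.3]),
    label_a=0, label_b=1, rng=rng)
```

Because the weights are resampled at every training step, each branch receives a stochastically varying share of the gradient signal, which is what gives the scheme its regularizing effect.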

