SEVEN: Deep Semi-supervised Verification Networks

06/12/2017 ∙ by Vahid Noroozi, et al. ∙ Lehigh University, Northwestern University, University of Illinois at Chicago

Verification determines whether two samples belong to the same class. It has important applications such as face and fingerprint verification, where thousands or millions of categories are present but each category has scarce labeled examples, presenting two major challenges for existing deep learning models. We propose a deep semi-supervised model named SEmi-supervised VErification Network (SEVEN) to address these challenges. The model consists of two complementary components. The generative component addresses the lack of supervision within each category by learning general salient structures from a large amount of data across categories. The discriminative component exploits the learned general features to mitigate the lack of supervision within categories, and also directs the generative component to find more informative structures of the whole data manifold. The two components are tied together in SEVEN to allow end-to-end training. Extensive experiments on four verification tasks demonstrate that SEVEN significantly outperforms other state-of-the-art deep semi-supervised techniques when labeled data are in short supply. Furthermore, SEVEN is competitive with fully supervised baselines trained with a larger amount of labeled data, indicating the importance of the generative component in SEVEN.




1 Introduction

Different from traditional classification tasks, the goal of verification tasks is to determine whether two samples belong to the same class or not, without predicting the class directly [Chopra et al.2005]. Verification tasks arise from applications where thousands or millions of classes are present with very few samples within each category (in some cases just one). For example, in face and signature verification, the faces and signatures of a person are considered to belong to one class. While there can be millions of persons in the database, very few examples are available for each person. In such applications, it is also necessary to handle new classes without retraining the model from scratch. It is not trivial to address such challenges with traditional classification techniques.

Motivated by the impressive performance brought by deep networks to many machine learning tasks [LeCun et al.2015, Bahaadini et al.2017, Zheng et al.2017], we pursue a deep learning model to improve existing verification models. However, deep networks require a large amount of labeled data for each class, which is not readily available in verification. Semi-supervised training methods exist for deep networks to tap into the large amount of unlabeled data, but they usually have separate learning stages [Sun et al.2017, Nair and Hinton2010]: they first pre-train a model using unlabeled data and then fine-tune it with labeled data to fit the target task. Such two-phase methods are not suitable for verification. First, the large number of classes and the lack of data (labeled or unlabeled) within each category prohibits any form of within-class pre-training and fine-tuning. Second, if we pool data from all categories for pre-training, the learned features are general rather than specific to each category, and the later fine-tuning within each category may not be able to correct this bias due to the lack of labeled data.

To address such challenges, we propose Deep SEmi-supervised VErification Networks (SEVEN) that consists of a generative and a discriminative component to learn general and category specific representations from both unlabeled and labeled data simultaneously. We cross the category barrier and pool unlabeled data from all categories to learn salient structures of the data manifold. The hope is that by tapping on the large amount of unlabeled data, the structures that are shared by all categories can be learned for verification.

SEVEN then adapts the general structures to each category by attaching the generative component to the discriminative component that uses the labeled data to learn category-specific features. In this sense, the generative component works as a regularizer for the discriminative component, and aids in exploiting the information hidden in the unlabeled data. On the other hand, as the discriminative component depends on the structures learned by the generative component, it is desirable to inform the generative component about the subspace that is beneficial to the final verification tasks. Towards this end, instead of training the two components separately or sequentially, SEVEN chooses to train the two components simultaneously and allow the generative component to learn more informative general features.

We evaluate SEVEN on four datasets and compare it to four state-of-the-art semi-supervised and supervised algorithms. Experimental results demonstrate that SEVEN outperforms all the baselines in terms of accuracy. Furthermore, we show that with a very small amount of labeled examples, SEVEN reaches performance competitive with supervised baselines trained on a significantly larger set of labeled data.

The rest of this paper is organized as follows. In Section 2 we give an overview of related work. In Section 3 we present SEVEN in detail. Section 4 gives the experimental evaluation and analysis of the proposed model, followed by a conclusion.

2 Related Work

SEVEN can serve as a metric learning algorithm that is commonly employed in verification. The goal of metric learning is to learn a distance metric such that samples in any negative pair are far away and those in any positive pair are close. Many of the existing approaches [Sun et al.2017, Oh Song et al.2016, Zagoruyko and Komodakis2015, Hu et al.2014] learn a linear or nonlinear transformation that maps the data to a new space where the distance metric satisfies the above requirements. However, these methods do not address the large number of categories with scarce supervision information.

One of the earliest works in neural network-based verification was proposed by Bromley et al. for signature verification [Bromley et al.1993]. The proposed architecture, named siamese networks, uses a contrastive objective function to learn a distance metric with ConvNets. Similar approaches have been employed for many other tasks such as face verification and re-identification [Koch2015, Sun et al.2014, Wolf2014]. It is worth mentioning that all these works are supervised and do not exploit unlabeled data.

Great interest in deep semi-supervised learning has emerged in applications where unlabeled data are abundant but obtaining labeled data is expensive or not feasible [Li et al.2016, Hoffer and Ailon2016, Rasmus et al.2015, Kingma et al.2014, Lee2013]. However, most such approaches are designed for classification. To the best of our knowledge, no deep semi-supervised method addresses the above two challenges in verification.

A key difference between SEVEN and most previous semi-supervised deep networks lies in the way unlabeled and labeled data are exploited. Lee [Lee2013] presented a semi-supervised approach for classification called Pseudo-Label, based on a self-training scheme. It predicts the labels of unlabeled samples using a model trained on the available labeled samples, and then bootstraps the model with the most confidently labeled samples. This approach is prone to error because it may reinforce wrong predictions, especially in problems where the confidence estimates are poor.

A more common semi-supervised approach is to pre-train a model with unlabeled samples and then fine-tune it using the labeled samples. For example, Nair and Hinton [Nair and Hinton2010] pre-trained a Restricted Boltzmann Machine (RBM) with noisy rectified linear units (NReLU) in the hidden layers, then used the learned weights to initialize and train a siamese network [Chopra et al.2005] in a supervised way. One problem with pre-training based approaches is that the supervised part of the algorithm can ignore or lose what the model learned in the unsupervised step. Another is that they still need enough labeled examples for the fine-tuning step.

Recently, some works have tried to alleviate such problems by learning from all the labeled and unlabeled data jointly for classification tasks [Li et al.2016, Hoffer and Ailon2016, Maaløe et al.2016, Rasmus et al.2015]. They involve the unsupervised model in the learning as a regularizer for the supervised model. Note, however, that all such techniques are designed for classification tasks and cannot handle the challenges raised in the introduction, namely very few samples per class and a very large number of classes.

Another line of work that handles a large number of categories is extreme multi-label learning [Xie and Philip2017]. The most common assumption there is that all classes have a sufficient amount of labeled data, which is clearly different from our problem setting. Recently, some methods have focused on predicting the tail labels [Jain et al.2016], but they are proposed for the traditional classification task and cannot handle new classes in the test data.

3 Proposed Model

3.1 Problem Formulation

The training set is represented as X = {(x_i^1, x_i^2)}_{i=1}^N, where (x_i^1, x_i^2) is a pair of training samples and N is the total number of training pairs, consisting of N_l labeled and N_u unlabeled pairs. The relation (label) set, denoted by Y = {y_i}, specifies the relation between the samples of each pair. A positive relation (y_i = 1) indicates that the two samples of the pair belong to the same class, and a negative relation (y_i = -1) indicates the opposite. The relations for the unlabeled pairs are unknown.

Our goal is to learn a nonlinear function f parameterized by θ_e that predicts the relation between two data samples x^1 and x^2. In other words, the function f verifies whether two samples are similar or not.

We define f based on the distance between x^1 and x^2 estimated by a metric distance function as:

f(x^1, x^2) = 1 if d(x^1, x^2) ≤ τ, and f(x^1, x^2) = -1 otherwise,   (1)

where d(·, ·) is the metric distance function and the threshold τ specifies the maximum distance that samples of a class are allowed to have. We define a nonlinear embedding function g_e(·; θ_e) that projects data to a new feature space, and d(x^1, x^2) = ‖g_e(x^1) − g_e(x^2)‖_2 is the Euclidean distance between x^1 and x^2 in the new space. An arbitrary distance function can also be used instead of the Euclidean distance.
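As a concrete illustration (not the authors' code), the decision rule above can be sketched in plain Python with a toy fixed map standing in for the learned embedding g_e; the names embed, distance, and verify, as well as the threshold value, are assumptions made here for illustration:

```python
import math

def embed(x):
    # Stand-in for the learned nonlinear embedding g_e; here a fixed toy map.
    return [math.tanh(v) for v in x]

def distance(x1, x2):
    # Euclidean distance between the two samples in the embedded space.
    e1, e2 = embed(x1), embed(x2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))

def verify(x1, x2, tau=0.5):
    # Predict +1 ("same class") when the embedded distance is within tau.
    return 1 if distance(x1, x2) <= tau else -1

print(verify([0.1, 0.2], [0.12, 0.18]))  # close pair -> 1
print(verify([0.1, 0.2], [2.0, -1.5]))   # distant pair -> -1
```

In SEVEN, the embedding is learned rather than fixed, and τ is chosen per dataset on the training data.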

3.2 Model Description

Our proposed model consists of a discriminative and a generative component, and learns a nonlinear function for each: the embedding g_e for the discriminative component and the reconstruction mapping g_d for the generative one. For the discriminative component, the nonlinear embedding function g_e is learned to yield a "discriminative" and "informative" representation. In a discriminative feature space, similar samples are mapped close to each other while dissimilar pairs are far apart. Such a property is crucial for a good metric function. The generative component of the model is designed to exploit the information hidden in the unlabeled data. The desired representation should preserve the salient structures shared by all categories as much as possible. We now define a probabilistic framework of the problem along with the discriminative and generative modeling of our algorithm.

The conditional probability distribution of the relation variable y_i given the pair (x_i^1, x_i^2) can be estimated as:

p(y_i = 1 | x_i^1, x_i^2) = 1 − σ(d_i),   p(y_i = -1 | x_i^1, x_i^2) = σ(d_i),   (2)

where d_i = d(x_i^1, x_i^2), which can be written as the following:

p(y_i | x_i^1, x_i^2) = (1 + y_i)/2 − y_i σ(d_i).   (3)

Here we use a function σ(·) to map the distance between samples to [0, 1]. However, any monotonically increasing function with σ(0) = 0 and σ(d) → 1 as d → ∞ can also be used for this purpose.

We define p̂_i as the ground-truth distribution to be approximated by p(y_i | x_i^1, x_i^2) in (3): p̂_i(y_i = 1) = 1 if y_i = 1, and p̂_i(y_i = 1) = 0 otherwise. In the rest of the paper, the conditional distributions p(y_i | x_i^1, x_i^2) and p̂_i(y_i) are denoted by p_i and p̂_i, respectively. Due to the probabilistic nature of these distributions, we approximate p̂_i with p_i by minimizing the Kullback-Leibler divergence between them, and introduce the following discriminative loss function L_d defined over all the labeled pairs as:

L_d(θ_e) = Σ_{i=1}^{N_l} ℓ_d^i = Σ_{i=1}^{N_l} KL(p̂_i ‖ p_i),   (4)

where ℓ_d^i denotes the discriminative loss for the i-th pair, and KL(p̂_i ‖ p_i) denotes the KL-divergence between p̂_i and p_i. KL(p̂_i ‖ p_i) can be substituted by H(p̂_i, p_i) − H(p̂_i), where H(p̂_i) specifies the entropy of p̂_i and H(p̂_i, p_i) defines the cross entropy between p̂_i and p_i. Considering that the loss function is optimized with a gradient-based optimization approach and H(p̂_i) is a constant with respect to the parameters, we simplify the discriminative loss function as:

L_d(θ_e) = − Σ_{i=1}^{N_l} [ I(y_i = 1) log p(y_i = 1 | x_i^1, x_i^2) + I(y_i = -1) log p(y_i = -1 | x_i^1, x_i^2) ],   (5)

where I(·) is the indicator function. The loss function becomes equivalent to the cross entropy over p̂_i and p_i. It penalizes a large distance between samples of the same class and a small distance between samples of different classes, making the new space discriminative. L_d attains its minimum when p_i = p̂_i over all the labeled pairs.
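The simplified discriminative loss can be sketched in plain Python. Here tanh is used only as one admissible monotone distance-to-probability map, and all names are illustrative rather than taken from the paper:

```python
import math

def sigma(d):
    # One admissible monotone map from distance to [0, 1]: sigma(0)=0, sigma(inf)->1.
    return math.tanh(d)

def pair_prob_same(d):
    # p(y = +1 | pair): high when the embedded distance is small.
    return 1.0 - sigma(d)

def discriminative_loss(pairs):
    # Cross-entropy over labeled pairs: each item is (distance, label in {+1, -1}).
    loss = 0.0
    for d, y in pairs:
        p_same = min(max(pair_prob_same(d), 1e-12), 1 - 1e-12)  # clamp for log
        loss -= math.log(p_same) if y == 1 else math.log(1 - p_same)
    return loss

# A consistent labeling (close positives, distant negatives) yields a small
# loss; a flipped labeling yields a large one.
good = discriminative_loss([(0.1, 1), (3.0, -1)])
bad = discriminative_loss([(3.0, 1), (0.1, -1)])
print(good < bad)  # True
```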

To alleviate the insufficiency of the labeled data for the verification task, through generative modeling we encourage the embedding function g_e to learn the salient structures shared by all categories. We define a nonlinear function g_d(·; θ_d), parameterized by θ_d, that projects samples back from the new representation obtained by g_e to the original feature space.

The generative loss for the i-th pair is defined as the reconstruction error between the original inputs and the corresponding reconstructed outputs:

ℓ_g^i = ‖x_i^1 − x̂_i^1‖_2^2 + ‖x_i^2 − x̂_i^2‖_2^2,   (6)

where x̂ indicates the reconstruction of the input x, i.e., x̂ = g_d(g_e(x)). The generative loss function L_g, over all pairs including labeled and unlabeled, is defined as:

L_g(θ_e, θ_d) = Σ_{i=1}^{N} ℓ_g^i.   (7)

We combine the generative and discriminative components into a unified objective function and write the optimization problem of SEVEN as:

min_{θ_e, θ_d}  L_d(θ_e) + α L_g(θ_e, θ_d) + λ ( Ω(θ_e) + Ω(θ_d) ),   (8)

where Ω(θ_e) and Ω(θ_d) are the regularization terms on the parameters of the functions g_e and g_d. The parameter λ controls the effect of this regularization, and the parameter α controls the trade-off between the discriminative and generative objectives.
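A minimal numeric sketch of the combined objective, with illustrative names and hand-picked inputs (alpha and lam stand for the trade-off and regularization weights described above; the values are assumptions, not the paper's settings):

```python
def reconstruction_loss(x, x_hat):
    # Squared L2 reconstruction error for one sample.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat))

def seven_objective(disc_loss, gen_losses, params, alpha=0.1, lam=1e-4):
    # L = L_d + alpha * L_g + lam * ||theta||^2. The generative term covers
    # ALL pairs, while the discriminative term covers only the labeled ones.
    l_g = sum(gen_losses)
    reg = sum(w ** 2 for w in params)
    return disc_loss + alpha * l_g + lam * reg

loss = seven_objective(
    disc_loss=0.11,                                             # labeled pairs only
    gen_losses=[reconstruction_loss([1.0, 0.0], [0.9, 0.1]),    # labeled pair
                reconstruction_loss([0.5, 0.5], [0.4, 0.6])],   # unlabeled pair
    params=[0.3, -0.2, 0.7],
)
print(round(loss, 6))  # 0.114062
```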

Figure 1: The overall architecture of SEVEN.

3.3 Model Architecture and Optimization

We choose deep neural networks to parameterize g_e and g_d. The schematic representation of SEVEN is illustrated in Figure 1. The input pair (x^1, x^2) is fed to two neural networks, denoted E_1 and E_2, with shared parameters θ_e. They represent the discriminative component of SEVEN (the nonlinear embedding function g_e) and project the input samples into the discriminative feature space. A layer is added on top of E_1 and E_2 that estimates the distance between the two samples of the input pair in the discriminative space.

It can be considered as the metric distance function which the networks E_1 and E_2 are supposed to learn. The final layers of E_1 and E_2 are connected to two further subnetworks, denoted D_1 and D_2 in Figure 1, with shared parameters θ_d. They model the generative component of SEVEN (g_d) and project the samples back to the original space; in other words, they can be considered decoders for the encoders E_1 and E_2. The outputs of D_1 and D_2, shown as x̂^1 and x̂^2, are the reconstructions of the corresponding inputs x^1 and x^2.

Subnetworks E_1 and E_2 are ConvNets built with convolutional and max-pooling layers. D_1 and D_2 are made of transposed convolutional and upsampling layers, which perform the reverse operations of convolutional and max-pooling layers, respectively. More detail on the transposed convolutional layer can be found in [Dumoulin and Visin2016]. The complete specifications of the models are presented in Table 1.
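To make the reverse operation concrete, a minimal 1-D transposed convolution can be sketched as follows (a framework-independent illustration, not the layers used in the paper; each input value scatters a scaled copy of the kernel into a strided, upsampled output):

```python
def transposed_conv1d(x, kernel, stride=2):
    # 1-D transposed convolution: output length is (len(x)-1)*stride + len(kernel).
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):
        for j, k in enumerate(kernel):
            # Scatter-add a scaled kernel copy at the strided position.
            out[i * stride + j] += v * k
    return out

print(transposed_conv1d([1.0, 2.0], [1.0, 1.0, 1.0]))  # [1.0, 1.0, 3.0, 2.0, 2.0]
```

With stride 2 the output is longer than the input, which is exactly how the decoder grows a compressed representation back toward the original resolution.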

The whole model is trained using backpropagation with respect to the objective function in Equation (8). Given a set of training pairs, we optimize the model with an adaptive version of gradient descent called RMSprop [Dauphin et al.2015] over shuffled mini-batches.

We employ ℓ_2-regularization and the dropout [Srivastava et al.2014] strategy on the convolutional and fully connected layers of the subnetworks to prevent overfitting. Batch normalization [Ioffe and Szegedy2015] is also applied after each convolutional layer to normalize its output, which can improve performance in some cases. The training procedure of SEVEN is illustrated in Algorithm 1.

Input: Training set X, label set Y, number of iterations T, and batch size b.
Output: Model's parameters θ_e, θ_d
n_b ← N / b;  // number of batches
Randomly split the training set X into n_b batches;
for t = 1 to T do
       for j = 1 to n_b do
             Feedforward propagation of the j-th batch;
             Calculate the loss according to Equation (8);
             Estimate the gradients by backpropagation;
             Update θ_e, θ_d using RMSProp;
       end for
end for
return θ_e, θ_d;
Algorithm 1 Training procedure of SEVEN
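The RMSprop update at the heart of the loop can be sketched in plain Python; the hyperparameter values are illustrative defaults, not the paper's settings:

```python
import math

def rmsprop_step(params, grads, cache, lr=0.001, decay=0.9, eps=1e-8):
    # One RMSprop update: keep a running average of squared gradients and
    # scale each parameter's step by the inverse root of that average.
    new_params, new_cache = [], []
    for p, g, c in zip(params, grads, cache):
        c = decay * c + (1 - decay) * g * g
        p = p - lr * g / (math.sqrt(c) + eps)
        new_params.append(p)
        new_cache.append(c)
    return new_params, new_cache

# Toy use: minimize f(theta) = theta^2 (gradient 2*theta) for a few steps.
theta, cache = [1.0], [0.0]
for _ in range(100):
    grads = [2 * t for t in theta]
    theta, cache = rmsprop_step(theta, grads, cache)
print(0.0 < theta[0] < 1.0)  # True: moved toward the minimum at 0
```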

4 Experiments

4.1 Datasets

We evaluate the proposed algorithm on the following four datasets.

MNIST: It is a dataset of grayscale images of handwritten digits from 0 to 9. We use the original split of 60,000 training and 10,000 test images. Uniform random noise is added to each pixel to make the data noisier and more challenging.

US Postal Service (USPS) [Hull1994]: It is a dataset of handwritten digits automatically scanned from envelopes by the US Postal Service. All images are normalized 16×16 grayscale images. A randomly selected subset of the images is used as the training set.

Labeled Faces in the Wild (LFW) [Huang et al.2012]: It is a database of face images that provides fixed sets of positive and negative pairs for training and testing. All images are resized to a common size.

BiosecurID-SONOF (SONOF) [Galbally et al.2015]: We use a subset of this dataset comprising the signatures collected from a set of users, with multiple signatures per user. Signature images are normalized and converted to grayscale. We randomly divided the users into disjoint training and test sets.

In the SONOF and LFW datasets, the classes in the training and test samples are disjoint, while in MNIST and USPS the classes are shared between the training and test sets. The samples of LFW already come in the form of pairs. For the other datasets, we create pairs by first splitting the samples into two distinct sets for training and test. We split the training set randomly into labeled and unlabeled samples. Then each sample is paired with two other randomly chosen samples: one from the same class to form a positive pair, and one from a different class to form a negative pair.
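The pairing scheme can be sketched as follows (a minimal sketch with illustrative names, not the authors' preprocessing code):

```python
import random

def make_pairs(samples, labels, seed=0):
    # For each sample, form one positive pair (same class) and one negative
    # pair (different class), mirroring the pairing scheme described above.
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    pairs = []
    for s, y in zip(samples, labels):
        positives = [p for p in by_class[y] if p is not s]
        negatives = [p for c, ps in by_class.items() if c != y for p in ps]
        if positives:
            pairs.append((s, rng.choice(positives), 1))   # positive pair
        if negatives:
            pairs.append((s, rng.choice(negatives), -1))  # negative pair
    return pairs

pairs = make_pairs(["a1", "a2", "b1", "b2"], ["A", "A", "B", "B"])
print(len(pairs))  # 8: one positive and one negative pair per sample
```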

First architecture:
  Encoder: Conv → ReLu → Max-pooling → Dropout(0.5) → Conv → ReLu → Max-pooling → Dropout(0.5) → Dense Layer → ReLu
  Decoder: Dense Layer → ReLu → Reshape layer → Upsampling → TransConv → ReLu → Dropout(0.5) → Upsampling → TransConv → Sigmoid

Second architecture:
  Encoder: Conv → BN-ReLu → Dropout(0.5) → Max-pooling → Dropout(0.5) → Conv → BN-ReLu → Max-pooling → Dropout(0.5) → Conv → BN-ReLu → Dropout(0.5) → Dense Layer → ReLu
  Decoder: Dense Layer → ReLu → Reshape layer → TransConv → BN-ReLu → Dropout(0.5) → Upsampling → TransConv → BN-ReLu → Dropout(0.5) → Upsampling → TransConv → BN-Sigmoid → Dropout(0.5)

Table 1: Model specifications for the different datasets. BN: batch normalization, ReLu: rectified linear activation function, Conv: convolutional layer, TransConv: transposed convolutional layer, Upsampling: upsampling layer, Dense Layer: fully connected layer, Max-pooling: max-pooling layer.

4.2 Baselines

We compare the performance of SEVEN with the following baselines. Note that we cannot compare SEVEN with classification techniques, because they are generally not designed to handle the new classes that appear in the test data of verification applications. Since there are no other deep semi-supervised works for verification tasks, we adapt common deep semi-supervised techniques to verification networks as our baselines.

Discriminative Deep Metric Learning (DDML) [Hu et al.2014]: DDML is a deep neural network that learns a set of hierarchical nonlinear transformations to project pairs into a common space using a contrastive loss function. It is a supervised approach and cannot use unlabeled data.

Pseudo-Label [Lee2013]: It is a semi-supervised approach for training deep neural networks. It initially trains a supervised model on the labeled samples. Before each iteration it then labels the unlabeled samples with the current model and uses the high-confidence ones, along with the labeled samples, for training in the next iteration. We follow the same approach to train a siamese network [Bell and Bala2015], extending the method to verification tasks.

Convolutional Autoencoder + Siamese Network (PreConvSia): We pre-train a siamese network [Bell and Bala2015, Chopra et al.2005] with a convolutional autoencoder model [Masci et al.2011], then fine-tune the network with labeled pairs. The network uses ConvNets as the underlying model.

Autoencoder + Siamese Network (PreAutoSia): It is similar to PreConvSia, but uses an MLP as the underlying network. It is significantly faster to train than PreConvSia.

Principal Component Analysis (PCA): We use PCA as an unsupervised feature learning technique. The distance between samples in the new space learned by PCA indicates their relation. The threshold on the distance is selected for each dataset separately based on the performance on the training data.

Dataset        LFW (# of labeled pairs, increasing; last = all)   SONOF (# of labeled pairs, increasing; last = all)
SEVEN          61.2   64.1   65.7   66.3   67.0                   72.7   74.6   79.3   83.1   84.1
PCA            -      -      -      -      -                      -      -      -      -      -
DDML           -      -      -      -      71.1                   -      -      -      -      86.1
Pseudo-label   -      -      -      -      -                      -      -      -      -      -
PreConvSia     -      -      -      -      -                      -      -      -      -      -
PreAutoSia     -      -      -      -      -                      -      -      -      -      -

Table 2: Performance of different methods on LFW and SONOF in terms of accuracy.

Dataset        MNIST (# of labeled pairs, increasing; last = all)   USPS (# of labeled pairs, increasing; last = all)
SEVEN          75.5   76.9   79.8   84.8                            76.2   77.3   80.2   80.7
PCA            65.84 (single value)                                 70.96 (single value)
DDML           -      -      -      -                               -      -      -      -
Pseudo-label   -      -      -      93.3                            -      -      -      -
PreConvSia     -      -      90.8   97.2                            -      -      -      82.9
PreAutoSia     -      -      -      -                               -      -      -      -

Table 3: Performance of different methods on MNIST and USPS in terms of accuracy.

4.3 Experimental Settings

The architectures of SEVEN for all datasets are presented in Table 1. All the parameters of SEVEN and of the baselines are selected by validation on a randomly selected subset of the training data. The ℓ_2-regularization parameter λ is selected from a small grid for each dataset separately. The parameter α, which controls the trade-off between the generative and discriminative objectives, is likewise tuned separately for MNIST, LFW, USPS and SONOF. The threshold τ is fixed across all four datasets.

All the neural network models are trained for a fixed number of epochs, as is the pre-training for the baselines that require it. The RMSProp optimizer is used to train all the neural networks with the default values recommended in the original paper.

4.4 Performance Evaluation

We report performance in terms of accuracy, i.e., the number of pairs in the test set verified correctly divided by the total number of test pairs. The performance of SEVEN and all baselines is presented in Tables 2 and 3. The results are reported for different numbers of labeled pairs, and the best accuracy for each case is depicted in bold. The last column of each dataset corresponds to using all labeled training pairs. PCA is a fully unsupervised method, so a single accuracy is reported for each dataset.

As can be seen from the tables, SEVEN outperforms the other baselines when a limited number of labeled pairs is used, and the performance gap widens as the number of labeled pairs decreases. SEVEN thus copes better with the scarcity of labeled data.

Figure 2: The accuracy of SEVEN for different values of the parameter α for (a) MNIST, (b) LFW, (c) USPS and (d) SONOF.

DDML gives good performance when enough labeled data are available, but its performance is significantly lower than SEVEN's when labeled samples are few. DDML does not use the unlabeled data, while the other baselines benefit from the information hidden in it. As the number of labeled pairs increases, the difference in accuracy decreases.

SEVEN outperforms all the semi-supervised baselines. One of SEVEN's main advantages over the other semi-supervised methods is that they perform the supervised step only after pre-training with unlabeled data has finished, which may cancel out some of the information learned from the unlabeled data; there is no guarantee that the supervised process benefits from the unsupervised learning [Rasmus et al.2015]. Among the semi-supervised baselines, Pseudo-Label not only gives worse results than SEVEN but also shows lower performance than PreConvSia and PreAutoSia in many cases, which can be attributed to noise and error in estimating the labels of the unlabeled pairs.

4.5 Model Analysis

We perform experiments to analyze the effect of the different components of SEVEN. The performance of different variants of SEVEN is given in Table 4; the number of labeled pairs for each dataset is indicated after the dataset's name. DisSEVEN denotes SEVEN with α = 0 in Eq. (8), which disables the decoder networks and the generative aspect of the model; this variant does not use the unlabeled data during learning. GenSEVEN corresponds to a model without the discriminative component: it has no contrastive layer and does not use the label information. SEVEN denotes the full variant with both generative and discriminative components. The variant SEVEN (MLP) is similar to the regular SEVEN, except that it uses fully connected layers instead of convolutional and transposed convolutional layers.

Among all the variants, the full SEVEN gives the best performance, showing the effectiveness of both the generative and discriminative components, and of exploiting the information hidden in the unlabeled data. The results also show that the discriminative component has a broader impact than the generative component. SEVEN (MLP) performs worse than SEVEN, mainly because of the capability of convolutional layers to model image data, as ConvNets have repeatedly demonstrated in image processing applications.

4.6 Parameter Sensitivity

We analyze the effect of the parameter α in Equation (8) on the performance of SEVEN on all four datasets. α controls the trade-off between the generative and discriminative aspects of the model. Figure 2 plots the performance of SEVEN for different values of α. For each dataset, the performance is plotted for three different numbers of labeled pairs. A trade-off between the generative and discriminative aspects of SEVEN exists on all four datasets. As can be seen, the optimal value of this parameter depends on the dataset and, to some extent, on the ratio of labeled data.

Method        MNIST (120)   LFW (440)   USPS (80)   SONOF (320)
DisSEVEN      -             -           -           -
GenSEVEN      -             -           -           -
SEVEN         79.8          64.1        77.3        74.6
SEVEN (MLP)   -             -           -           -

Table 4: Performance of SEVEN variants in accuracy.

5 Conclusion

Benefiting from the salient structures hidden in the unlabeled data and the ability of deep neural networks to approximate nonlinear functions, we propose SEVEN, a deep SEmi-supervised VErification Network for verification tasks. SEVEN combines generative and discriminative modeling in a unified model; the two components are trained simultaneously, which lets them closely interact and influence each other. Extensive experiments demonstrate that SEVEN outperforms other state-of-the-art deep semi-supervised techniques across a wide spectrum of verification tasks. Furthermore, SEVEN shows performance competitive with fully supervised baselines that require a significantly larger amount of labeled data, indicating the important role of the generative component in SEVEN.


  • [Bahaadini et al.2017] Sara Bahaadini, Neda Rohani, Scott Coughlin, Michael Zevin, Vicky Kalogera, and Aggelos K Katsaggelos. Deep multi-view models for glitch classification. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2017.
  • [Bell and Bala2015] Sean Bell and Kavita Bala. Learning visual similarity for product design with convolutional neural networks. ACM Transactions on Graphics (TOG), 34(4):98, 2015.
  • [Bromley et al.1993] Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a “siamese” time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669–688, 1993.
  • [Chopra et al.2005] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 1, pages 539–546. IEEE, 2005.
  • [Dauphin et al.2015] Yann Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for non-convex optimization. In Advances in Neural Information Processing Systems, pages 1504–1512, 2015.
  • [Dumoulin and Visin2016] Vincent Dumoulin and Francesco Visin. A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285, 2016.
  • [Galbally et al.2015] Javier Galbally, Moises Diaz-Cabrera, Miguel A Ferrer, Marta Gomez-Barrero, Aythami Morales, and Julian Fierrez. On-line signature recognition through the combination of real dynamic data and synthetically generated static data. Pattern Recognition, 48(9):2921–2934, 2015.
  • [Hoffer and Ailon2016] Elad Hoffer and Nir Ailon. Semi-supervised deep learning by metric embedding. arXiv preprint arXiv:1611.01449, 2016.
  • [Hu et al.2014] Junlin Hu, Jiwen Lu, and Yap-Peng Tan. Discriminative deep metric learning for face verification in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1875–1882, 2014.
  • [Huang et al.2012] Gary Huang, Marwan Mattar, Honglak Lee, and Erik G Learned-Miller. Learning to align from scratch. In Advances in Neural Information Processing Systems, pages 764–772, 2012.
  • [Hull1994] Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550–554, 1994.
  • [Ioffe and Szegedy2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [Jain et al.2016] Himanshu Jain, Yashoteja Prabhu, and Manik Varma. Extreme Multi-label Loss Functions for Recommendation, Tagging, Ranking & Other Missing Label Applications. In SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2016.
  • [Kingma et al.2014] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
  • [Koch2015] Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
  • [LeCun et al.2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • [Lee2013] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, volume 3, page 2, 2013.
  • [Li et al.2016] Yawei Li, Lizuo Jin, AK Qin, Changyin Sun, Yew Soon Ong, and Tong Cui. Semi-supervised auto-encoder based on manifold learning. In International Joint Conference on Neural Networks (IJCNN), pages 4032–4039, 2016.
  • [Maaløe et al.2016] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In International Conference on Machine Learning (ICML), pages 1445–1453, 2016.
  • [Masci et al.2011] Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. Artificial Neural Networks and Machine Learning (ICANN), pages 52–59, 2011.
  • [Nair and Hinton2010] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In International Conference on Machine Learning (ICML), pages 807–814, 2010.
  • [Oh Song et al.2016] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4004–4012, 2016.
  • [Rasmus et al.2015] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3546–3554, 2015.
  • [Srivastava et al.2014] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
  • [Sun et al.2014] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identification-verification. In Advances in Neural Information Processing Systems, pages 1988–1996, 2014.
  • [Sun et al.2017] Yao Sun, Lejian Ren, Zhen Wei, Bin Liu, Yanlong Zhai, and Si Liu. A weakly-supervised method for makeup-invariant face verification. Pattern Recognition, 2017.
  • [Wolf2014] Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [Xie and Philip2017] Sihong Xie and S Yu Philip. Active zero-shot learning: a novel approach to extreme multi-labeled classification.

    International Journal of Data Science and Analytics

    , pages 1–10, 2017.
  • [Zagoruyko and Komodakis2015] Sergey Zagoruyko and Nikos Komodakis. Learning to compare image patches via convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4353–4361, 2015.
  • [Zheng et al.2017] Lei Zheng, Vahid Noroozi, and Philip S Yu. Joint deep modeling of users and items using reviews for recommendation. In 10th ACM International Conference on Web Search and Data Mining (WSDM), pages 425–434. ACM, 2017.