CoopSubNet: Cooperating Subnetwork for Data-Driven Regularization of Deep Networks under Limited Training Budgets

06/13/2019 ∙ by Riddhish Bhalodia, et al. ∙ The University of Utah

Deep networks are an integral part of the current machine learning paradigm. Their inherent ability to learn complex functional mappings between data and various target variables, while discovering hidden, task-driven features, makes them a powerful technology in a wide variety of applications. Nonetheless, the success of these networks typically relies on the availability of sufficient training data to optimize a large number of free parameters while avoiding overfitting, especially for networks with large capacity. In scenarios with limited training budgets, e.g., supervised tasks with limited labeled samples, several generic and/or task-specific regularization techniques, including data augmentation, have been applied to improve the generalization of deep networks. Typically, such regularizations are introduced independently of the data or training scenario, and must therefore be tuned, tested, and modified to meet the needs of a particular network. In this paper, we propose a novel regularization framework that is driven by the population-level statistics of the feature space to be learned. The regularization is in the form of a cooperating subnetwork, which is an auto-encoder architecture attached to the feature space and trained in conjunction with the primary network. We introduce the architecture and training methodology and demonstrate the effectiveness of the proposed cooperative network-based regularization in a variety of tasks and architectures from the literature. Our code is freely available at <https://github.com/riddhishb/CoopSubNet>


1 Introduction

Deep neural networks (NNs) have proven to be an exceptionally powerful tool for machine learning in a variety of domains and applications. The power of deep NNs is their ability to model complex functional mappings between data and various target variables, while discovering or generating features that facilitate this task. This makes such networks an enabling technology in various tasks, including image classification [1, 2], pose estimation [3, 4], object detection [5, 6], face alignment [4, 7, 8], and tracking [9, 10].

Deep networks are data-hungry models, and their performance relies heavily on the availability of large sets of training examples to learn complex relationships. This requirement for huge sets of observed examples stems from the large number of free parameters of these networks, which are typically necessary to capture complex relationships between often high-dimensional inputs (e.g., image textures and object geometries) and task-specific characteristics (e.g., classes of objects, positions, or pose). Thus, state-of-the-art networks typically have on the order of millions of parameters. For example, VGGNet [11], which identifies 1,000 image categories, has 144 million parameters and requires 1.3 million training images with assigned categories (i.e., labeled data in a supervised setting). Nonetheless, having millions of data samples, especially for supervised tasks where manual labeling and annotation are needed, is not always a viable option, especially for applications that entail high-dimensional data (e.g., high-resolution images, volumetric images) and/or data associated with significant monetary/computational costs (e.g., remote sensing, physical simulations). Ideally, deep networks would learn functional mappings from the data space to some unknown task-driven feature space while generalizing to unseen samples, even with a limited training budget. That is, the network should ideally not overfit to the limited training data, and its performance should degrade gracefully or only marginally when an ideal training data set is not available.

Regression problems (e.g., landmark detection [4, 7, 8], image segmentation [12, 13]) are more prone to labeled data scarcity because obtaining large-enough training data for regression tasks is more challenging compared to classification tasks. One very effective approach for overcoming the challenge of overfitting with limited training data is to augment the training set. Data augmentation is the process of generating new data samples using the available limited data and other auxiliary information and/or assumptions about the domain, such as natural or expected invariants. In certain applications, however, data cannot be trivially augmented (e.g., regression) and is defined by a particular acquisition process that would be hard to synthesize (e.g., medical and astronomical images). Furthermore, many applications rely heavily on data accuracy (e.g., tumor detection [14, 15, 16]), which may make it challenging to augment data while ensuring accuracy. Finally, the proposed method for regularization can be used to complement data augmentation strategies, and thus may be effective with or without these other methods.

In conventional optimization problems and solution spaces, regularization is a common strategy for ill-posed problems, where it serves to restrict or prioritize the admissible solution space. One way to regularize the solution space is to use a penalty on some property that may be expected from the solution a priori (e.g., weight decay [17], smoothness of the learned mapping [18]). Such regularization methods are generic, as they do not adapt to the task or the data at hand, and they often fail to capture population-level properties in the data and/or feature spaces that may violate, to some degree, the a priori assumptions. Data-driven regularizers incorporate population statistics of the input data or the feature space along with some very general assumptions about the complexity of those statistics, and they have been shown to outperform generic regularization methods, for instance using an adaptive prior in AdaBoost [19], for semi-supervised labeling tasks [20], for classification [21], and for limited-data training [22]. In particular, data/features in high-dimensional spaces tend to concentrate in the vicinity of a latent manifold of much lower dimensionality [18], which can be learned via principal component analysis (PCA) [23] or, more recently, using auto-encoders [24, 18].

In this paper, we extend these previous observations on data-driven regularization and data modeling to deep neural networks, and propose a data-driven approach for learning network regularization that encodes population-level properties, as opposed to generic regularizers. Specifically, this paper systematically examines and analyzes the use of the latent feature manifold to regularize a deep network. To this end, we propose a novel architecture of two interacting sub-networks: a primary network that performs the main task (e.g., classification, regression), and a secondary cooperating subnetwork (CoopSubNet) that is an auto-encoder attached to one of the intermediate feature spaces of the primary network. These sub-networks act cooperatively toward the given task. Being an auto-encoder, the secondary network learns a low-dimensional representation of the feature space, and cooperates with the primary network to enforce that the intermediate features adhere to a low-dimensional manifold.

2 Related Work

Limited-data or low-resource training – the focus of this paper – is a crucial drawback of the current deep learning paradigm. Deep neural models are inherently data-hungry and, when faced with a limited training budget, they are prone to overfit the data and generalize poorly. The literature has proposed several regularization models, network variants, and data augmentation schemes. A comprehensive literature review is beyond the scope of this paper; here, we focus on the research most closely related and relevant to the proposed method.

The umbrella term regularization in the context of deep neural networks covers a wide range of different approaches to alleviate overfitting [25]. The class of methods dealing with penalty-based regularization is of interest here. This class includes classical approaches such as weight decay [26] and weight smoothing [17]. Several other regularization terms have been proposed, e.g., [27, 28], but weight decay (ℓ2 regularization on network weights) remains the most widely used [25]. Most of these methods do not rely on data and hence carry no information about the population variability of the features or the data. This makes them malleable to a variety of architectures and problems, but they fail to exploit the data/feature structure and statistics to improve generalization.

Another class of regularization methods modifies the training data (data augmentation). These modifications include adding noise to the input or features [29, 30], as well as image transformations [31]. Data augmentation usually relies on some modification to input data or features that is meant to capture invariant properties of the application (e.g., transformations that do not change the classification output). This requires some knowledge of the domain/application, and it tends to work well in many classification tasks, where invariants are easier to formulate/imagine. Nonetheless, augmentation becomes nontrivial, if not infeasible, when we consider regression problems where invariants may be harder to discover. Data augmentation techniques are prevalent in the literature and in practice, and have proven to be very effective where applicable [32]. However, in domains where the data acquisition is expensive and the data robustness is critical (medical [33] and astronomy [34]), generic data augmentation methods may fail and there is a need to devise data/task specific tactics to achieve effective data augmentation [35, 36]. Furthermore, even when data augmentation is feasible, its effectiveness may be enhanced by the application of some complementary regularization strategy.

Network-architecture-driven regularization is another class of regularization methods, and this paper proposes a network-based framework to achieve data-driven regularization. Such regularization includes methods that use randomness in architecture (e.g., Dropout [37] and Dropconnect [38]), modify layer operations (e.g., pooling and maxout units [39]), and apply architectural modification (e.g., residual nets [1] and skip-connections [40]). All of these methods rely on data independent metrics to improve generalization and ignore population-level variations in the data and feature space of the given primary task.

Some earlier work hints at low-dimensional representations of features in deep learning [27], and a recent method, LDMNet [22], explores a similar idea of low-dimensional-manifold-based regularization. Nonetheless, LDMNet focuses on a geometry-driven representation of the joint manifold of the feature and data spaces that is specifically constructed for images. It further relies on the assumption that the data come from a set of low-dimensional manifolds. These assumptions limit the application of LDMNet to image classification problems, whereas we make no such assumptions about the data, and the proposed regularization framework can be applied to any network architecture and learning task.

Another relevant advance is the advent of interacting networks, where one network is the adversary of the other and the optimization seeks a solution to a min-max problem, as in generative adversarial networks (GANs) [41]. Some domain adaptation methods [42] also use interacting networks that ensure the consistency of latent features across data domains. Likewise, the TL-network trains two networks with different data inputs to share a latent space [43].

In this paper, we propose a cooperating subnetwork (CoopSubNet) that is trained in conjunction with the primary neural network. This subnetwork interacts in an unsupervised manner to enforce a penalty, or soft constraint, on the primary network's intermediate features, which ensures they lie on or near a low-dimensional latent manifold. The reconstruction loss of the CoopSubNet is minimized along with the primary loss function, and hence the networks are cooperating. The CoopSubNet provides a generic framework for data-driven regularization in applications entailing a limited training budget.

3 Methods

This paper deals with regularization methods that restrict a network or place penalties on its weights in order to curtail overfitting and improve generalization. The hypothesis behind this work is that directly restricting the degrees of freedom in a network (e.g., a bottleneck) or imposing penalties directly on its weights not only avoids overfitting but also interferes with the network's ability to learn or converge to an effective solution. As an alternative, we propose a cooperating subnetwork (CoopSubNet) as a general, data-driven regularization method for training deep networks under limited training-data budgets. A CoopSubNet is attached to the primary network to introduce a soft penalty that encourages the internal features of the primary network to lie close to a low-dimensional manifold. A CoopSubNet can regularize a variety of primary network architectures and tasks; it is added only during the training phase and therefore introduces no computational overhead during inference or testing. In this section, we first introduce the CoopSubNet formulation, followed by the network training procedure. In the results section, we demonstrate CoopSubNet applied to different tasks and network architectures and discuss an alternative approach to cooperative networks (e.g., hard as opposed to soft constraints).

Figure 1: CoopSubNet schematic diagram: A primary network (shown in the green box) comprises two subnetworks: a feature extraction subnetwork that learns a task/data-dependent functional mapping from the data space to a latent feature space, and an output subnetwork (e.g., classifier, regressor) that maps the feature space to the task-specific output. A CoopSubNet (shown in the orange box), with an auto-encoder architecture, is attached to the feature vector h of the primary network to produce the reconstruction ĥ. The bottleneck of the CoopSubNet, the vector z, defines the intrinsic dimension of the latent feature manifold.

3.1 Cooperating Subnetworks

The proposed cooperating subnetwork is illustrated in Figure 1. The figure shows two interacting networks: the primary network (green) that conducts the main learning task (e.g., classification, regression, segmentation), and the CoopSubNet as a secondary network (orange) that simultaneously learns a data-driven regularization. The specific architecture of the primary network is task-dependent, but it can be implicitly divided into two parts: (1) a feature extraction subnetwork that learns a functional mapping from the data space to a hidden, task-dependent feature space, and (2) an output subnetwork that maps the feature space to the final output. The CoopSubNet can be seen as a regularizer on the feature space, agnostic to the architecture of the primary network. To learn the underlying feature distribution or manifold, we train the CoopSubNet as a bottleneck auto-encoder. This architecture alleviates overfitting by forcing or encouraging the learned features to have a statistical structure that is well represented by a low-dimensional, nonlinear model [44].
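As a toy illustration of this two-part split, the following sketch (our own, with hypothetical linear layers and dimensions, not the authors' code) shows a primary network divided into a feature extractor and an output subnetwork, with a bottleneck auto-encoder attached to the intermediate features:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, b, m = 64, 32, 8, 10  # data dim, feature dim, bottleneck dim (b << n), output dim

# Primary network: feature extractor and output subnetwork (toy linear layers).
W_f = rng.standard_normal((d, n)) * 0.1
W_g = rng.standard_normal((n, m)) * 0.1
# CoopSubNet: bottleneck auto-encoder attached to the feature vector h.
W_enc = rng.standard_normal((n, b)) * 0.1
W_dec = rng.standard_normal((b, n)) * 0.1

x = rng.standard_normal((4, d))      # a batch of 4 inputs
h = np.tanh(x @ W_f)                 # intermediate features of the primary network
y_hat = h @ W_g                      # task output (e.g., class scores)
h_hat = np.tanh(h @ W_enc) @ W_dec   # CoopSubNet reconstruction of h
```

During training, both y_hat (via the primary loss) and h_hat (via the reconstruction penalty) contribute gradients to the shared feature extractor, which is what makes the two networks cooperate.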

Let D = {(x_i, y_i)}_{i=1}^{N} be a training data set with N samples, where x_i is a training data point that lives in a d-dimensional data space, and y_i is the corresponding target label/vector/matrix that lives in an m-dimensional output space. The primary network is parameterized by Θ = {Θ_f, Θ_o}. The feature extraction subnetwork f(·; Θ_f) maps a data point to a feature vector h in an n-dimensional feature space, and the output subnetwork g(·; Θ_o) maps the feature space to the output space. The primary network jointly learns the two mappings using the task-dependent primary loss L_P. The CoopSubNet is attached to the output layer of the feature subnetwork and is parameterized by Φ = {Φ_e, Φ_d}: the encoder e(·; Φ_e) maps the primary network's feature space to a latent vector z on a b-dimensional latent manifold, where b << n, and the decoder r(·; Φ_d) maps latent vectors back to the feature space, producing the reconstructed feature ĥ. The CoopSubNet introduces a soft penalty to the primary loss by adding a feature reconstruction loss as a regularizer. We first experimented with a standard auto-encoder reconstruction loss, but found a problem: in contrast to a stand-alone auto-encoder, the CoopSubNet is trained in conjunction with (i.e., at the same time as) the primary network, so simply scaling down the variances of the feature space trivially drives the reconstruction-error loss to zero. This is undesirable, because the same effect can be achieved more easily with a classical, data-independent ℓ2 regularizer. To avoid this trap, we use a relative loss, which normalizes the error by the norm of the input features. The final loss of the composite network (i.e., the primary network with a CoopSubNet) is formulated as

L(Θ, Φ) = L_P(Θ) + α L_C(Θ, Φ),    (1)

where

L_P(Θ) = (1/N) Σ_i ℓ(g(f(x_i; Θ_f); Θ_o), y_i),    (2)
L_C(Θ, Φ) = (1/N) Σ_i ||h_i − ĥ_i||² / ||h_i||²,  with h_i = f(x_i; Θ_f) and ĥ_i = r(e(h_i; Φ_e); Φ_d).    (3)

In particular, the normalization by ||h_i||² in the last equation (i.e., the relative reconstruction error) ensures that the optimizer does not simply drive ||h_i|| toward zero.
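A minimal NumPy sketch of this relative reconstruction penalty (our own illustration; the batch-mean reduction and the name `coop_loss` are assumptions, not the authors' code):

```python
import numpy as np

def coop_loss(primary_loss, h, h_hat, alpha=1.0):
    """Composite loss: primary loss plus the relative feature-reconstruction error.

    The per-sample error ||h - h_hat||^2 is normalized by ||h||^2, so simply
    shrinking the feature magnitudes cannot drive the penalty to zero.
    """
    rel = np.sum((h - h_hat) ** 2, axis=1) / np.sum(h ** 2, axis=1)
    return primary_loss + alpha * rel.mean()
```

With a perfect reconstruction (h_hat == h), the composite loss reduces to the primary loss alone.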

3.2 Network Training

The training procedure for CoopSubNet consists of two steps. The first step (termed the “burn-in” phase) trains only the primary network, i.e., we set α = 0, which corresponds to disabling the CoopSubNet. This is necessary because the burn-in iterations allow the primary network to learn a preliminary representation of the feature space to bootstrap the training of the CoopSubNet. In the second step, we jointly train both the primary network and the CoopSubNet by setting a suitable value for α. The hyperparameters for the CoopSubNet are the regularization weight α, the number of burn-in iterations, the bottleneck size (b), and the choice of where to attach the CoopSubNet, i.e., how to separate the primary network into a feature extraction subnetwork and an output subnetwork. These hyperparameters are discussed in Section 4.4.
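This two-phase schedule can be sketched as a weight applied to the CoopSubNet loss term (a hypothetical helper of our own; the burn-in length and the value of the weight are tuning choices):

```python
def coop_weight(iteration, burn_in_iters, alpha):
    """Weight on the CoopSubNet loss: 0 during burn-in, alpha afterwards."""
    return 0.0 if iteration < burn_in_iters else alpha

# Example schedule over 5 iterations with a 3-iteration burn-in:
schedule = [coop_weight(t, 3, 0.5) for t in range(5)]  # [0.0, 0.0, 0.0, 0.5, 0.5]
```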

Figure 2: MNIST–CoopSubNet/CoopSubNet-L1: Cooperating subnetwork architecture for MNIST classification. The green box represents the primary network, which uses the cross-entropy loss as the primary network loss. The CoopSubNet (i.e., the cooperating auto-encoder shown in the orange box) is attached to the feature vector h, producing the reconstruction ĥ.

4 Results

Reduced Data   0.5 %         1 %           10 %          50 %          100 %
Baseline       90.2 ± 0.41   93.2 ± 0.28   97.9 ± 0.05   98.9 ± 0.09   99.3 ± 0.02
CoopSubNet     93.7 ± 0.42   95.8 ± 0.27   98.2 ± 0.06   98.9 ± 0.07   99.4 ± 0.00
Dropout        90.5 ± 0.02   93.5 ± 0.01   97.8 ± 0.00   99.0 ± 0.00   99.2 ± 0.00
L2-reg         90.4 ± 0.32   94.1 ± 0.21   97.4 ± 0.03   98.0 ± 0.01   99.0 ± 0.00
HardCon        90.5 ± 0.38   93.5 ± 0.30   98.1 ± 0.02   98.9 ± 0.00   99.2 ± 0.00
Table 1: Reduced MNIST classification accuracy results: CoopSubNet and HardCon use the same bottleneck dimension b, chosen per training-data fraction via cross-validation (Section 4.1).

The proposed CoopSubNet framework is effective in improving supervised training accuracy in situations with a limited data budget. We demonstrate this on two tasks, classification and regression. In our experiments we compare against the following methods: (1) Baseline, a task-dependent architecture trained without any regularization. (2) CoopSubNet, an auto-encoder with a predefined bottleneck attached to the feature layer of the primary network; the composite network is trained with the loss in Eq. 1. (3) Dropout, a task-dependent architecture that includes dropout layers [37]. (4) L2-reg, which uses an ℓ2-norm on the primary network parameters as a generic weight-decay regularizer [17, 26]. (5) HardCon, which enforces the manifold assumption as a hard constraint by restricting the dimensionality of the bottleneck directly in the primary network, i.e., forcing features to lie on a low-dimensional manifold. Figure 3 illustrates adding such a hard bottleneck to the primary network for MNIST classification. We evaluate this hard-constraint variant in each of our experiments to provide insight into how a hard architectural constraint performs in comparison to the soft penalty enforced by the CoopSubNet. In all experiments we use the same bottleneck dimension, b, for both CoopSubNet and HardCon. We initialize all networks considered for comparison (including CoopSubNet variants) with Xavier initialization [45], use batch normalization on all convolution layers [46], and train using the Adam optimizer [47].

4.1 Reduced MNIST

We begin our experimental evaluation with a classical image classification problem. Specifically, we use a reduced version of the MNIST dataset (the full dataset contains 60,000 training and 10,000 testing images of handwritten digits). The “reduction” consists of randomly selecting a subset of the original MNIST data, simulating conditions with limited data budgets. The proposed cooperating subnetwork architecture is described in Figure 2, and we use the training procedure described in Section 3.2. We demonstrate the efficacy of CoopSubNet trained on 0.5%, 1%, 10%, and 50% of the total training data, and evaluate the testing accuracy on the entire testing set. The bottleneck dimensions b for the CoopSubNet were chosen using cross-validation for each training fraction. The results are summarized in Table 1 as percent classification accuracy on the test set. Each experiment was repeated with 5 randomly drawn training subsets; we report the means and standard deviations. In all of the experiments, the CoopSubNet learns to reconstruct well, with a relative reconstruction loss of approximately 1% (on the entire testing data). The proposed architecture (CoopSubNet) shows significant improvement over other methods, especially with limited amounts of training data, in particular the 0.5% and 1% subsets of the original training set. With more training data, the benefits of using CoopSubNet diminish; however, the results with CoopSubNet remain on par with or even slightly better than those of the other methods.
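The data-reduction step can be reproduced with a simple random subsample (our own sketch; the function name and seed are illustrative, not from the authors' code):

```python
import numpy as np

def reduce_dataset(x, y, fraction, seed=0):
    """Randomly keep a fraction of (x, y) pairs, simulating a limited data budget."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(fraction * len(x))))
    idx = rng.choice(len(x), size=n_keep, replace=False)  # sample without replacement
    return x[idx], y[idx]
```

For example, `reduce_dataset(x_train, y_train, 0.01)` yields the 1% training subset used in Table 2; repeating with different seeds gives the randomly drawn subsets reported in Table 1.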

Figure 3: MNIST–HardCon: Hard-constraint (HardCon) network architecture variant for MNIST classification. The dimension of the latent space acts as the hard constraint. We use the same bottleneck dimension b for the CoopSubNet and HardCon versions in all of our comparisons.

Let us now compare CoopSubNet with the hard-constraint variant (HardCon), which forces the features to lie exactly on a low-dimensional manifold. The HardCon version corresponds to a change of network architecture, while CoopSubNet corresponds to regularization. The results in Table 1 indicate that the soft constraint in CoopSubNet leads to better generalization than the hard constraint. We study this further in Table 2, using networks trained on 1% of the training data with latent space dimensions set to 1, 4, and 16. If the dimension of the bottleneck is set too low, HardCon's accuracy reduces to random chance, while CoopSubNet still performs well. At b = 16, the HardCon version starts to work, but CoopSubNet still produces better accuracy.

Bottleneck (b)   HardCon   CoopSubNet
1                9.8       94.7
4                9.8       94.9
16               93.1      95.1
Table 2: HardCon versus CoopSubNet: Results on reduced MNIST data trained with 1% of the training data, using three different bottleneck/latent dimensionalities. With a very constrained bottleneck, HardCon does not learn the underlying feature manifold, which drastically affects the classification performance of the primary network.
Figure 4: Landmark detection qualitative results: The landmark detection results on three different test images. The points marked in red are the ground truth landmarks, and the points in blue are the ones predicted by the different networks.
Figure 5: Landmark detection – CoopSubNet/CoopSubNet-L1: Cooperating subnetwork architecture for the landmark detection task. The primary loss is an ℓ2 difference between the network-estimated landmark positions and the ground truth ones.
Figure 6: Nuclei estimation – CoopSubNet/CoopSubNet-L1: Cooperating subnetwork architecture for cell nuclei estimation. The primary network predicts a probability map, and the primary loss is an ℓ2 difference between the estimated probability map and the ground truth nuclei map.
Figure 7: Nuclei estimation results on two different test samples. A zoom-in view of the red highlighted region in the second row is shown in the third row.

4.2 Facial Landmark Detection

Detecting facial landmarks in images is an example of a regression problem, belonging to the broader class of image-to-landmark problems studied in computer vision. Previous work shows that facial landmarks lie on a low-dimensional manifold [48], justifying the manifold assumption in CoopSubNet. In our experiments we use the IMM facial landmark dataset [49], which contains 240 facial images of 24 different people with varying head poses and illumination conditions. Each image is annotated with 58 landmarks in correspondence. Because the annotation process is time consuming, the dataset is relatively small for effective training of convolutional neural networks, making this task a perfect match for CoopSubNet. We split the IMM dataset into a training set (75%) and a test set (25%). The network architecture is described in Figure 5. Results on the test set are computed in terms of squared errors between the predicted and ground truth facial landmarks and are enumerated in Table 3. Each experiment was performed 5 times with a different sampling of the training and testing data (i.e., random splits); we report the means along with the standard deviations. The CoopSubNet outperforms all other methods by a significant margin. The higher accuracy of landmark prediction is visible even to the naked eye; see Figure 4. The latent space dimension (b) used for CoopSubNet in these results was 32. As in the MNIST experiments, we ensure that the CoopSubNet has a relative reconstruction loss of approximately 1% (on the entire testing data).

Method       Landmark Error
Baseline     6.86 ± 1.23
CoopSubNet   3.03 ± 0.83
Dropout      6.31 ± 0.34
L2-reg       6.41 ± 0.66
HardCon      5.89 ± 0.65
Table 3: Landmark detection quantitative results: CoopSubNet and HardCon bottlenecks are set to b = 32.

4.3 Cell Nuclei Probability Estimation

Method       L2 Error         Dice            Precision       Recall          F1-Score
Baseline     0.0225 ± 0.005   0.548 ± 0.012   0.853 ± 0.005   0.662 ± 0.013   0.741 ± 0.008
CoopSubNet   0.0192 ± 0.002   0.564 ± 0.020   0.846 ± 0.007   0.735 ± 0.022   0.794 ± 0.011
Dropout      0.0205 ± 0.001   0.528 ± 0.002   0.878 ± 0.002   0.658 ± 0.006   0.747 ± 0.005
L2-Reg       0.0233 ± 0.000   0.418 ± 0.032   0.769 ± 0.010   0.528 ± 0.025   0.621 ± 0.025
HardCon      0.0291 ± 0.007   0.107 ± 0.004   0.064 ± 0.001   0.011 ± 0.002   0.013 ± 0.002
Table 4: Nuclei estimation results. The bottlenecks for HardCon and CoopSubNet are set to b = 64. We report the reconstruction error (L2 Error), segmentation accuracy (Dice coefficient), and nuclei detection accuracy (precision, recall, and F1-score).

Cell nuclei detection is an image segmentation problem that can be formulated as an image-to-image regression task, where the output image is a probability map (pixels represent the probability of corresponding to a cell nucleus). This approach has been studied in recent work [50, 51]. Here, we use a public dataset from the Tissue Image Analytics Lab at Warwick, UK, as described in [16]. This dataset contains one hundred H&E-stained histology images of colorectal adenocarcinomas, with more than 25,000 nuclei centers marked for detection tasks. We split this dataset into 75% training and 25% testing data. The CoopSubNet architecture for this task is shown in Figure 6. We train this network on image patches extracted with a stride of 50 pixels in both dimensions, which results in 2,700 images for training and 900 images for testing. The associated binary label map has a single pixel highlighted at each nucleus center, which makes the learning process much harder due to unbalanced class labels. To mitigate this, we perform a single morphological dilation iteration using a square structuring element. For CoopSubNet and the reference methods, we use the ℓ2 difference between the estimated probability map and the ground truth nuclei map as the primary loss. We set the bottleneck dimension to 64 for CoopSubNet and HardCon. The results are shown visually in Figure 7 and quantitatively in Table 4. The relative reconstruction loss of CoopSubNet on the testing data is less than 2%. The proposed data-driven regularization mechanism shows both qualitative and quantitative (F1-score) improvement over the other regularization methods. In particular, the recall improves significantly, implying better detection of nuclei and fewer false negatives. This can also be observed visually in the zoomed-in region of the second sample shown in the third row of Figure 7. The HardCon results are quite blurry because the bottleneck dimension of 64 is insufficient for precise localization of the nuclei positions.
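The single dilation iteration on the one-pixel nucleus labels can be sketched in pure NumPy (our own illustration; the 3×3 structuring-element size is an assumption, since the size is not stated here):

```python
import numpy as np

def dilate_square(mask, r=1):
    """Binary dilation with a (2r+1) x (2r+1) square structuring element."""
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        # Stamp the square neighborhood of each foreground pixel, clipped at borders.
        out[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1] = True
    return out

label = np.zeros((7, 7), dtype=bool)
label[3, 3] = True               # single-pixel nucleus-center annotation
dilated = dilate_square(label)   # 3x3 blob around the center, easing class imbalance
```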

Quantitative metrics: We use several evaluation metrics to quantify performance on the nuclei detection task. First, we use the Euclidean distance between the output probability map and the ground truth labels (L2 Error in Table 4). To compute detection-based metrics, we binarize the probability map using Otsu thresholding [52], a clustering-based image thresholding method, to obtain a binary output label map. The segmentation accuracy is evaluated using the Dice coefficient [53], which quantifies the spatial overlap between the binarized output and the ground truth map. For precision, recall, and F1 scores, we employ a simple algorithm that captures the true essence of the number of nuclei detected. Starting with the binarized output, we compute connected components and the center-of-mass position of each component; these positions represent the detected nuclei. To find the true and false positives, we query the ground truth map at each of these positions; if a position corresponds to a unique nucleus, it is counted as a true positive (TP), otherwise as a false positive (FP). To compute the false negatives (FN), we count the number of ground truth nuclei that were not queried at all. Using these three values, TP, FP, and FN, we compute the metrics as follows: precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 = 2 · precision · recall / (precision + recall).
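Given the TP/FP/FN counts from the matching procedure described above, the final scores follow directly (a minimal sketch; the function name is ours):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```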

4.4 Hyperparameter Selection

CoopSubNet has four major hyperparameters. Here, we discuss the strategies for their tuning.

Bottleneck Selection: The dimension b of the bottleneck layer of the secondary network in CoopSubNet is an important hyperparameter that affects the training dynamics of the composite network. A smaller latent dimensionality magnifies the regularization effect, but a too-small b can derail learning the intrinsic structure of the underlying feature manifold. In our experiments, we found that, depending on the dataset and network architecture, there is a sweet spot for b that delivers the best performance. Alternatively, instead of tuning the bottleneck dimension directly, we tried making the bottleneck conservatively large and instead imposing an ℓ1 penalty on the latent vector (Figure 1) in addition to the composite network loss (Eq. 1). In our experiments with this alternative approach, we found that a sparse penalty on a large-enough bottleneck leads to results similar to CoopSubNet's. However, the direct CoopSubNet approach is easier to use.
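The sparse-bottleneck alternative adds an ℓ1 term on the latent vector to the composite loss; a minimal sketch (the weight `lam` and the function name are our assumptions):

```python
import numpy as np

def l1_latent_penalty(z, lam=1e-3):
    """Sparsity penalty on a batch of latent vectors z: lam * sum(|z|).

    With a conservatively large bottleneck, this drives unused latent
    dimensions toward zero instead of tuning the dimension b directly.
    """
    return lam * np.abs(z).sum()
```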

Contribution of the CoopSubNet loss: The second hyperparameter is the weight α of the CoopSubNet loss in the composite network loss. In our experiments, we found that as long as the CoopSubNet loss has the same order of magnitude as the primary loss, the performance of the regularized primary network is not sensitive to the specific value of α.

Burn-in iterations: We found that results are not particularly sensitive to the length of the initialization phase (“burn-in”). In all experiments, we set the burn-in to a fixed percentage of the total epochs.
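One plausible implementation of the burn-in schedule, assuming the regularizer is simply disabled for an initial fraction of training; the function name and the fraction parameter are illustrative, not taken from the paper:

```python
def coop_weight(epoch, total_epochs, lam, burn_in_frac):
    """Weight applied to the CoopSubNet loss at a given epoch.

    During the first burn_in_frac of training the primary network trains
    without the regularizer (weight 0); afterwards the full weight lam
    applies. burn_in_frac stands in for the fixed percentage mentioned
    in the text."""
    return 0.0 if epoch < burn_in_frac * total_epochs else lam
```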

Feature space: We can choose the layer of the primary network to which the CoopSubNet attaches (Figure 1). We experimented with different options in the MNIST architecture (Figure 2). If the CoopSubNet is attached too close to the input (after the first or second convolutional layer), it does not regularize as well; this can be attributed to the features not yet being defined enough to admit a low-dimensional representation. On the other hand, the performance of CoopSubNet changes only marginally when it is attached to layers close to the output of the primary network.
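Choosing the attachment layer amounts to tapping the forward pass at a given depth. A minimal sketch with plain callables standing in for the primary network's layers (all names are ours; a framework implementation would use its native hook mechanism, e.g. PyTorch's `register_forward_hook`):

```python
def forward_with_tap(layers, x, tap_index):
    """Run the primary network (a list of layer functions) and also
    return the intermediate features at tap_index, the layer where the
    CoopSubNet would attach and reconstruct."""
    tapped = None
    for i, layer in enumerate(layers):
        x = layer(x)
        if i == tap_index:
            tapped = x       # features handed to the CoopSubNet
    return x, tapped
```

Sweeping `tap_index` from early to late layers reproduces the experiment described above: early taps see under-formed features, while taps near the output behave similarly to one another.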

5 Conclusions and Future Work

This paper introduces a novel neural network architectural framework: a secondary cooperating subnetwork (CoopSubNet) that acts as a data-dependent regularizer. CoopSubNet is a flexible framework that can be used alongside any type of primary architecture. It is trained jointly with the primary network to enforce a soft constraint on the intermediate features, ensuring that they lie close to a low-dimensional manifold. The experimental results show that CoopSubNet is an effective regularization technique compared to standard regularization methods in problems with limited data budgets.

In terms of future research, other approaches to data-driven regularization could be studied. Our experiments provide promising results with the sparsity penalty on the bottleneck vector of CoopSubNet (discussed in Section 4.4), which motivates the exploration of other sparse penalties such as the automatic relevance determination (ARD) prior [54] or denoising autoencoders [24]. Other future research directions include exploring the performance of CoopSubNet in a system of networks such as GANs, and extending CoopSubNet to a semi-supervised learning framework, where the training data are only partially labeled.

References

  • [1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
  • [2] T.-H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, “PCANet: A simple deep learning baseline for image classification?,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5017–5032, 2015.
  • [3] A. Toshev and C. Szegedy, “DeepPose: Human pose estimation via deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1653–1660, 2014.
  • [4] R. Ranjan, V. M. Patel, and R. Chellappa, “HyperFace: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 1, pp. 121–135, 2019.
  • [5] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, pp. 91–99, 2015.
  • [6] N. Liu and J. Han, “DHSNet: Deep hierarchical saliency network for salient object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 678–686, 2016.
  • [7] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Facial landmark detection by deep multi-task learning,” in European conference on computer vision, pp. 94–108, Springer, 2014.
  • [8] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Learning deep representation for face alignment with auxiliary attributes,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 5, pp. 918–930, 2016.
  • [9] H. Nam and B. Han, “Learning multi-domain convolutional neural networks for visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4293–4302, 2016.
  • [10] J. Valmadre, L. Bertinetto, J. Henriques, A. Vedaldi, and P. H. Torr, “End-to-end representation learning for correlation filter based tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2805–2813, 2017.
  • [11] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations, 2015.
  • [12] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
  • [13] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
  • [14] A. Cruz-Roa, H. Gilmore, A. Basavanhally, M. Feldman, S. Ganesan, N. N. Shih, J. Tomaszewski, F. A. González, and A. Madabhushi, “Accurate and reproducible invasive breast cancer detection in whole-slide images: A deep learning approach for quantifying tumor extent,” Scientific reports, vol. 7, p. 46450, 2017.
  • [15] J. Shi, S. Zhou, X. Liu, Q. Zhang, M. Lu, and T. Wang, “Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset,” Neurocomputing, vol. 194, pp. 87–94, 2016.
  • [16] K. Sirinukunwattana, S. E. Ahmed Raza, Y.-W. Tsang, D. R. Snead, I. A. Cree, and N. M. Rajpoot, “Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images,” IEEE Trans. Med. Imaging, vol. 35, no. 5, pp. 1196–1206, 2016.
  • [17] K. J. Lang and G. E. Hinton, “Dimensionality reduction and prior knowledge in e-set recognition,” in Advances in neural information processing systems, pp. 178–185, 1990.
  • [18] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
  • [19] B. N. Saha, G. Kunapuli, N. Ray, J. A. Maldjian, and S. Natarajan, “AR-Boost: Reducing overfitting by a robust data-driven regularization strategy,” in Machine Learning and Knowledge Discovery in Databases, (Berlin, Heidelberg), pp. 1–16, Springer Berlin Heidelberg, 2013.
  • [20] M. Belkin, P. Niyogi, and V. Sindhwani, “Manifold regularization: A geometric framework for learning from labeled and unlabeled examples,” Journal of Machine Learning Research, vol. 7, pp. 2399–2434, 11 2006.
  • [21] T. Evgeniou, C. A. Micchelli, and M. Pontil, “Learning multiple tasks with kernel methods,” J. Mach. Learn. Res., vol. 6, pp. 615–637, Dec. 2005.
  • [22] W. Zhu, Q. Qiu, J. Huang, R. Calderbank, G. Sapiro, and I. Daubechies, “LDMNet: Low dimensional manifold regularized neural networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [23] M. E. Tipping and C. M. Bishop, “Probabilistic principal component analysis,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 61, no. 3, pp. 611–622, 1999.
  • [24] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” Journal of machine learning research, vol. 11, no. Dec, pp. 3371–3408, 2010.
  • [25] J. Kukacka, V. Golkov, and D. Cremers, “Regularization for deep learning: A taxonomy,” CoRR, vol. abs/1710.10686, 2017.
  • [26] A. Krogh and J. A. Hertz, “A simple weight decay can improve generalization,” in Advances in neural information processing systems, pp. 950–957, 1992.
  • [27] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive auto-encoders: Explicit invariance during feature extraction,” in ICML, 2011.
  • [28] M. Sajjadi, M. Javanmardi, and T. Tasdizen, “Regularization with stochastic transformations and perturbations for deep semi-supervised learning,” in NIPS, 2016.
  • [29] G. An, “The effects of adding noise during backpropagation training on a generalization performance,” Neural Computation, vol. 8, pp. 643–674, April 1996.
  • [30] T. Devries and G. W. Taylor, “Dataset augmentation in feature space,” CoRR, vol. abs/1702.05538, 2017.
  • [31] P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings., pp. 958–963, Aug 2003.
  • [32] E. D. Cubuk, B. Zoph, D. Mané, V. Vasudevan, and Q. V. Le, “AutoAugment: Learning augmentation policies from data,” CoRR, vol. abs/1805.09501, 2018.
  • [33] M. I. Razzak, S. Naz, and A. Zaib, Deep Learning for Medical Image Processing: Overview, Challenges and the Future, pp. 323–350. Cham: Springer International Publishing, 2018.
  • [34] E. J. Kim and R. J. Brunner, “Star–galaxy classification using deep convolutional neural networks,” Monthly Notices of the Royal Astronomical Society, vol. 464, pp. 4463–4475, 10 2016.
  • [35] M. Frid-Adar, I. Diamant, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification,” Neurocomputing, vol. 321, pp. 321–331, 2018.
  • [36] R. Bhalodia, S. Y. Elhabian, L. Kavan, and R. T. Whitaker, “DeepSSM: A deep learning framework for statistical shape modeling from raw images,” in Shape In Medical Imaging at MICCAI, vol. 11167 of Lecture Notes in Computer Science, pp. 244–257, Springer, 2018.
  • [37] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [38] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus, “Regularization of neural networks using dropconnect,” in Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML’13, pp. III–1058–III–1066, JMLR.org, 2013.
  • [39] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, “Maxout networks,” in Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML’13, pp. III–1319–III–1327, JMLR.org, 2013.
  • [40] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, pp. 3431–3440, IEEE Computer Society, 2015.
  • [41] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27, pp. 2672–2680, Curran Associates, Inc., 2014.
  • [42] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096–2030, 2016.
  • [43] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta, “Learning a predictable and generative vector representation for objects,” CoRR, vol. abs/1603.08637, 2016.
  • [44] G. Alain and Y. Bengio, “What regularized auto-encoders learn from the data-generating distribution,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 3563–3593, 2014.
  • [45] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS’10), 2010.
  • [46] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pp. 448–456, JMLR.org, 2015.
  • [47] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [48] Y. Chang, C. Hu, R. Feris, and M. Turk, “Manifold based analysis of facial expression,” Image and Vision Computing, vol. 24, pp. 605–614, 06 2006.
  • [49] M. M. Nordstrøm, M. Larsen, J. Sierakowski, and M. B. Stegmann, “The IMM face database - an annotated dataset of 240 face images,” tech. rep., May 2004.
  • [50] H. Höfener, A. Homeyer, N. Weiss, J. Molin, C. F. Lundström, and H. K. Hahn, “Deep learning nuclei detection: A simple approach can deliver state-of-the-art results,” Computerized Medical Imaging and Graphics, vol. 70, pp. 43 – 52, 2018.
  • [51] M. Z. Alom, C. Yakopcic, T. M. Taha, and V. K. Asari, “Microscopic nuclei classification, segmentation and detection with improved deep convolutional neural network (DCNN) approaches,” CoRR, vol. abs/1811.03447, 2018.
  • [52] T. Kurita, N. Otsu, and N. Abdelmalek, “Maximum likelihood thresholding based on population mixture models,” Pattern recognition, vol. 25, no. 10, pp. 1231–1240, 1992.
  • [53] A. A. Taha and A. Hanbury, “Metrics for evaluating 3d medical image segmentation: analysis, selection, and tool,” BMC medical imaging, vol. 15, no. 1, p. 29, 2015.
  • [54] M. E. Tipping, “Sparse bayesian learning and the relevance vector machine,” Journal of machine learning research, vol. 1, no. Jun, pp. 211–244, 2001.