UberNet: Training a `Universal' Convolutional Neural Network for Low-, Mid-, and High-Level Vision using Diverse Datasets and Limited Memory

by   Iasonas Kokkinos, et al.

In this work we introduce a convolutional neural network (CNN) that jointly handles low-, mid-, and high-level vision tasks in a unified architecture that is trained end-to-end. Such a universal network can act like a `swiss knife' for vision tasks; we call this architecture an UberNet to indicate its overarching nature. We address two main technical challenges that emerge when broadening up the range of tasks handled by a single CNN: (i) training a deep architecture while relying on diverse training sets and (ii) training many (potentially unlimited) tasks with a limited memory budget. Properly addressing these two problems allows us to train accurate predictors for a host of tasks, without compromising accuracy. Through these advances we train in an end-to-end manner a CNN that simultaneously addresses (a) boundary detection (b) normal estimation (c) saliency estimation (d) semantic segmentation (e) human part segmentation (f) semantic boundary detection, (g) region proposal generation and object detection. We obtain competitive performance while jointly addressing all of these tasks in 0.7 seconds per frame on a single GPU. A demonstration of this system can be found at http://cvn.ecp.fr/ubernet/.





1 Introduction

| Task | ImageNet [88] | VOC'07 [26] | VOC'10 [26] | VOC'12 [26] | MS-COCO [62] | NYU [74] | MSRA10K [18] | BSD [69] |
|---|---|---|---|---|---|---|---|---|
| Detection | Partial | Yes | Yes | Yes | Yes | No | No | No |
| Semantic Segmentation | No | Partial | [71, 36] | Partial | Yes | Yes | No | No |
| Instance Segmentation | No | Partial | [71, 36] | Partial | Yes | No | No | No |
| Human parts | No | No | [17] | No | No | No | No | No |
| Human landmarks | No | No | [10] | No | Yes | No | No | No |
| Surface Normals | No | No | No | No | No | Yes | No | No |
| Saliency | No | No | No | No | No | No | Yes | No |
| Boundaries | No | No | [71] | No | No | No | No | Yes |
| Symmetry | No | No | No | No | Partial, [91] | No | No | [97] |
Table 1: No single training set can accommodate all vision tasks: several datasets contain annotations for multiple tasks, and have even been extended, e.g. [71, 17, 10], but as the number of tasks grows it becomes impossible to use one dataset for all.

Computer vision involves a host of tasks, such as boundary detection, semantic segmentation, surface estimation, object detection, and image classification, to name a few. While Convolutional Neural Networks (CNNs) have been the method of choice for text recognition for more than two decades [56], they have only recently been shown to be successful at handling effectively most, if not all, vision tasks.

Considering only works that apply to a single, static image, we can indicatively list successes of CNNs in super-resolution [24], colorization [55], boundary detection [6, 28, 107, 49], symmetry detection [91], interest point detection [109], image descriptors [110, 92, 35], surface normal estimation [25, 103, 2], depth estimation [25, 63, 64], intrinsic image decomposition [73], shadow detection [108], texture classification [19], material classification [4], saliency estimation [112, 58], semantic segmentation [27, 67, 14], region proposal generation [85, 82, 30], instance segmentation [37, 82, 21], pose estimation, part segmentation, and landmark localization [111, 96, 98, 15, 81, 75, 3, 42, 104], as well as the large body of works around object detection and image classification, e.g. [52, 89, 93, 94, 31, 85, 32, 40, 22]. Most of these works rely on finetuning a common pretrained CNN, such as the VGG network [93] or others [52, 40], which indicates the broad potential of these CNNs.

However, each of these works ends up with a task-specific CNN, and potentially a mildly different architecture. If one wanted to perform two tasks, one would need to train and test with separate networks. In our understanding, a joint treatment of multiple problems can result not only in simpler, faster, and better systems, but will also be a catalyst for reaching out to other fields. One can expect that such all-in-one, "swiss knife" architectures will become indispensable for general AI, involving for instance robots that will be able to recognize the scene they are in, recognize objects, navigate towards them, and manipulate them. Furthermore, having a single visual module to address a multitude of tasks will make it possible to explore methods that improve performance on all of them, rather than developing narrow, problem-specific techniques. Apart from simplicity and efficiency, the problem can also be motivated by arguing that training a network to accomplish multiple tasks leaves smaller space for 'blindspots' [95], effectively providing a more complete specification of the network's duties. Finally, the particular motivation for this research has been the interest in studying the synergy between different visual tasks (e.g. the long-standing problem of combining segmentation and recognition [47, 72, 9, 99, 50, 53, 68]), so this work can be understood as a first step in this direction.

The problem of using a single network to solve multiple tasks has been repeatedly pursued in the context of deep learning for computer vision. In [89] a CNN is used for joint localization, detection and classification, [25] propose a network that jointly solves surface normal estimation, depth estimation and semantic segmentation, and [33] train a system for joint detection, pose estimation and region proposal generation; [70] study the effects of sharing information across networks trained for complementary tasks; more recently, [8] propose the introduction of inter-task connections that can improve performance through task synergy, while [84] propose an architecture encompassing a host of face-related tasks.

Inspired by these advances, in this work we introduce two techniques that allow us to expand the range of tasks handled by a single deep network, and thereby make it possible to train a single network for multiple, diverse tasks, without sacrificing accuracy.

Our first contribution consists in exploring how a CNN can be trained from diverse datasets. This problem inevitably shows up once we aim at breadth, since no single dataset currently contains ground truth for all possible tasks. As shown in Table 1, high-level annotation (e.g. object positions, landmarks in PASCAL VOC [26]) is often missing from the datasets used for low-level tasks (e.g. BSD [69]), and vice versa. If we consider for instance a network that is supposed to predict both human landmarks and surface normals, we have no dataset where an image comes with annotations for both tasks, but rather disjoint datasets (NYU [74], and PASCAL VOC [26], or any other pose estimation dataset for keypoints), each providing every image with annotations for only one of the two.

In order to handle this challenge we introduce in Sec. 3 a loss function that only relies on the ground truth available per training sample, shunning the losses of tasks for which no ground truth is available at this sample. We combine this loss function with Stochastic Gradient Descent, and end up updating a network parameter only once we have observed a sufficient number of training samples related to that parameter. This results in an asynchronous variant of backpropagation and allows us to train our CNN in an end-to-end manner.

Our second contribution aims at addressing the limitations of current hardware used for deep learning - in particular the limited memory available on modern Graphics Processing Units (GPUs). As the number of tasks increases, the memory demand of a naively implemented back-propagation algorithm can increase linearly in the number of tasks, with a factor proportional to the memory requested by task-specific network layers. Instead, we build on recent developments in learning with deep architectures [34, 16] which have shown that it is possible to efficiently train a deep CNN with a memory complexity that is sublinear in the number of layers. We develop a variant that is customized to our multi-task architecture and allows us to perform end-to-end network training for a practically unlimited number of tasks, since the memory complexity is independent of the number of tasks.

Our current architecture has been systematically evaluated on the following tasks: (a) boundary detection, (b) normal estimation, (c) saliency estimation, (d) semantic segmentation, (e) semantic part segmentation, (f) semantic boundary detection, and (g) proposal generation and object detection. Our present system operates in 0.6-0.7 seconds per frame on a GPU and delivers results that are competitive with the state-of-the-art for these tasks.

We start by specifying in Sec. 2 the architecture of our CNN and then turn to our contributions on learning from diverse datasets and dealing with memory constraints in Sec. 3 and Sec. 4 respectively.

2 UberNet architecture

We now introduce the architecture of our network, shown in Fig. 2. Aiming at simplicity, we introduce a minimal number of additional, task-specific layers on top of a common CNN trunk that is based on the VGG network. Clearly, we can always include on top additional layers and parameters, e.g. U-net type architectures [87, 76, 75, 29], Dense CRF post-processing [51, 14, 113] or bilateral filter-type smoothing [39, 12], as well as more general structured prediction with CNN-based pairwise terms [61, 11] - but we leave this for future work.

Figure 2: UberNet architecture for jointly solving multiple labelling tasks: an image pyramid is formed by successive downsampling operations, and each image is processed by a CNN with tied weights; skip-layer pooling at different layers of the VGG network is combined with batch normalization to provide features that are then used to form all task-specific responses; these are combined across network layers and resolutions to form task-specific decisions. Loss functions at the individual-scale and fused responses are used to train task responses in a task-specific manner. For simplicity we omit interpolation, normalization and object detection layers; further details are provided in the text.

The starting point is that of using a standard 'fully' convolutional network [56, 89, 79, 67], namely a CNN that provides a field of decision variables, rather than a single classification, at its output; this can be used to accomplish any dense labelling or regression task, such as boundary detection, normal estimation, or semantic segmentation. We now describe our modifications to this basic architecture.

Skip layers: one first deviation from the most standard architecture is that, as in [89, 38, 107, 49], we use skip layers that combine the top-layer neurons with the activations of intermediate neurons to form the network output. Tasks such as boundary detection clearly profit from the smaller degree of spatial abstraction of lower-level neurons [107], while even for high-level tasks, such as semantic segmentation, it has been shown [38, 67] that skip layers can improve performance. In particular we pool features from layers of the VGG-16 network, as shown in Fig. 2.

Skip-layer normalization: modifying slightly [66, 5], we use batch normalization [43] prior to forming the inner product with intermediate layers; this alleviates the need for the very low learning rates used in [107, 49]. One exception is the last layer of the VGG network (Fig. 2), which has already been trained so as to be an appropriate argument to a linear classifier, and therefore seemed to do better without normalization.

Cumulative task-specific operations: scaling up to many tasks requires keeping the task-specific memory and computation budget low, and we therefore choose, as in [107, 49], to process the outputs of the skip-pooling with task-specific layers that perform linear operations. In particular, if we denote by f_1, …, f_K the neuron activations pooled from the layers used to obtain the score at a given image position, the output for task t is a linear function:

s^t = W^t [f_1; …; f_K] = Σ_{k=1}^K W^t_k f_k.

Rather than explicitly allocating memory for the vector formed by concatenating the intermediate activations and then forming the matrix product, we instead compute the intermediate results W^t_k f_k per layer, and then add them up. This yields the same result, but acts like a low-memory online accumulation of scores across skip layers.
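A minimal NumPy sketch of this accumulation (array sizes and names are illustrative, not from the paper's implementation): summing per-layer products W_k f_k yields exactly the score of a single product against the concatenated feature vector, without ever materializing that vector.

```python
import numpy as np

rng = np.random.default_rng(0)
f = [rng.standard_normal(c) for c in (64, 128, 256)]        # per-layer features at one position
W = [rng.standard_normal((21, c)) for c in (64, 128, 256)]  # per-layer task weights (21 classes)

# Option 1: concatenate features, then one big matrix product
# (requires a buffer for the full concatenated vector).
score_concat = np.concatenate(W, axis=1) @ np.concatenate(f)

# Option 2: accumulate per-layer partial scores online (no concatenated buffer).
score_accum = np.zeros(21)
for W_k, f_k in zip(W, f):
    score_accum += W_k @ f_k

assert np.allclose(score_concat, score_accum)
```

Both routes compute the same linear function; the second only ever holds one layer's features and a running score in memory.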

Fusion layers: For the fusion layers, denoted by circular connections in Fig. 2 we observe that instead of simply adding the scores (sum-fusion), one can accelerate training by concatenating the score maps and learning a linear function that operates on top of the concatenated score maps - as originally done in [107]. This scheme is clearly still learning in the end a linear function with the same number of free parameters, but this decomposition, which can be intuitively understood as some form of preconditioning, seems to be more effective. When also back-propagating on the intermediate layers, this typically also results in better performance. We note that for simplicity we assume correspondence across layer positions; these are handled in our network by appropriate interpolation layers which in our diagram are understood to be included in the circular nodes.

Atrous convolution: we also use convolution with holes (à trous) [79, 14], which allows us to control the spatial resolution of the output layer. In particular we use à trous convolution to obtain an output stride of 8 (rather than 16), which gives us a moderate boost in tasks such as boundary detection or semantic segmentation.
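As a rough 1-D illustration of convolution with holes (the function and all values are made up for the example, not taken from the paper's code): spacing the filter taps `rate` samples apart enlarges the receptive field without adding any parameters.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D convolution 'with holes': taps spaced `rate` samples apart.
    Illustrative sketch only (no padding, stride 1)."""
    k = len(w)
    span = (k - 1) * rate + 1          # receptive field of one output sample
    return np.array([sum(w[j] * x[i + j * rate] for j in range(k))
                     for i in range(len(x) - span + 1)])

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
out1 = atrous_conv1d(x, w, rate=1)     # ordinary convolution, span 3
out2 = atrous_conv1d(x, w, rate=2)     # same 3 taps, span 5
assert len(out1) == 8 and len(out2) == 6
assert out2[0] == x[0] + x[2] + x[4]   # taps two samples apart
```

With `rate=2` the same three weights see a window of five input samples, which is the mechanism used to keep the output stride at 8 instead of 16.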

Multi-resolution CNN: as in [46, 79, 49, 15], rather than processing an image at a single resolution, we form an image pyramid and pass scaled versions of the same image through CNNs with shared weights. This allows us to deal with the scale variability of image patterns. Even though in [15] a max-fusion scheme is shown to yield higher accuracy than sum-fusion, in our understanding this is particular to the case of semantic segmentation, where a large score at any scale suffices to assign an object label to a pixel. This may not be the case for boundaries where the score should be determined by the accumulation of evidence from multiple scales [105] or for normals, where maximization over scales of the normal vector entries does not make any sense. We therefore use a concatenation of the scores followed by a linear operation, as in the case of fusing the skip-layers described above, and leave the exploration of scale-aware processing [15] for the future.
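A toy illustration of the fusion choice (the weights are invented for the example): a linear fusion with nonnegative weights summing to one accumulates evidence across scales and is bounded above pointwise by max-fusion, which lets any single scale dominate.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = [rng.standard_normal((5, 5)) for _ in range(3)]  # one score map per pyramid level

# Max-fusion: a strong response at any single scale wins outright.
fused_max = np.maximum.reduce(scores)

# Concatenate-and-linear fusion (here a convex combination, weights illustrative):
# evidence from all scales is accumulated instead of taking the strongest one.
alpha = np.array([0.5, 0.3, 0.2])
fused_lin = sum(a * s for a, s in zip(alpha, scores))

assert fused_max.shape == fused_lin.shape == (5, 5)
# A convex combination can never exceed the pointwise maximum.
assert np.all(fused_max >= fused_lin - 1e-9)
```

This is the behaviour the text argues for in boundary detection, where the score should reflect accumulated evidence from multiple scales rather than the single strongest one.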

This multi-resolution processing is incorporated in the network definition, and as such can be accounted for during end-to-end training. In this pyramid the highest-resolution image is set, similarly to [32], so that the smallest image dimension is 621 pixels and the largest dimension does not exceed 921 (the exact values are chosen so that the dimensions have the particular form required by [49, 15]).

As in [49, 15] we use loss layers both at the outputs of the individual scales and the final responses, amounting to a mild form of deep supervision network (DSN) training [107].

Task-specific deviations: All of these choices have been separately validated on individual tasks, and then integrated in the common architecture shown in Fig. 2. There are still however some task-specific deviations.

One exception to the uniform architecture outlined above is detection, where we follow the work of [85] and learn a convolutional region proposal network, followed by a fully-connected subnetwork that classifies the region proposals into one of 21 labels (20 classes and background). However, recent advances [22, 65] may make this exception unnecessary.

Furthermore, the output of each of the task-specific streams is penalized by a loss function that is adapted to the task at hand. For region labelling tasks (semantic segmentation, human parts, saliency) and object detection we use the softmax loss function, as is common in all recent works on semantic segmentation [67, 13] and object detection [32]. For regression tasks (normal estimation, bounding box regression) we use the smooth ℓ1 loss [32]. For normal estimation we apply an ℓ2 normalization prior to penalizing with the loss, since surface normals are unit-norm vectors. For tasks where we want to estimate thin structures (boundaries, semantic boundaries) we use the MIL-based loss function introduced in [49] in order to accommodate imprecision in the placement of the boundary annotations. For these two tasks we also have a class imbalance problem, with many more negatives than positives; we mitigate this by using a weighted cross-entropy loss, as in [107], weighting the positives by the fraction of negative samples and the negatives by the fraction of positive samples.
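A small sketch of the surface-normal loss described above (function names are ours; the smooth ℓ1 form follows Fast R-CNN [32]): the prediction is ℓ2-normalized before the penalty, so only the direction of the predicted normal matters, not its magnitude.

```python
import numpy as np

def smooth_l1(x):
    """Elementwise smooth L1 (Huber-like) penalty, as in Fast R-CNN."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def normal_loss(pred, gt):
    """Sketch of the normal-estimation loss: L2-normalize the predicted
    vector (surface normals are unit-norm), then penalize with smooth L1."""
    pred = pred / np.linalg.norm(pred)
    return smooth_l1(pred - gt).sum()

gt = np.array([0.0, 0.0, 1.0])
# A prediction pointing the right way but with the wrong magnitude
# incurs (near-)zero loss once normalized.
assert normal_loss(np.array([0.0, 0.0, 5.0]), gt) < 1e-12
```

The normalization makes the regression target scale-invariant, which matches the unit-norm constraint on ground-truth normals.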

Furthermore, for low-level tasks such as boundary detection, normal estimation and saliency estimation, as well as semantic boundary detection we set the spatial resolution of the scores to be equal to that of the image - this allows us to train with a loss function that has a higher degree of spatial accuracy and allows us to accurately localize small structures. For region labelling tasks such as semantic segmentation, or human part segmentation we realized that we do not really need this level of spatial accuracy, and instead train score maps that employ a lower spatial resolution, using a downsampling factor of 8 with respect to the original image dimensions. This results in a 64-fold reduction of the task-specific computation and memory demands.

3 Multi-Task Training using Diverse Datasets

Having described our network’s architecture, we now turn to parameter estimation. Our objective is to train in an end-to-end manner both the VGG-based CNN trunk that delivers features to all tasks, and the weights of the task-specific layers.

As described in the introduction, the main challenge that we face is the diversity of the tasks that we wish to cover. In order to handle the diversity of the available datasets one needs to handle missing ground-truth data during training. Certain recent works such as [78, 77, 20] manage to impute missing data in an EM-type approach, by exploiting domain-specific knowledge - e.g. by requesting that a fixed percentage of the pixels contained in the bounding box of an object obtain the same label as the object. This, however, may not be possible for arbitrary tasks, e.g. normal estimation.

Instead, we propose to adapt the loss function to the information that we have per sample, and set to zero the loss of tasks for which we have no ground-truth. While the idea is straightforward, as we describe below some care needs to be taken when optimizing the resulting loss with backpropagation, so as to ensure that the (stochastic) estimates of the parameter gradients accumulate evidence from a sufficient number of training samples.

Our training objective is expressed as the sum of per-task losses and regularization terms applied to the parameters of the task-specific, as well as shared, layers:

L(w_0, w_1, …, w_T) = Σ_{t=1}^T λ_t L_t(w_0, w_t) + Σ_{t=0}^T R(w_t).    (3)

In Eq. 3 we use t to index tasks; w_0 denotes the weights of the common CNN trunk, and w_t are the task-specific weights; λ_t is a hyperparameter that determines the relative importance of task t, R(·) is an ℓ2 regularization on the relevant network weights, and L_t is the task-specific loss function.

This task-specific loss is written as follows:

L_t(w_0, w_t) = Σ_i δ_{i,t} ℓ_t(y_i^t, f_t(x_i; w_0, w_t)),    (4)

where we use i to index training samples, denote by f_t(x_i; w_0, w_t) and y_i^t the task-specific network prediction and ground truth at the i-th example respectively, and by δ_{i,t} ∈ {0, 1} we indicate whether example i comes with ground truth for task t. If δ_{i,t} = 0 we can set y_i^t to an arbitrary value without affecting the loss - i.e. we do not need to impute the ground truth.
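The per-sample masked loss can be sketched in a few lines of Python (task names, loss functions and weights are placeholders, not the paper's): a task for which a sample carries no ground truth contributes exactly zero, with no imputation.

```python
# Minimal sketch of a per-sample masked multi-task loss: tasks without
# ground truth for this sample are simply skipped (their indicator is 0).
def sample_loss(preds, gts, task_losses, weights):
    total = 0.0
    for task, loss_fn in task_losses.items():
        if task in gts:                 # delta_{i,t} = 1
            total += weights[task] * loss_fn(preds[task], gts[task])
        # delta_{i,t} = 0: the task contributes nothing for this sample
    return total

l2 = lambda p, g: (p - g) ** 2
losses = {"seg": l2, "normals": l2}
weights = {"seg": 1.0, "normals": 0.5}

# This sample only carries segmentation ground truth:
val = sample_loss({"seg": 2.0, "normals": 9.9}, {"seg": 1.0}, losses, weights)
assert val == 1.0  # (2 - 1)^2 * 1.0; the normals head is ignored
```

In the real network the predictions are dense maps rather than scalars, but the masking logic is the same.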

We now turn to accounting for the interplay between the indicator term δ_{i,t} and the minimization of Eq. 3 through Stochastic Gradient Descent (SGD). Since we want to train our network in an end-to-end manner for all tasks, we consider as our training set the union of different training sets, each of them containing pairs of images and ground truth for distinct tasks. Images from this set are sampled uniformly at random; as is common practice, we use multiple epochs and within each epoch sample without replacement.

  Synchronous SGD - backprop  

  for b = 1 to B do
     B_b ← sample M images {construct minibatch}
     g_t ← 0, t = 0, …, T {initialize gradient accumulators}
     for i ∈ B_b do
        g_0 ← g_0 + Σ_t λ_t ∇_{w_0} ℓ_t(i) {cnn gradients}
        g_t ← g_t + δ_{i,t} λ_t ∇_{w_t} ℓ_t(i) {task gradients, t = 1, …, T}
     end for
     for t = 0 to T do
        w_t ← w_t − γ (g_t / M + λ w_t)
     end for
  end for  
Table 2: Pseudocode for the standard, synchronous stochastic gradient descent algorithm for back-propagation training. We update all parameters at the same time, after observing a fixed number of samples.

  Asynchronous SGD - backprop  

  g_t ← 0, t = 0, …, T {initialize gradient accumulators}
  c_t ← 0, t = 0, …, T {initialize counters}
  for i = 1 to N do
     g_0 ← g_0 + Σ_t λ_t ∇_{w_0} ℓ_t(i); c_0 ← c_0 + 1 {cnn gradients & counter: always updated}
     for t = 1 to T do
        if δ_{i,t} = 1 then
           g_t ← g_t + λ_t ∇_{w_t} ℓ_t(i); c_t ← c_t + 1 {update accumulator and counter for task t if the current sample is relevant}
        end if
     end for
     for t = 0 to T do
        if c_t = M_t then
           w_t ← w_t − γ (g_t / M_t + λ w_t); g_t ← 0; c_t ← 0 {update parameters if we have seen enough}
        end if
     end for
  end for  
Table 3: Pseudocode for our asynchronous stochastic gradient descent algorithm for back-propagation training. We update a task-specific parameter only after observing sufficiently many training samples that pertain to the task.

Considering that we use a minibatch B of size M, plain SGD for task t would lead to the following update rules:

w_0 ← w_0 − γ ( (1/M) Σ_{i∈B} Σ_t λ_t ∇_{w_0} ℓ_t(i) + λ w_0 )
w_t ← w_t − γ ( (1/M) Σ_{i∈B} δ_{i,t} λ_t ∇_{w_t} ℓ_t(i) + λ w_t ),

where the weight-decay term λ w results from the ℓ2 regularization and ∇ℓ_t(i) denotes the gradient of the loss for task t with respect to the relevant parameter vector. The difference between the two update terms is that the parameters w_0 of the common trunk affect all tasks, and as such accumulate the gradients over all tasks, while the task-specific parameters w_t are only affected by the subset of images for which δ_{i,t} = 1.

We observe that this comes with an important flaw: if we have a small batch size, the update rule for w_t may use a too-noisy gradient if the number of task-relevant samples, Σ_{i∈B} δ_{i,t}, happens to be small - it may even be that no task-related samples are in the present minibatch. We have empirically observed that this can often lead to erratic behaviour, which we originally handled by increasing the minibatch size to quite large numbers (50 images or more, as opposed to 10 or 20). Even though this mitigates the problem partially, it is highly time-inefficient, and will also not scale up to solving, say, 10 or 20 tasks simultaneously.

Instead of this brute-force approach we propose a modified variant of backpropagation that more naturally handles the problem by updating the parameters of a task only once sufficiently many relevant images have been observed. Pseudocode for this scheme, in contrast to the standard SGD scheme, is provided in Table 3.

In particular we no longer have 'a minibatch', but rather treat images in a streaming mode, keeping one counter per task - as can be seen, rather than B outer loops, where B is the number of minibatches, we use N outer loops, equalling the number of images treated by the original scheme.

Whenever we process a training sample that contains ground truth for a task, we increment the task counter and add the current gradient to a cumulative gradient sum. Once the task counter exceeds a threshold we update the task parameters and then reset the counter and cumulative gradient to zero. Clearly, the common CNN parameters are updated regularly, since their counter is incremented for every single training image. This is not the case for the task-specific parameters, which remain untouched by training images that lack the relevant ground truth.
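A toy simulation of this bookkeeping (gradients are stand-in scalars and all constants are invented): each task keeps its own accumulator and counter, and its parameters are updated only once M_t relevant samples have been seen, independently of the other tasks.

```python
import random

random.seed(0)
T = 3                                   # number of tasks
M = {t: 4 for t in range(T)}            # per-task effective batch sizes M_t
grad_acc = {t: 0.0 for t in range(T)}   # per-task gradient accumulators
count = {t: 0 for t in range(T)}        # per-task sample counters
w = {t: 1.0 for t in range(T)}          # toy task parameters
lr = 0.1
updates = {t: 0 for t in range(T)}

for i in range(100):                    # stream of training samples
    has_gt = {t: random.random() < 0.5 for t in range(T)}  # which tasks have ground truth
    for t in range(T):
        if has_gt[t]:                   # delta_{i,t} = 1: accumulate
            grad_acc[t] += 0.01         # stand-in for the true gradient
            count[t] += 1
        if count[t] == M[t]:            # enough evidence: update, then reset
            w[t] -= lr * grad_acc[t] / M[t]
            grad_acc[t], count[t] = 0.0, 0
            updates[t] += 1

# Each task is updated at its own pace, decoupled from the others.
assert all(u > 0 for u in updates.values())
```

In the actual system the shared-trunk parameters form an extra "task" whose counter is incremented on every sample, with a larger effective batch size than the task-specific heads.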

This results in an asynchronous variant of backpropagation, in the sense that any parameter can be updated at a time instance that is independent of the others. We note that apart from implementing the necessary book-keeping, this scheme requires no additional memory or computation. It is also clear that the ‘asynchronous’ term relates to the manner in which parameters for different tasks are updated, rather than the computation itself, which in our implementation is single-node.

We also note that, according to the pseudocode, we allow ourselves to use different 'effective batch sizes' M_t, which we have observed to be useful for training. In particular, for detection tasks it is reported in [32] that a batch size of two suffices, while for dense labelling tasks such as semantic segmentation a batch size of 10, 20 or even 30 is often used [107, 14]. In our training we use an effective batch size of 2 for detection, a larger one for all other task-specific parameters, and a larger one still for the shared CNN features. The reasoning behind using this larger batch size for the shared CNN features is that we want their updates to absorb information from a larger number of images, containing multiple tasks, so that the task-specific idiosyncrasies will cancel out. In this way it becomes more likely that the average gradient will serve all tasks, and we avoid the 'moving target' problem, where every task quickly changes the shared representation used by the other tasks, making optimization harder.

One subtle difference is that in synchronous SGD the stochastic estimate of the gradient used in the update equals:

ĝ_t = (1/M) Σ_{i∈B} δ_{i,t} ∇_{w_t} ℓ_t(i),

while for the asynchronous case it will equal:

ĝ_t = (1/M_t) Σ_{i∈S_t} ∇_{w_t} ℓ_t(i),

where S_t indicates a subsequence of M_t samples which contain ground truth for task t. We realize that the first estimate can be expected to have a typically smaller magnitude than the second one, since several of the terms being averaged will equal zero. This implies that we have somehow modified the original cost function, since the stochastic gradient estimates do not match. However this effect can be absorbed in the (empirically set) hyperparameters λ_t, so that the two estimates will have the same expected magnitude, and we can consider the two algorithms to be optimizing the same quantity.

Figure 3: Vanilla backpropagation for a single task; memory lookup operations are indicated by black arrows, storage operations are indicated by orange and blue arrows for the forward and backward pass respectively. During the forward pass each layer stores its activation signals in the bottom boxes. During the backward pass these activation signals are combined with the gradient signals (top boxes) that are computed recursively, starting from the loss layer.
(a) Low-memory forward pass
(b) Low-memory backpropagation (7-9)
(c) Low-memory backpropagation (4-6)
(d) Low-memory backpropagation (1-3)
Figure 8: Low-memory backpropagation for a single task (same color code as in Fig. 3). We first store a subset of activations in memory, that then serve as ‘anchor’ points for running backpropagation on smaller networks. This reduces the number of layer activations/gradients that are simultaneously stored in memory.
Figure 9: Vanilla backpropagation for multi-task training: a naive implementation has a memory complexity O(D + T·d), where D is the depth of the common CNN trunk, d is the depth of the task-specific branches and T is the number of tasks.
(a) Low-memory forward pass
(b) Low-memory backpropagation - task a
(c) Low-memory backpropagation - task b
(d) Low-memory backpropagation (4-6)
(e) Low-memory backpropagation (1-3)

4 Memory-Bound Multi-Task Training

We now turn to handling memory limitations, which turn out to be a major problem when training a network for many tasks. In order to handle these problems we build on recent advances in memory-efficient backpropagation for deep networks [34, 16] and adapt them to multi-task learning. (I thank George Papandreou for suggesting this direction.) We start by describing the basic idea behind the algorithm of [16], paving the way for the presentation of our extension to multi-task learning.

The baseline implementation of the back-propagation algorithm maintains all intermediate layer activations computed during the forward pass. As illustrated in Fig. 3, during the backward pass each layer then combines its stored activations with the back-propagated gradients coming from the layer(s) above, finds the gradients for its own parameters, and then back-propagates gradients to the layer(s) below. While this strategy achieves computational efficiency by reusing the computed activation signals, it is memory-demanding, since it requires storing all intermediate activations. In the popular Caffe library [44] memory is also allocated for all of the gradient signals, since a priori these could feed into multiple layers in a DAG network.

If we consider for simplicity that every layer requires a fixed amount of memory, b bytes, for its activation and gradient signals, and we have a network with a total of D layers, the memory complexity of a naive implementation would be O(bD) - which can become prohibitive for large values of D.

The memory-efficient alternative described in [16] is shown in Fig. 8. In a first step, shown in Fig. 8(a), we perform a forward pass through the network where we store the activations of only a subset of the layers - for a network of depth D, K = O(√D) activations are stored, lying D/K layers apart, while the other intermediate activations, shown in grey, are discarded as soon as they are used. Once this first stage is accomplished, we run backpropagation K times over sub-networks of length D/K, as shown in Fig. 8(b)-(d). The stored activations help us start the backpropagation at a deeper layer of the network, acting like anchor points for the computation: any subnetwork requires the activation of its lowest layer and the gradient signal at its highest layer. Through this scheme the total memory complexity can be reduced from O(D) to O(√D), since we retain K = O(√D) activation signals and at any step perform back-propagation over a subnetwork of length D/K = O(√D).
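A back-of-the-envelope model of the saving (a counting sketch under the simplifying assumption of unit-cost layers, not an implementation of [16]): storing K ≈ √D anchors and recomputing one segment at a time keeps roughly K + D/K activations live instead of D.

```python
import math

def peak_memory_naive(D):
    """All D layer activations are kept for the backward pass."""
    return D

def peak_memory_checkpointed(D):
    """Keep K ~ sqrt(D) anchor activations; during the backward pass,
    recompute and hold only one segment of ~D/K layers at a time."""
    K = int(math.isqrt(D))              # number of anchors, sqrt(D) apart
    seg = math.ceil(D / K)              # length of the segment being recomputed
    return K + seg                      # anchors + one live segment

D = 100
assert peak_memory_naive(D) == 100
assert peak_memory_checkpointed(D) == 20  # 10 anchors + a 10-layer segment
```

The K + D/K count is minimized around K = √D, which is where the O(√D) complexity quoted above comes from; real frameworks expose this trade-off as activation checkpointing.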

This algorithm was originally introduced for chain-structured graphs; having described it, the adaptation to our case is straightforward.

Considering that we have D layers for the shared CNN trunk, T tasks, and d layers per task, the memory complexity of the naive implementation would be O(D + T·d), as can also be seen from Fig. 9.

A naive application of the algorithm presented above would result in a reduction of the memory complexity down to O(√(D + T·d)). However, we realize that after the branching point of the different tasks (layer 6 in our figure), the computations are practically decoupled: each task-specific branch works effectively on its own, and then returns a gradient signal to layer 6. These gradient signals are accumulated over tasks, since our cost is additive over the task-specific losses.

Based on this observation, we realize that the memory complexity can be reduced so as to be independent of T: since each task can 'clean up' all of the memory allocated to it before the next branch is processed, the resulting memory footprint is determined by the trunk and a single task-specific branch, rather than growing with the number of tasks.

This has allowed us to load an increasing number of tasks on our network without encountering memory issues. Using a 12GB Nvidia card we have been able to use a three-layer pyramid, with the largest image size being 921x621, and using skip-layer connections for all network layers, pyramid levels, and tasks, for seven tasks. The largest dimension that would be possible without the memory-efficient option for our present number of tasks would be 321x321 - and that would only decrease as more tasks are used.

Apart from reducing memory demands, we can also reduce computation time by performing a lazy evaluation of the gradient signals accumulated at the branching point. In particular, if a training sample does not contain ground truth for certain tasks, these tasks will not contribute any gradient term to the common CNN trunk; as such, the computation over those task-specific branches can be avoided entirely. This results in a substantial acceleration of training (2- to 4-fold in our case), and would be essential to scale up training to even more tasks.
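The lazy-evaluation idea amounts to skipping a task branch whenever the sample carries no ground truth for it. A schematic sketch follows; the branch interface (`grad_fn` callables keyed by task name) is hypothetical and chosen for illustration:

```python
def trunk_gradient(sample, task_branches):
    """Accumulate gradients at the branching point, evaluating only the
    branches whose task has ground truth in this sample (lazy evaluation).
    task_branches: {task_name: grad_fn}, grad_fn(features, gt) -> gradient.
    sample: {'features': ..., 'gt': {task_name: target, ...}}."""
    g, evaluated = 0.0, []
    for name, grad_fn in task_branches.items():
        gt = sample['gt'].get(name)
        if gt is None:          # no annotation for this task: skip the branch
            continue
        g += grad_fn(sample['features'], gt)   # losses are additive over tasks
        evaluated.append(name)
    return g, evaluated
```

Since the total loss is a sum over task-specific terms, skipping an unannotated branch changes nothing in the computed trunk gradient; it only avoids wasted forward/backward work.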

5 Experiments

Our experimental evaluation has two objectives: The first one is to show that the generic UberNet architecture introduced in Sec. 2 successfully addresses a broad range of tasks. In order to examine this we compare primarily to results obtained by methods that rely on the VGG network [93] - more recent works e.g. on detection [22] and semantic segmentation [14] have shown improvements through the use of deeper ResNets [40], but we consider the choice of network to be in a sense orthogonal to the goal of this section.

The second objective is to explore how incorporating more tasks affects performance on the individual tasks. In order to remove spurious sources of variation we use a common initialization for all single- and multi-task networks, obtained by pretraining a network for joint semantic segmentation and object detection, as detailed in Sec. 5.1. Furthermore, the multi-task network is trained with a union of the datasets corresponding to the multiple tasks that we aim at solving. There we have used a particular proportion of images per dataset, so as to moderately favor the high-level tasks, as detailed in Sec. 5.1. Even though using a larger task-specific dataset might boost performance for a particular task, the single-task networks are only trained with the subset of the multi-task dataset that pertains to that task. This may sacrifice some performance with respect to competing methods, but it ensures that the loss term pertaining to a task is unaffected by single- versus multi-task training, and facilitates comparison.

5.1 Experimental settings

Optimization: For all of the single-task experiments we use SGD with a momentum of 0.9 and a minibatch size of 10, with the exception of detection, where we use a minibatch size of 2, following [32]. For the multi-task experiments we use our asynchronous SGD algorithm with effective minibatch sizes of 2 for detection-related parameters, 10 for the other task-specific parameters, and 30 for the shared CNN features, as justified in Sec. 3. With the exception of the initialization experiment described below, we always train for 5000 iterations, starting with a learning rate of 0.001 and decreasing it by a factor of 10 after 3000 iterations. Other optimization schemes will be explored in a future version of this work.
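The asynchronous update rule can be sketched as follows: each parameter group keeps its own gradient accumulator and is updated only once its own effective minibatch size has been reached. This is a toy scalar sketch, not the actual solver (the real parameters are tensors and the group names are illustrative):

```python
class AsyncSGD:
    """Toy asynchronous SGD with momentum: per-group gradient accumulation,
    where each group has its own effective minibatch size."""
    def __init__(self, groups, lr=0.001, momentum=0.9):
        # groups: {name: {'param': float, 'batch': int}}
        self.g = {n: dict(spec, accum=0.0, count=0, vel=0.0)
                  for n, spec in groups.items()}
        self.lr, self.mu = lr, momentum

    def accumulate(self, name, grad):
        s = self.g[name]
        s['accum'] += grad
        s['count'] += 1
        if s['count'] == s['batch']:        # effective minibatch reached
            s['vel'] = self.mu * s['vel'] - self.lr * s['accum'] / s['batch']
            s['param'] += s['vel']
            s['accum'], s['count'] = 0.0, 0
```

With this scheme a detection head (effective batch 2) updates far more often than the shared trunk (effective batch 30), while each update still averages over its own batch.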

Initialization of labelling and detection network: We use a common initialization for all experiments, which requires having at our disposal parameters for both the convolutional labelling tasks and the region-based detection task. We could use the ImageNet-pretrained VGG network for this, but exploiting pretraining on MS-COCO has been shown to yield boosts in performance, e.g. in [14, 32]. Leaving a joint pretraining on MS-COCO for a future version of this work, we take a shortcut and instead form a ‘frankenstein’ network where we stitch together two distinct variants of the VGG network, which have both been pretrained on MS-COCO. In particular we use the network of [14] for semantic segmentation (‘COCO-S’) and the network of [32] for detection (‘COCO-D’).

The two-task network has (i) a common convolutional trunk, up to the fifth convolutional layer, (ii) a detection branch, combining an RPN and an SPP-Pooling layer followed by two fully-connected layers (fc6, fc7), and (iii) a fully-convolutional branch, used for semantic segmentation. The fully-connected branches in (ii) and (iii) are initialized with the parameters of the respective pretrained networks, COCO-D and COCO-S, while for (i) we initialize the parameters of the common layers with the COCO-D parameters. We finetune this network for 10000 iterations on the VOC07++ set, which, following [32], stands for the union of the PASCAL VOC 2007 trainval and PASCAL VOC 2012 trainval sets; we start with a learning rate of 0.001 and decrease it by a factor of 10 after 6000 iterations.

Datasets: A summary of the datasets used in our experiments is provided in Table 4. The 5100 images in the BSD dataset correspond to dataset augmentation of the 300 trainval images of BSD with 16 additional rotations. All of these numbers are effectively doubled by flipping-based dataset augmentation, while the VOC-related datasets are used twice, which amounts to placing a higher emphasis on the high-level tasks.
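The per-dataset proportions can be realized simply by repeating dataset entries when assembling an epoch. An illustrative sketch, where the dataset names and the listing scheme are placeholders rather than the actual data-loading code:

```python
def build_epoch(datasets, repeat_twice=('VOC07', 'VOC12')):
    """Assemble the multi-task training list: VOC-related datasets are
    included twice, and every image is doubled by a horizontal flip.
    datasets: {name: [image_id, ...]} -> list of (image_id, flipped)."""
    samples = []
    for name, images in datasets.items():
        reps = 2 if name in repeat_twice else 1
        for _ in range(reps):
            for img in images:
                samples.append((img, False))   # original image
                samples.append((img, True))    # left-right flipped copy
    return samples
```

The resulting list is typically shuffled before training; repeating the VOC entries is what places the higher emphasis on the high-level tasks mentioned above.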

We note that the VOC’12 validation set is used for the evaluation of the human part segmentation, semantic boundary detection, and saliency estimation tasks. This means that in general we report numbers for two distinct networks: one where the VOC2012 validation set is included during training, based on which we report results on detection and semantic segmentation; and one where VOC2012 validation is excluded from training, which gives us results on human parts, semantic boundaries, and saliency.

Task              VOC'07     VOC'12   VOC'12   NYU     MSRA-10K   BSD
                  trainval   train    val
Total             5011       5717     5823     23024   10000      5100
Detection         5011       5717     5823     0       0          0
S. Segmentation   422        4998     5105     0       0          0
S. Boundaries     0          4998     5105     0       0          0
Human Parts       0          1716     1817     0       0          0
Normals           0          0        0        23024   0          0
Saliency          0          0        0        0       10000      0
Boundaries        0          4998     5105     0       0          5100
Table 4: Datasets and numbers of images containing ground truth for the different tasks considered in this work.



Figure 15: Qualitative results of our network (rows: surface normals, semantic boundaries, semantic segmentation, object detection, human parts). Please note the human pictures detected in the first two columns, as well as the range of scales successfully handled by our network.



Figure 16: Qualitative results, continued (rows: surface normals, semantic boundaries, semantic segmentation, object detection, human parts): please note that the leftmost image has practically no color information, which can justify the mistakes of the semantic segmentation and object detection tasks: the left cactus is incorrectly labelled as a chair, apparently mistaken for a thorny throne.

5.2 Experimental Evaluation

Object Detection: We start by verifying in the ‘Ours, 1-Task’ row of Table 5 that we can replicate the results of [32]; exceptionally for this experiment, rather than using the initialization described above, we start from the MS-COCO pretrained network of [32], finetune on the VOC 2007++ dataset, and test on the VOC 2007 test dataset. The only differences are that we use a minimal image side of 621 rather than 600 and a maximal side of 961 rather than 1000, so as to comply with the restriction on dimensions of [14], and use convolution with holes followed by appropriately modified RPN and ROI-pooling layers, obtaining effectively identical results to [32]. Adding holes to the RPN network did not seem to help (not reported).

In the following row we measure the performance of the network obtained by training for the joint segmentation and detection task, which as mentioned in Sec. 5.1 will serve as our starting point for all ensuing experiments. After finetuning on VOC2007++ we observe that we actually get a small boost in performance, which is quite promising, since it is likely to be telling us that the additional supervision signal for semantic segmentation helped the detection sub-network learn something better about detection.

However, when increasing the number of tasks, performance drops, though it remains comparable to the strong baseline of [32]. As we will see in Sec. 5.3 this is not necessarily obvious: a different choice of task weight parameters can adversely influence detection performance while favoring other tasks.

Method mAP
F-RCNN, [32] VOC 2007++ 73.2
F-RCNN, [32] MS-COCO + VOC 2007++ 78.8
Ours, 1-Task 78.7
Ours, 2-Task 80.1
Ours, 7-Task 77.8
Table 5: Mean Average Precision performance (%) on the PASCAL VOC 2007 test set.

Semantic Segmentation: The second task that we have tried is semantic segmentation. Even though a really broad range of techniques has been devised for the problem (see e.g. the review in [14] for a recent comparison), we only compare to the methods lying closest to our own, which in turn relies on the ‘Deeplab-Large Field of View (FOV)’ architecture of [13]. Recall that, as detailed in Sec. 2, we deviate from the Deeplab architecture by using linear operations on top of skip layers, by using a multi-scale architecture, and by not using any DenseCRF post-processing.

Method mean IoU
Deeplab -COCO + CRF [78] 70.4
Deeplab Multi-Scale [49] 72.1
Deeplab Multi-Scale -CRF [49] 74.8
Ours, 1-Task 72.4
Ours, 2-Task 72.3
Ours, 7-Task 68.7
Table 6: Semantic segmentation - mean Intersection Over Union (IOU) accuracy on PASCAL VOC 2012 test.

We first observe that thanks to the use of multi-scale processing we get a similar improvement over the single-scale architecture as the one we had obtained in [49]. Understandably this ranks below the latest state-of-the-art results, such as the ones obtained e.g. in [14] with ResNets and Atrous Spatial Pyramid Pooling; but these advances are complementary and easy to include in our network’s architecture.

Turning to the results of the two-task architecture, we observe that quite surprisingly we get effectively the same performance. This is not obvious at all, given that for this two-task network our starting point has been a VGG-type network that uses the detection network parameters up to the fifth convolutional layer, rather than the segmentation parameters. Apparently, after 10000 iterations of fine-tuning the shared representation was modified to be appropriate for the semantic segmentation task.

Turning to the multi-task network performance, we observe that performance drops as the number of tasks increases. Still, even without using CRF post-processing, we fare comparably to a strong baseline, such as [78].

Human Part Segmentation: This task can be understood as a special case of semantic segmentation, where we now aim at assigning human part labels. Recent work has shown that semantic part segmentation is one more task that can be solved by CNNs [98, 106, 15, 60, 14].

Method mean IoU
Deeplab LargeFOV [106] 51.78
Deeplab LargeFOV-CRF[106] 52.95
Multi-scale averaging [15] 54.91
Attention [15] 55.17
Auto Zoom [106] 57.54
Graph-LSTM [60] 60.16
Ours, 1-Task 51.98
Ours, 7-Task 48.82
Table 7: Part segmentation - mean Intersection-over-Union accuracy on the dataset of [17].

We use the dataset introduced in [17] and train a network that is architecturally identical to the one used for semantic segmentation, but is now finetuned for the task of segmenting human parts. As a general comment on this task we can observe that here structured prediction yields quite substantial improvements, apparently due to the highly confined structure of the output space in this labelling task. Since we do not use any such post-processing yet, it is only fair to compare to the first, CRF-free method, and leave an integration with structured prediction for future work.

As can be seen in Table 7, for the single-task case we perform comparably. However, for the multi-task case performance drops substantially (by more than 3 IoU points). This may be due to the scarcity of data that contain annotations for the task: when training the single-task network all of the data contain annotations for human parts, while when training the multi-task network we have human part annotations in only 3432 out of the 59552 images used to train the whole network (referring to Table 4, we remind the reader that the VOC-related datasets are used twice). One potential remedy is to increase the weight of the task’s loss, or the learning rates of the task-specific parameters, so that the parameter updates are more effective; another alternative is to give the multi-task network more training iterations, so that we pass more times over the part annotations. We are exploring these options.

Semantic Boundary Detection: We evaluate our method on the Semantic Boundary Detection task defined in [36], where the goal is to find where instances of the 20 PASCAL classes have discontinuities. This can be understood as a combination of semantic segmentation and boundary detection, but can be tackled head-on with fully convolutional networks. We train on the VOC2012 train set and evaluate on the VOC2012 val set.

Method mAP mMF
Semantic Contours [36] 20.7 28.0
Situational Boundary [100] 31.6 -
High-for-Low [7] 47.8 58.7
High-for-Low-CRF [7] 54.6 62.5
Ours, 1-Task 54.3 59.7
Ours, 7-Task 44.3 48.2
Table 8: Semantic Boundary Detection results: we report mean Average Precision (mAP, %) and mean Max F-measure (mMF) scores on the validation set of PASCAL VOC 2010, provided by [36].

We compare to the original method of [36], the situational boundary detector of [100], and the High-for-Low method of [7]. The authors of [7] go beyond the individual task of boundary detection and explore what gains can be obtained by providing as inputs to this task the results of a separate semantic segmentation system (‘High-for-Low-CRF’ row). Even though combining the outputs of different tasks is one of our immediate next goals, we do not consider it yet here. Still, we observe that even applying our architecture out-of-the-box we get reasonably close results, and substantially better than their standalone semantic boundary detection result. Performance deteriorates for the multi-task case, but remains quite close to the current ‘standalone’ state-of-the-art.

Boundary Detection: We train our network on the union of the (dataset-augmented) BSD trainval set and boundary images from the VOC context dataset [71], and evaluate it on the test set of the Berkeley Segmentation Dataset (BSD) of [69]. We compare our method to some of the best-established methods for boundary detection [1, 23], as well as more recent, deep learning-based ones [48, 28, 6, 41, 90, 107, 49].

Method ODS OIS AP
gPb-owt-ucm [1] 0.726 0.757 0.696
SE-Var [23] 0.746 0.767 0.803
DeepNets [48] 0.738 0.759 0.758
N4-Fields [28] 0.753 0.769 0.784
DeepEdge [6] 0.753 0.772 0.807
CSCNN [41] 0.756 0.775 0.798
DeepContour [90] 0.756 0.773 0.797
HED-fusion [107] 0.790 0.808 0.811
HED-late merging [107] 0.788 0.808 0.840
Multi-Scale[49] 0.809 0.827 0.861
Multi-Scale +sPb [49] 0.813 0.831 0.866
Ours, training setup of [49] 0.815 0.835 0.862
Ours, 1-Task 0.791 0.809 0.849
Ours, 7-Task 0.785 0.805 0.837
Table 9: Boundary Detection results: we report the maximal F-measure obtained at the Optimal Dataset Scale (ODS) and the Optimal Image Scale (OIS), as well as the Average Precision (AP), on the test set of the BSD dataset [69].

A first experiment has been to train our new network with the exact same experimental setup as the one we had used in [49] - including Graduated Deep Supervised Network training, a mix of 30600 images obtained by dataset augmentation from the BSD (300 images augmented by 3 scales, 2 horizontal flips, and 16 rotations) with 20206 images from VOC-context (10103 images with two horizontal flips). The differences are that we now use batch normalization, which allows us to increase the layer-specific learning rates up to 10, and also that we now use the ‘convolutionalized’ fully-connected layers of the VGG network. The improvement in performance is quite substantial: the maximal F-measure increases from 0.809 to 0.815, surpassing even the performance we would get in [49] by using spectral boundaries on top.

Still, these settings (graduated DSN, high learning rates) were not as successful on the remaining tasks, while using a mix of data where the images of BSD are three times more than the images of VOC would skew the performance substantially in favor of the low-level task of boundary detection - since the training objective is clearly affected by the number of images containing ground truth for one task versus the other.

We therefore remove the side losses for the skip layers, reduce the layer-specific learning rate to 1 (still substantially higher than the 0.001 layer-specific rate used in [107, 49]), and use the particular mix of data used to train the UberNet in the multi-task setup. This means that, after left-right flipping, we use 10200 boundary samples from BSD (i.e. no scale augmentation) and 20206 samples from VOC.

As shown in the ‘Ours, 1-Task’ row of Table 9, these changes can substantially affect performance - but we still remain competitive with previous state-of-the-art works, such as [107]. For the multi-task training case performance drops a bit more, but always stays at a reasonably good level when compared to strong baselines such as [107].

Saliency Estimation: We train on the MSRA-10K dataset of [101] and evaluate on the PASCAL-S dataset of [59], where a subset of the PASCAL VOC10 validation set is annotated. Additional datasets are typically used to benchmark this task, e.g. in [58]; we will explore the performance of our method on those datasets in the future.

We only use flipping for dataset augmentation during training. We compare to some classic methods [80, 18, 45] as well as more recent ones [83, 102, 112, 58] that typically rely on deep learning. We note that our method sets a new state-of-the-art for this dataset, and even for the multi-task training case, our method outperforms the previous state-of-the-art which was the CRF-based variant of [58].

Method MF
SF [80] 0.493
GC [18] 0.539
DRFI [45] 0.690
PISA [101] 0.660
BSCA [83] 0.666
LEGS [102] 0.752
MC [112] 0.740
MDF [57] 0.764
FCN [58] 0.793
DCL [58] 0.815
DCL + CRF [58] 0.822
Ours, 1-Task 0.835
Ours, 7-Task 0.823
Table 10: Saliency estimation results: we report the Maximal F-measure (MF) on the PASCAL Saliency dataset of [59].

Surface Normal Estimation: For this task surface normals are typically estimated from point cloud data, rather than directly measured. Both the training and the evaluation of a normal estimation algorithm may therefore be affected by this step. We train on the training set of [74], where normals are estimated by [54], and extend it with 20K images of normal ground truth estimated from the raw images in the training scenes of [74]; since the method of [54] is not publicly available, we use as a surrogate the method of [86]. Competing methods, e.g. [25, 2], use alternative normal estimation methods for the extended data, but we do not expect the resulting differences to be large.

Method Mean Median 11.25° 22.5° 30°
VGG-Cascade [25] 22.2 15.3 38.6 64.0 73.9
VGG-MLP [2] 19.8 12.0 47.9 70.0 77.8
VGG-Design [103] 26.9 14.8 42.0 61.2 68.2
VGG-fused [103] 27.9 16.6 37.4 59.2 67.1
Ours, 1-Task 21.4 15.6 35.3 65.9 76.9
Ours, 1-Task 23.2 17.0 32.5 62.0 73.5
Ours, 1-Task 23.3 17.6 31.1 60.8 72.7
Ours, 1-Task 23.9 18.1 29.8 59.7 71.9
Ours, 7-Task 26.7 22.0 24.2 52.0 65.9
Table 11: Normal estimation on NYU-v2 using the ground truth of [54]. We report the mean and median angular distance (in degrees) and the percentage of pixels within 11.25, 22.5, and 30 degrees of the ground truth.
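The metrics of Table 11 can be computed directly from the per-pixel angular errors. A minimal sketch (the function name and the flat error list are illustrative; in practice the errors come from the angle between predicted and ground-truth normals at each valid pixel):

```python
def normal_metrics(errors_deg):
    """Mean and median angular error (degrees), and the fraction of pixels
    within 11.25, 22.5, and 30 degrees of the ground-truth normal."""
    s = sorted(errors_deg)
    n = len(s)
    mean = sum(s) / n
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    within = {t: sum(e <= t for e in s) / n for t in (11.25, 22.5, 30.0)}
    return mean, median, within
```

Note that for the first two columns of the table lower is better, while for the three percentage columns higher is better.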

We report multiple results for the single-task training case, obtained by using different values for the weight of the normal loss term in Eq. 3. We observe that this choice can have a quite substantial impact on performance. When setting a large weight we can directly compete with the current state-of-the-art, while a low weight can reduce performance substantially. As we will see in the following subsection, however, it becomes necessary to set a reasonably low weight, or else this may have adverse effects on the performance of the remaining tasks. When using that lower weight, we witness a further drop in performance for the multi-task case.

Even though our multi-task network’s performance is not too different from the plain CNN-based result of [103], it is clear that here we have a somewhat unique gap in performance when compared to what we see in the remaining tasks.

Our conjecture is that this may be due to the geometric and continuous nature of this task, which is quite different from the remaining labelling tasks. It may be that both the intermediate and final features of the VGG network are not appropriate for this task ‘out-of-the-box’, and it takes substantially large-scale modifications to the inner workings of the network (corresponding to a large weight on the task-specific loss) until the nonlinearities within the VGG network can accommodate the task. It is however interesting that both competing methods (VGG-MLP [2] and VGG-Cascade [25]) address the task by using additional layers on top of the VGG network (a Multi-Layer Perceptron in [2], a coarse-to-fine cascade in [25]). Even though in this work we have limited ourselves to using linear functions on top of the skip layers for the sake of simplicity and efficiency, these successes suggest that adding nonlinearities instead could be a way of improving performance for this task, as well as potentially for other tasks.

Detection  Boundaries           Saliency  Parts  Surface Normals      S. Boundaries  S. Segmentation
mAP        ODS    OIS    AP     MF        IoU    11.25° 22.5°  30°    mAP    mMF     IoU
77.8       0.785  0.805  0.837  0.822     48.8   24.2   52.0   65.9   44.3   48.2    68.7
76.2       0.779  0.805  0.836  0.820     36.7   23.1   51.0   64.9   33.6   34.2    67.2
73.5       0.772  0.802  0.830  0.814     34.2   27.7   57.3   70.2   28.6   33.2    63.5
Table 12: Impact of the weight used for the normal estimation loss, when training for 7-tasks: Improving normal estimation comes at the cost of decreasing performance in the remaining tasks (higher is better for all tasks).

5.3 Effect of task weights

The performance of our network on the multitude of tasks it addresses depends on the weights assigned to the losses of the different tasks in Eq. 3. If the weight of one task is substantially larger, one can expect that this will skew the internal representation of the network in favor of the particular task, while neglecting others.
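This dependence can be made concrete: the total objective is a weighted sum of per-task losses, so scaling one weight scales that task's gradient contribution to the shared trunk by the same factor. A minimal sketch in the spirit of Eq. 3 (task names and weight values are illustrative):

```python
def multitask_objective(task_losses, weights):
    """Weighted sum of per-task losses: L = sum_t w_t * L_t.
    Raising w_t amplifies task t's gradients into the shared trunk."""
    return sum(weights.get(t, 1.0) * loss for t, loss in task_losses.items())
```

With a large weight on, say, the normal loss, the shared representation is pulled toward features that suit normals, which is exactly the communicating-vessels behaviour observed in Table 12.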

Motivated by the empirical results in the previous paragraphs, we have explored the impact of modifying the weight attributed to the normal estimation task in Eq. 3 when solving multiple, rather than individual, tasks. In Table 12 we report how performance changes as we increase the weight of the normal estimation task (the previous experiments relied on the smallest of these weights, corresponding to the first row).

We realize that, at least for our particular experimental settings, there is ‘no free lunch’, and the performance measures of the different tasks act like communicating vessels. The evaluation may arguably be affected by our optimization choices; using e.g. larger batch sizes, or more iterations and a polynomial schedule as in [14], could help. But the present results indicate that the common CNN trunk has an apparently bounded learning capacity, and suggest that inserting more parameters, potentially through additional nonlinear layers on top of the skip layers, may be needed to maintain high performance across all tasks. We will explore these directions in the future, as well as whether this effect persists when working with deeper networks such as ResNets.

6 Conclusions and Future Work

In this work we have introduced two techniques that allow us to train a CNN that tackles a broad set of computer vision problems in a unified architecture. We have shown that one can effectively scale up to many and diverse tasks, since the memory complexity is independent of the number of tasks, and incoherently annotated datasets can be combined during training.

There are certain straightforward directions for future work: (i) considering more tasks, such as symmetry, human landmarks, texture segmentation, or any other of the tasks indicated in the introduction; (ii) using deeper architectures, such as ResNets [40]; (iii) combining the dense labelling results with structured prediction [61, 14, 113, 11]. Research in these directions is underway, but, more importantly, we consider this work to be a first step in the direction of jointly tackling multiple tasks by exploiting the synergy between them. This has been a recurring theme in computer vision, e.g. for integrating segmentation and recognition [47, 72, 9, 99, 50, 53, 68], and we believe that to successfully address it, it is imperative to have a single network that can successfully handle all of the involved tasks. The code for this work will soon be made publicly available from http://cvn.ecp.fr/ubernet/.

7 Acknowledgements

This work has been supported by the FP7-RECONFIG, FP7-MOBOT, and H2020-ISUPPORT EU projects, and equipment donated by NVIDIA. I thank George Papandreou for pointing out how low-memory backpropagation can be implemented, Pierre-André Savalle for showing me how to handle prototxt files, Ross Girshick for making the Faster-RCNN system publicly available, and Nikos Paragios for creating the environment where this work took place.


  • [1] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. PAMI, 2011.
  • [2] A. Bansal, B. Russell, and A. Gupta. Marr revisited: 2d-3d alignment via surface normal prediction. In Proc. CVPR, 2016.
  • [3] V. Belagiannis and A. Zisserman. Recurrent human pose estimation. CoRR, abs/1605.02914, 2016.
  • [4] S. Bell, P. Upchurch, N. Snavely, and K. Bala. Material recognition in the wild with the materials in context database. In Proc. CVPR, 2015.
  • [5] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proc. CVPR, 2016.
  • [6] G. Bertasius, J. Shi, and L. Torresani. Deepedge: A multi-scale bifurcated deep network for top-down contour detection. In Proc. CVPR, 2015.
  • [7] G. Bertasius, J. Shi, and L. Torresani. High-for-low and low-for-high: Efficient boundary detection from deep object features and its applications to high-level vision. In Proc. ICCV, 2015.
  • [8] H. Bilen and A. Vedaldi. Integrated perception with recurrent multi-task neural networks. In Proc. NIPS, 2016.
  • [9] L. Bottou, Y. Bengio, and Y. LeCun. Global training of document processing systems using graph transformer networks. In Proc. CVPR, 1997.
  • [10] L. D. Bourdev, S. Maji, and J. Malik. Describing people: A poselet-based approach to attribute classification. In Proc. ICCV, 2011.
  • [11] S. Chandra and I. Kokkinos. Fast, exact and multi-scale inference for semantic image segmentation with deep gaussian crfs. In Proc. ECCV, 2016.
  • [12] L. Chen, J. T. Barron, G. Papandreou, K. Murphy, and A. L. Yuille. Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform. In Proc. CVPR, 2016.
  • [13] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In Proc. ICLR, 2015.
  • [14] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. CoRR, abs/1606.00915, 2016.
  • [15] L. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille. Attention to scale: Scale-aware semantic image segmentation. In Proc. CVPR, 2015.
  • [16] T. Chen, B. Xu, C. Zhang, and C. Guestrin. Training deep nets with sublinear memory cost. CoRR, abs/1604.06174, 2016.
  • [17] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In Proc. CVPR, 2014.
  • [18] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu. Global contrast-based salient region detection. PAMI, 2015.
  • [19] M. Cimpoi, S. Maji, I. Kokkinos, and A. Vedaldi. Deep filter banks for texture recognition, description, and segmentation. IJCV, 2016.
  • [20] J. Dai, K. He, and J. Sun. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In Proc. ICCV, 2015.
  • [21] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. In Proc. CVPR, 2016.
  • [22] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: object detection via region-based fully convolutional networks. In Proc. NIPS, 2016.
  • [23] P. Dollár and C. L. Zitnick. Fast edge detection using structured forests. PAMI, 2015.
  • [24] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. PAMI, 2016.
  • [25] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proc. ICCV, 2015.
  • [26] M. Everingham, S. M. A. Eslami, L. J. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. IJCV, 2015.
  • [27] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. PAMI, 2013.
  • [28] Y. Ganin and V. Lempitsky. N^4-Fields: Neural network nearest neighbor fields for image transforms. In Proc. ACCV, 2014.
  • [29] G. Ghiasi and C. C. Fowlkes. Laplacian reconstruction and refinement for semantic segmentation. In Proc. ECCV, 2016.
  • [30] S. Gidaris and N. Komodakis. Attend refine repeat: Active box proposal generation via in-out localization. CoRR, abs/1606.04446, 2016.
  • [31] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
  • [32] R. B. Girshick. Fast R-CNN. In Proc. ICCV, 2015.
  • [33] G. Gkioxari, R. B. Girshick, and J. Malik. Contextual action recognition with r*cnn. In Proc. ICCV, 2015.
  • [34] A. Gruslys, R. Munos, I. Danihelka, M. Lanctot, and A. Graves. Memory-efficient backpropagation through time. CoRR, abs/1606.03401, 2016.
  • [35] X. Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. Matchnet: Unifying feature and metric learning for patch-based matching. In Proc. CVPR, 2015.
  • [36] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In Proc. ICCV, 2011.
  • [37] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In Proc. ECCV, 2014.
  • [38] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Proc. CVPR, 2015.
  • [39] A. W. Harley, K. G. Derpanis, and I. Kokkinos. Learning dense convolutional embeddings for semantic segmentation. CoRR, abs/1511.04377, 2015.
  • [40] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. CVPR, 2016.
  • [41] J.-J. Hwang and T.-L. Liu. Pixel-wise deep learning for contour detection. In Proc. ICLR, 2015.
  • [42] E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, and B. Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. CoRR, abs/1605.03170, 2016.
  • [43] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, 2015.
  • [44] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proc. ACM Multimedia, 2014.
  • [45] P. Jiang, H. Ling, J. Yu, and J. Peng. Salient region detection by UFO: uniqueness, focusness and objectness. In Proc. ICCV, 2013.
  • [46] A. Kanazawa, A. Sharma, and D. W. Jacobs. Locally scale-invariant convolutional neural networks. CoRR, abs/1412.5104, 2014.
  • [47] J. D. Keeler, D. E. Rumelhart, and W. K. Leow. Integrated segmentation and recognition of hand-printed numerals. In Proc. NIPS, 1990.
  • [48] J. J. Kivinen, C. K. I. Williams, and N. Heess. Visual boundary prediction: A deep neural prediction network and quality dissection. In AISTATS, 2014.
  • [49] I. Kokkinos. Pushing the boundaries of boundary detection using deep learning. ICLR, 2016.
  • [50] I. Kokkinos and P. Maragos. An expectation maximization approach to the synergy between image segmentation and object categorization. In Proc. ICCV, 2005.
  • [51] P. Krähenbühl and V. Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Proc. NIPS, 2011.
  • [52] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. NIPS, 2012.
  • [53] M. P. Kumar, P. Torr, and A. Zisserman. OBJ CUT. In Proc. CVPR, 2005.
  • [54] L. Ladicky, B. Zeisl, and M. Pollefeys. Discriminatively trained dense surface normal estimation. In Proc. ECCV, 2014.
  • [55] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. CoRR, abs/1603.06668, 2016.
  • [56] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
  • [57] G. Li and Y. Yu. Visual saliency based on multiscale deep features. In Proc. CVPR, 2015.
  • [58] G. Li and Y. Yu. Deep contrast learning for salient object detection. In Proc. CVPR, 2016.
  • [59] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille. The secrets of salient object segmentation. In Proc. CVPR, 2014.
  • [60] X. Liang, X. Shen, J. Feng, L. Lin, and S. Yan. Semantic object parsing with graph LSTM. In Proc. CVPR, 2016.
  • [61] G. Lin, C. Shen, I. D. Reid, and A. van den Hengel. Efficient piecewise training of deep structured models for semantic segmentation. In Proc. CVPR, 2016.
  • [62] T. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. In Proc. ECCV, 2014.
  • [63] F. Liu, C. Shen, and G. Lin. Deep convolutional neural fields for depth estimation from a single image. In Proc. CVPR, 2015.
  • [64] F. Liu, C. Shen, G. Lin, and I. D. Reid. Learning depth from single monocular images using deep convolutional neural fields. CoRR, abs/1502.07411, 2015.
  • [65] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. E. Reed. SSD: single shot multibox detector. CoRR, abs/1512.02325, 2015.
  • [66] W. Liu, A. Rabinovich, and A. C. Berg. ParseNet: Looking wider to see better. CoRR, abs/1506.04579, 2015.
  • [67] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proc. CVPR, 2015.
  • [68] M. Maire, S. X. Yu, and P. Perona. Object detection and segmentation from joint embedding of parts and pixels. In Proc. ICCV, 2011.
  • [69] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. ICCV, 2001.
  • [70] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multi-task learning. In Proc. CVPR, 2016.
  • [71] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille. The role of context for object detection and semantic segmentation in the wild. In Proc. CVPR, 2014.
  • [72] D. Mumford. Neuronal Architectures for Pattern Theoretic Problems. In Large Scale Theories of the Cortex. MIT Press, 1994.
  • [73] T. Narihira, M. Maire, and S. X. Yu. Direct intrinsics: Learning albedo-shading decomposition by convolutional regression. In Proc. ICCV, 2015.
  • [74] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In Proc. ECCV, 2012.
  • [75] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. CoRR, abs/1603.06937, 2016.
  • [76] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In Proc. ICCV, 2015.
  • [77] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free? Weakly-supervised learning with convolutional neural networks. In Proc. CVPR, 2015.
  • [78] G. Papandreou, L. Chen, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. In Proc. ICCV, 2015.
  • [79] G. Papandreou, I. Kokkinos, and P. Savalle. Modeling local and global deformations in deep learning: Epitomic convolution, multiple instance learning, and sliding window detection. In Proc. CVPR, 2015.
  • [80] F. Perazzi, P. Krähenbühl, Y. Pritch, and A. Hornung. Saliency filters: Contrast based filtering for salient region detection. In Proc. CVPR, 2012.
  • [81] T. Pfister, J. Charles, and A. Zisserman. Flowing convnets for human pose estimation in videos. In Proc. ICCV, 2015.
  • [82] P. H. O. Pinheiro, R. Collobert, and P. Dollár. Learning to segment object candidates. In Proc. NIPS, 2015.
  • [83] Y. Qin, H. Lu, Y. Xu, and H. Wang. Saliency detection via cellular automata. In Proc. CVPR, 2015.
  • [84] R. Ranjan, V. M. Patel, and R. Chellappa. HyperFace: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. CoRR, abs/1603.01249, 2016.
  • [85] S. Ren, K. He, R. B. Girshick, and J. Sun. Faster R-CNN: towards real-time object detection with region proposal networks. In Proc. NIPS, 2015.
  • [86] X. Ren and L. Bo. Discriminatively trained sparse code gradients for contour detection. In Proc. NIPS, 2012.
  • [87] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Proc. MICCAI, 2015.
  • [88] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
  • [89] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In Proc. ICLR, 2014.
  • [90] W. Shen, X. Wang, Y. Wang, X. Bai, and Z. Zhang. DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection. In Proc. CVPR, 2015.
  • [91] W. Shen, K. Zhao, Y. Jiang, Y. Wang, Z. Zhang, and X. Bai. Object skeleton extraction in natural images by fusing scale-associated deep side outputs. In Proc. CVPR, 2016.
  • [92] E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. Proc. ICCV, 2015.
  • [93] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2015.
  • [94] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. CVPR, 2015.
  • [95] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013.
  • [96] A. Toshev and C. Szegedy. DeepPose: Human pose estimation via deep neural networks. In Proc. CVPR, 2014.
  • [97] S. Tsogkas and I. Kokkinos. Learning-based symmetry detection in natural images. In Proc. ECCV, 2012.
  • [98] S. Tsogkas, I. Kokkinos, G. Papandreou, and A. Vedaldi. Deep learning for semantic part segmentation with high-level guidance. CoRR, abs/1505.02438, 2015.
  • [99] Z. W. Tu, X. Chen, A. Yuille, and S. C. Zhu. Image parsing: Unifying segmentation, detection, and recognition. In Proc. ICCV, 2003.
  • [100] J. R. R. Uijlings and V. Ferrari. Situational object boundary detection. In Proc. CVPR, 2015.
  • [101] K. Wang, L. Lin, J. Lu, C. Li, and K. Shi. PISA: Pixelwise image saliency by aggregating complementary appearance contrast measures with edge-preserving coherence. IEEE Trans. Image Processing, 2015.
  • [102] L. Wang, H. Lu, X. Ruan, and M. Yang. Deep networks for saliency detection via local estimation and global search. In Proc. CVPR, 2015.
  • [103] X. Wang, D. F. Fouhey, and A. Gupta. Designing deep networks for surface normal estimation. In Proc. CVPR, 2015.
  • [104] S. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In Proc. CVPR, 2016.
  • [105] A. P. Witkin. Scale-space filtering. In Proc. IJCAI, 1983.
  • [106] F. Xia, P. Wang, L. Chen, and A. L. Yuille. Zoom better to see clearer: Human part segmentation with auto zoom net. In Proc. ECCV, 2016.
  • [107] S. Xie and Z. Tu. Holistically-nested edge detection. In Proc. ICCV, 2015.
  • [108] T. F. Yago Vicente, M. Hoai, and D. Samaras. Noisy label recovery for shadow detection in unfamiliar domains. In Proc. CVPR, 2016.
  • [109] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. LIFT: Learned invariant feature transform. In Proc. ECCV, 2016.
  • [110] S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. In Proc. CVPR, 2015.
  • [111] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In Proc. ECCV, 2014.
  • [112] R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multi-context deep learning. In Proc. CVPR, 2015.
  • [113] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random fields as recurrent neural networks. In Proc. ICCV, 2015.