Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data

01/09/2020 · by Xi Yan, et al. · University of Toronto

Transfer learning has proven to be a successful technique to train deep learning models in domains where little training data is available. The dominant approach is to pretrain a model on a large generic dataset such as ImageNet and finetune its weights on the target domain. However, in the new era of an ever-increasing number of massive datasets, selecting the relevant data for pretraining is a critical issue. We introduce Neural Data Server (NDS), a large-scale search engine for finding the most useful transfer learning data for the target domain. Our NDS consists of a dataserver which indexes several large popular image datasets, and aims to recommend data to a client, an end-user with a target application with its own small labeled dataset. As in any search engine that serves information to possibly numerous users, we want the online computation performed by the dataserver to be minimal. The dataserver represents large datasets with a much more compact mixture-of-experts model, and employs it to perform data search in a series of dataserver-client transactions at a low computational cost. We show the effectiveness of NDS in various transfer learning scenarios, demonstrating state-of-the-art performance on several target datasets and tasks such as image classification, object detection and instance segmentation. Our Neural Data Server is available as a web-service, recommending data to users with the aim of improving the performance of their A.I. applications.







1 Introduction

In recent years, we have seen an explosive growth of the number and the variety of computer vision applications. These range from generic image classification tasks to surveillance, sports analytics, clothing recommendation, early disease detection, and to mapping, among others. Yet, we are only at the beginning of our exploration of what is possible to achieve with Deep Learning.

Figure 1: Neural Data Server: Search engine for finding relevant transfer learning data for the user’s target domain. In NDS, a dataserver indexes several popular image datasets, represents them with a mixture-of-experts model, and uses client’s target data to determine most relevant samples. Note that NDS indexes available public datasets and does not host them. Data recommendation is done by providing links to relevant examples.

One of the critical components of the new age of computer vision applications is the need for labeled data. To achieve high performance, a massive amount of data typically needs to be used to train deep learning models. Transfer learning provides a promising approach to reduce the need for large-scale labeled data for each target application. In transfer learning, a neural network is pretrained [10, 24, 43] on existing large generic datasets and then fine-tuned in the target domain. While transfer learning is a well-studied concept that has proven successful in many applications [10, 24, 43], deciding which data to use for pretraining the model is an open research question that has received surprisingly little attention in the literature. We argue that this is a crucial problem to answer in light of the ever-increasing scale of the available datasets.

To emphasize our point, a website curating computer vision benchmarks currently lists 367 public datasets, ranging from generic imagery, faces, and fashion photos to self-driving data. Furthermore, dataset sizes are increasing significantly: the recently released OpenImages [31] contains 9M labeled images (600GB in size) and is 20 times larger than its predecessor MS-COCO [33] (330K images, 30GB). The video benchmark YouTube8m [1] (1.9B frames, 1.5TB) is 800× larger than Davis [8] (10k frames, 1.8GB), while the recently released autonomous driving dataset nuScenes [9] contains 100× the number of frames of KITTI [19], which was released in 2012.

Figure 2: Examples of images from the dataserver (COCO+OpenImages) recommended to each client dataset by our Neural Data Server.

It is evident that downloading and storing these datasets locally is already cumbersome and expensive. This is further amplified by the computational resources required for training neural networks on this massive amount of data. The latter is an even more pronounced issue in research, where the network architectures are continuously being developed and possibly many need to be tested. Furthermore, for commercial applications, data licensing may be another financial issue to consider. Recent works [23, 37] have also shown that there is no simple "the more the better" relationship between the amount of pretraining data and the downstream task performance. Instead, they showed that selecting an appropriate subset of the data is important for achieving good performance on the target dataset.

In this paper, we introduce Neural Data Server (NDS), a large-scale search engine for finding the most useful transfer learning data to the target domain. One can imagine NDS as a web-service where a centralized server, referred to as the dataserver, recommends data to clients (Fig 1). A client is an end-user with an A.I. application in mind, and has a small set of labeled target data. We assume that each client is only interested in downloading a subset of the server-indexed data that is most relevant to the client’s target domain, limited to the user-specified budget (maximum desired size). We further require the transaction between the dataserver and the client to be both computationally efficient and privacy-preserving. This means the client’s data should not be visible to the server. We also aim to minimize the amount of dataserver’s online computation per client, as this may possibly serve many clients in parallel.

We index several popular image datasets and represent them using a mixture-of-experts (MoE) model, which we store on the dataserver. MoE is significantly smaller in size than the data itself, and is used to probe the usefulness of data in the client’s target domain. In particular, we determine the accuracy of each expert on the target dataset, and recommend data to the client based on these accuracies.

We experimentally show significant performance improvements on several downstream tasks and domains compared to baselines. Furthermore, we show that with only 20% of pretraining data, our method achieves comparable or better performance than pretraining on the entire dataserver-indexed datasets. We obtain significant improvements over ImageNet pretraining by downloading only 26 GB of the server's data in cases where training on the entire dataserver (538 GB) would take weeks. Our Neural Data Server will be made available as a web-service with the aim of improving performance and reducing the development cost of end-users' A.I. applications.

Figure 3: Overview of Neural Data Server. NDS consists of a dataserver that represents indexed datasets using a mixture-of-experts model. Experts are sent to client in order to compute accuracies in the client’s target domain. These accuracies are then used by dataserver to recommend relevant data samples.
Algorithm 1 Dataserver's Modules
1: Require: representation learning alg. ξ, number of experts K
2: {S_k} ← HardGating(S, K)    ▷ Sec 3.2: partition S into K local subsets to obtain gating
3: procedure MoE({S_k}):
4:     For k = 1, …, K:
5:         Run ξ on S_k to obtain expert e_k
6:     return {e_k}
7: procedure OutputData(S, {z_k}, budget):
8:     Compute weights w from {z_k}    ▷ Sec 3.3.2
9:     Sample S* from S at a rate according to w
10:    return S*

Algorithm 2 Overview of our Neural Data Server
1: Input: S (source), T (target), budget (desired amount of data)
2: {e_k} ← MoE({S_k})
3: {z_k} ← FastAdapt(T, {e_k})
4: S* ← OutputData(S, {z_k}, budget)
5: Output: S* to download

Algorithm 3 Client's Module
1: procedure FastAdapt(T, {e_k}):
2:     For k = 1, …, K:
3:         z_k ← Performance/FineTune(e_k, T)    ▷ Sec 3.3.1
4:     return {z_k}

2 Related Work

Transfer Learning. The success of deep learning and the difficulty of collecting large scale datasets has recently brought significant attention to the long existing history of transfer learning, cross-domain annotation and domain adaptation  [39, 14, 3, 44, 2, 46]. Specifically in the context of neural networks, fine-tuning a pretrained model on a new dataset is the most common strategy for knowledge transfer.

Most literature in this domain analyzes the effect of pretraining on large-scale datasets [44, 34, 16] with respect to network architectures, network layers, and training tasks [49, 50]. Works most related to ours are [15, 37] which show that pretraining on only relevant examples is important to achieve good performance on fine-grained classification tasks. Specifically, in [15] the authors use a predefined similarity metric between the source and target categories in order to greedily select the most similar categories from the source dataset to be used for pretraining. [37], on the other hand, exploits a model pretrained on the source domain to obtain pseudolabels of the target images, and uses these to re-weight the source examples.

Unlike ours, [15, 37] are limited to classification tasks, and do not easily scale to a constantly growing datacenter (the model needs to be retrained each time a new dataset is added). Thus, their approach does not naturally handle our scenario in which indexed datasets have diverse sets of tasks and labels, and where the number of indexed datasets may grow over time.

Federated Learning.  [35, 7] introduce a distributed ML approach with the goal of training a centralized model on decentralized data over a large number of client devices (e.g., mobile phones). Our work shares a similar idea of restricting the visibility of data in a client-server model. However, in our case the representation of data is centralized (dataserver) and the clients exploit the transfer learning scenario for their own (decentralized) models.

Active and Curriculum Learning.

In active learning [42], one searches over unlabeled data to find optimal samples to be labeled by an oracle, while in curriculum learning [6] subsets of data of increasing difficulty are sought during training. In both scenarios, data search is performed at each iteration of training a particular model. Search is typically done by running inference on the data samples with the current snapshot of the model and selecting the examples based on uncertainty-based metrics. Our scenario differs in that we do not have the luxury of running inference with the client's model on the massive amount of indexed data, as this would induce a prohibitive computational overhead on the dataserver per client. Moreover, we do not assume the dataserver has access to the client's model: this would require the clients to share their inference code, which many users may not be willing to do.

Learning to Generate Synthetic Images. Related to NDS are also [41, 28, 47, 36]. These approaches aim to bridge the synthetic vs real imagery gap by optimizing/searching over the set of parameters of a surrogate function that interfaces with a synthesizer.

In NDS, the search has to be done over massive (non-parametric) datasets and further, the target data cannot be sent to the server side. Our method is also significantly more computationally efficient.

3 Neural Data Server

Neural Data Server (NDS) is a search engine that aims to recommend transfer learning data. NDS consists of a dataserver which has access to massive source dataset(s), and aims to suggest the most relevant data samples to a client. A client is an end-user who wants a budget-constrained amount of data to improve the performance of her/his model in the target domain in a transfer learning scenario. We note that the dataserver does not host the data, and thus its recommendations are provided as a list of URLs to data samples hosted by the original datasets' providers.

The dataserver's indexed datasets may or may not be completely labeled, and the types of labels (e.g., segmentation masks, detection boxes) across data samples may vary. The client's target dataset is considered to only have a small set of labeled examples, where further the type of labels may or may not be the same as the labels in the dataserver's dataset(s). The main challenge lies in requiring the dataserver-client transactions to have low computational overhead. As in any search engine that serves information to possibly numerous users, we want the online computation performed by the dataserver to be minimal. Thus we defer most of the computation to be performed on the client's side, while still aiming for this process to be fast. Furthermore, the transactions should ideally be privacy-preserving for the client, i.e., neither the client's data nor its model architecture is accessible, since the client may have sensitive information such as hospital records or secret tech.

In NDS, we represent the dataserver's data using a mixture-of-experts (MoE) model trained on a self-supervised task. MoE naturally partitions the indexed datasets into different subsets and produces classifiers whose weights encode the representation of each of these subsets. The experts are trained offline and hosted on the dataserver for online transactions with the clients. In particular, the experts are sent to each client and used as a proxy to determine the importance of the dataserver's data samples for the client's target domain.

To compute importance, the experts are fast-adapted on the client’s dataset, and their accuracy is computed on a simple self-supervised task. We experimentally validate that the accuracy of each adapted expert indicates the usefulness of the data partition used to train the expert. The dataserver then uses these accuracies to construct the final list of data samples that are relevant for the client. Figure 3 provides an illustration while Algorithm 2 summarizes our NDS.
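The whole transaction summarized in Algorithm 2 can be sketched in a few lines. The following is a minimal, hypothetical simulation (names like `nds_round` and `client_eval` are ours, not the paper's): the client scores each expert, and the server samples data with probability proportional to a softmax over those scores:

```python
import math
import random

def nds_round(experts, subsets, client_eval, budget, tau=0.1, seed=0):
    """One dataserver-client round (sketch). `client_eval` stands in for the
    client-side FastAdapt step: it maps an expert to its proxy-task accuracy."""
    z = [client_eval(e) for e in experts]        # computed on the client
    exps = [math.exp(v / tau) for v in z]        # server: softmax over accuracies
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    picks = []
    for _ in range(budget):                      # pick a subset at rate probs,
        k = rng.choices(range(len(subsets)), weights=probs)[0]
        picks.append(rng.choice(subsets[k]))     # then uniformly within the subset
    return picks
```

With toy experts where expert 0 is far more accurate on the client's data, nearly all recommended samples come from its subset.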

In Section 3.1 we formalize our problem. In Section 3.2 we describe how we train our mixture-of-experts model and analyze the different choices of representation learning algorithms for the experts (dataserver side). In Section 3.3.1 we propose how to exploit the experts’ performance in the client’s target domain for data selection.

3.1 Problem Definition

Let $\mathcal{X}$ denote the input space (images in this paper), and $\mathcal{Y}_a$ a set of labels for a given task $a$. Generally, we will assume that multiple tasks are available, each associated with a different set of labels, and denote these by $\{\mathcal{Y}_a\}_{a=1}^{A}$. Consider also two different distributions over $\mathcal{X}$, called the source domain $\mathcal{D}_s$ and target domain $\mathcal{D}_t$. Let $\mathcal{S}$ (dataserver) and $\mathcal{T}$ (client) be two sample sets drawn i.i.d. from $\mathcal{D}_s$ and $\mathcal{D}_t$, respectively. We assume that $|\mathcal{S}| \gg |\mathcal{T}|$.

Our problem then relies on finding the subset $\mathcal{S}^* \in \mathcal{P}(\mathcal{S})$, where $\mathcal{P}(\mathcal{S})$ is the power set of $\mathcal{S}$, such that $\mathcal{S}^* \cup \mathcal{T}$ minimizes the risk of a model $h$ on the target domain:

$$\mathcal{S}^* = \operatorname*{argmin}_{\hat{\mathcal{S}} \in \mathcal{P}(\mathcal{S})} \; \mathbb{E}_{(x,y)\sim\mathcal{D}_t} \left[ \mathcal{L}\big(h_{\hat{\mathcal{S}} \cup \mathcal{T}}(x),\, y\big) \right] \tag{1}$$

Here, $h_{\hat{\mathcal{S}} \cup \mathcal{T}}$ indicates that $h$ is trained on the union of data $\hat{\mathcal{S}}$ and $\mathcal{T}$. Intuitively, we are trying to find the subset of data from $\mathcal{S}$ that helps to improve the performance of the model on the target dataset. However, what makes our problem particularly challenging and unique is that we are restricting the visibility of the data between the dataserver and the client.

This means that fetching the whole sample set $\mathcal{S}$ is prohibitive for the client, as is uploading its own dataset to the server. We tackle this problem by representing the dataserver's indexed dataset(s) with a set of classifiers that are agnostic of the client (Section 3.2), and use these to optimize equation 1 on the client's side (Section 3.3.1).

3.2 Dataserver

We now discuss our representation of the dataserver’s indexed datasets. This representation is pre-computed offline and stored on the dataserver.

3.2.1 Dataset Representation with Mixture-of-Experts

We represent the dataserver's data using the mixture-of-experts model [27]. In MoE, one makes a prediction as:

$$p(y \mid x) = \sum_{k=1}^{K} g_k(x)\, e_k(y \mid x) \tag{2}$$

Here, $g$ denotes a gating function ($\sum_{k=1}^{K} g_k(x) = 1$), $e_k$ denotes the $k$-th expert model with learnable weights $\theta_k$, $x$ an input image, and $K$ corresponds to the number of experts. One can think of the gating function as softly assigning data points to each of the experts, which try to make the best guess on their assigned data points.

The MoE model is trained by using maximum-likelihood estimation (MLE) on an objective:

$$\theta^* = \operatorname*{argmax}_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{S}} \left[ \log \sum_{k=1}^{K} g_k(x)\, e_k(y \mid x;\, \theta_k) \right] \tag{3}$$
We discuss the choices for the objective in Sec 3.2.2, dealing with the fact that the labels across the source datasets may be defined for different tasks.
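The soft-gated prediction just described can be illustrated with gates and experts as plain callables; a toy sketch, not the paper's implementation:

```python
def moe_predict(gate, experts, x):
    """Soft mixture-of-experts prediction: sum over k of g_k(x) * e_k(x).
    `gate(x)` must return K non-negative weights summing to 1."""
    weights = gate(x)
    assert abs(sum(weights) - 1.0) < 1e-9, "gating must sum to 1"
    return sum(g * e(x) for g, e in zip(weights, experts))
```

For example, a gate assigning 0.3 and 0.7 to two constant experts returning 1.0 and 2.0 yields 0.3 * 1.0 + 0.7 * 2.0.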

While the MoE objective allows end-to-end training, the computational cost of doing so on a massive dataset is extremely high, particularly when $K$ is considerably large (we need to backpropagate gradients to every expert on every training example). A straightforward way to alleviate this issue is to associate each expert with a local cluster defined by a hard gating, as in [26, 21]. In practice, we define a gating function $g$ that partitions the dataset into mutually exclusive subsets $\{\mathcal{S}_k\}$, and train one expert per subset. This makes training easy to parallelize, as each expert is trained independently on its subset of data. Furthermore, this allows new datasets to be easily added to the dataserver by training additional experts on them and adding these to the dataserver. This avoids re-training the MoE over the full indexed set of datasets.

In our work, we use two simple partitioning schemes to determine the gating: (1) superclass partition, and (2) unsupervised partition. For superclass partition (1), we represent each class $c$ in the source dataset by the mean $f_c$ of the image features for category $c$, and perform $k$-means clustering over $\{f_c\}$. This gives a partitioning where each cluster is a superclass containing a subset of similar categories. This partitioning scheme only applies to datasets with class supervision. For unsupervised partitioning (2), we partition the source dataset using $k$-means clustering on the image features. In both cases, the image features are obtained from a pretrained neural network (i.e., features extracted from the penultimate layer of a network pre-trained on ImageNet).
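The superclass partition can be sketched end-to-end in pure Python (the k-means implementation and helper names are ours, and real features would come from an ImageNet-pretrained backbone rather than being hand-supplied):

```python
import random
from collections import defaultdict

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on a list of equal-length tuples; returns centers."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = defaultdict(list)
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j in range(k):
            if clusters[j]:  # keep old center if a cluster empties out
                centers[j] = tuple(sum(p[d] for p in clusters[j]) / len(clusters[j])
                                   for d in range(dim))
    return centers

def superclass_partition(features_by_class, k):
    """Gating by superclass: cluster the per-class mean features; each cluster
    of classes becomes one expert's training subset."""
    class_means = {
        c: tuple(sum(f[d] for f in fs) / len(fs) for d in range(len(fs[0])))
        for c, fs in features_by_class.items()
    }
    names = sorted(class_means)
    centers = kmeans([class_means[c] for c in names], k)
    partition = defaultdict(list)
    for c in names:
        p = class_means[c]
        j = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        partition[j].append(c)
    return dict(partition)
```

On well-separated feature groups this recovers the intuitive superclasses (e.g., animals vs. vehicles), each of which would then be handed to its own expert.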

3.2.2 Training the Experts

We discuss two different scenarios to train the experts. In the simplified scenario, the tasks defined for both the dataserver's and client's datasets are the same, e.g., classification. In this case, we simply train a classifier for the task on each subset $\mathcal{S}_k$ of the data. We next discuss a more challenging case where the tasks across datasets differ.

Ideally, we would like to learn a representation that can generalize to a variety of downstream tasks and can therefore be used in a task agnostic fashion. To this end, we use a self-supervised method to train the MoE. In self-supervision, one leverages a simple surrogate task that can be used to learn a meaningful representation.

Furthermore, this does not require any labels to train the experts, which means that the dataserver's dataset may or may not be labeled beforehand. This is useful if the client desires to obtain raw data and label the relevant subset on its own. To be specific, we select classifying image rotation as the task for self-supervision, as in [20], which showed this to be a simple yet powerful proxy for representation learning. Formally, given an image $x$, we define its corresponding self-supervised labels by performing a set of geometric transformations $\{r(x, j)\}_{j=1}^{4}$ on $x$, where $r$ is an image rotation operator and $j$ defines a particular rotation by one of the four predefined angles, $\{0°, 90°, 180°, 270°\}$. We then minimize the following learning objective for the experts:

$$\min_{\theta_k} \; \frac{1}{4\,|\mathcal{S}_k|} \sum_{x \in \mathcal{S}_k} \sum_{j=1}^{4} -\log e_k^{\,j}\big(r(x, j);\, \theta_k\big) \tag{4}$$

Here, index $j$ in $e_k^{\,j}$ denotes the output value for class $j$.
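The four rotated copies and their self-supervised labels can be generated with a few lines; a sketch on images stored as nested lists (a real pipeline would rotate tensors, but the labeling scheme is the same):

```python
def rotate90(img, times):
    """Rotate a 2-D grid (list of rows) clockwise by 90 degrees, `times` times."""
    for _ in range(times % 4):
        img = [list(row) for row in zip(*img[::-1])]
    return img

def rotation_examples(img):
    """r(x, j) for j = 0..3: the four rotated copies of `img` paired with
    their self-supervised labels (0 -> 0°, 1 -> 90°, 2 -> 180°, 3 -> 270°)."""
    return [(rotate90(img, j), j) for j in range(4)]
```

Each source image thus yields four (input, label) training pairs for its expert, with no human annotation involved.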

3.3 Dataserver-Client Transactions

In this section, we describe the transactions between the dataserver and the client that determine the relevant subset of the server's data. The client first downloads the experts in order to measure their performance on the client's dataset. If the tasks are similar, we perform a quick adaptation of the experts on the client's side. Otherwise, we evaluate the performance of the experts on the client's data using the surrogate task (i.e., image rotation) (Section 3.3.1). The performance of each expert is sent back to the dataserver, which uses this information as a proxy to determine which data points are relevant to the client (Section 3.3.2). We describe these steps in more detail in the following subsections.

3.3.1 FastAdapt to a Target Dataset (on Client)

Single Task on Server and Client:

We first discuss the case where the dataset task is the same for both the client and the dataserver, e.g., classification. While the task may be the same, the label set may not be (classes may differ across domains). An intuitive way to adapt the experts is to remove their classification heads that were trained on the server, and learn a small decoder network on top of the experts' penultimate representations on the client's dataset, as in [50]. For classification tasks, we learn a simple linear layer on top of each pre-trained expert's representation for a few epochs. We then evaluate the target task's performance on a held-out validation set using the adapted experts. We denote the accuracy for each adapted expert $e_k$ as $z_k$.

Diverse Tasks on Server and Client:

To generalize to unseen tasks and be further able to handle cases where the labels are not available on the client's side, we propose to evaluate the performance of the common self-supervised task used to train the experts on the dataserver's data. Intuitively, if the expert performs well on the self-supervised task on the target dataset, then the data it was trained on is likely relevant for the client. Specifically, we use the self-supervised experts trained to learn image rotation, and evaluate the proxy task performance (accuracy) of predicting image rotation angles on the target images:

$$z_k = \frac{1}{4\,|\mathcal{T}|} \sum_{x \in \mathcal{T}} \sum_{j=1}^{4} \mathbb{1}\!\left[ \operatorname*{argmax}_{j'} \, e_k^{\,j'}\big(r(x, j)\big) = j \right] \tag{5}$$

Here, index $j'$ in $e_k^{\,j'}$ denotes the output value for class $j'$.

Note that in this case we do not adapt the experts on the target dataset (we only perform inference).
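Evaluating this rotation-prediction accuracy is then a single inference pass over the client's images; a self-contained sketch (the `expert` here is a stand-in callable mapping a rotated image to a predicted rotation class in {0, 1, 2, 3}):

```python
def _rot(img, times):
    """Clockwise 90-degree rotation of a nested-list image, repeated `times` times."""
    for _ in range(times % 4):
        img = [list(row) for row in zip(*img[::-1])]
    return img

def proxy_accuracy(expert, images):
    """z_k for one expert: the fraction of the 4*|T| rotation predictions it
    gets right on the client's images. Inference only: no labels, no adaptation."""
    correct, total = 0, 0
    for x in images:
        for j in range(4):
            correct += int(expert(_rot(x, j)) == j)
            total += 1
    return correct / total
```

As a sanity check, an expert that always answers 0° scores exactly chance level, 0.25.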

3.3.2 Data Selection (on Dataserver)

We now aim to assign a weighting to each of the data points in the source domain to reflect how well the source data contributed to the transfer learning performance. The accuracies $\{z_k\}$ from the client's FastAdapt step are normalized to $[0, 1]$ and fed into a softmax function with temperature $\tau$. These are then used as importance weights for estimating how relevant the representation learned by a particular expert is for the target task's performance. We leverage this information to weigh the individual data points. More specifically, each source data point $x_i \in \mathcal{S}_k$ is assigned a probabilistic weighting:

$$w_i = \frac{\exp(z_k / \tau)}{\sum_{k'=1}^{K} \exp(z_{k'} / \tau)} \cdot \frac{1}{|\mathcal{S}_k|} \tag{6}$$

Here, $|\mathcal{S}_k|$ represents the size of the subset that expert $e_k$ was trained on. Intuitively, we are weighting the set of images associated with the $k$-th expert and uniformly sampling from it. We construct our dataset $\mathcal{S}^*$ by sampling examples from $\mathcal{S}$ at a rate according to $w$.
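The softmax-with-temperature weighting described above can be written compactly (a sketch; `data_weights` is our name, not the paper's):

```python
import math

def data_weights(z, subset_sizes, tau=0.1):
    """Per-example sampling weight for each expert's subset: softmax(z_k / tau)
    divided by the subset size, so every image in a subset shares its expert's
    probability mass uniformly. Returns one weight per subset."""
    exps = [math.exp(zk / tau) for zk in z]
    total = sum(exps)
    return [exps[k] / total / subset_sizes[k] for k in range(len(z))]
```

By construction, the weights multiplied by the subset sizes sum to 1 over the whole server dataset, and a lower temperature concentrates the budget on the best-matching experts.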

3.4 Relation to Domain Adaptation

If we assume that the client and server tasks are the same, then our problem can be interpreted as domain adaptation in each of the subsets $\mathcal{S}_k$, and the following generalization bound from [4] can be used:

$$\epsilon_t(h) \le \epsilon_s(h) + \frac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) + \lambda$$

where $\epsilon$ represents the risk of a hypothesis function $h \in \mathcal{H}$ and $d_{\mathcal{H}\Delta\mathcal{H}}$ is the $\mathcal{H}\Delta\mathcal{H}$-divergence [4], which relies on the capacity of $\mathcal{H}$ to distinguish between data points from $\mathcal{S}$ and $\mathcal{T}$, respectively.

Let us further assume that the risk of the hypothesis function on any subset is similar, such that $\epsilon_{\mathcal{S}_k}(h) \approx \epsilon_{\mathcal{S}_{k'}}(h)$ for all $k, k'$. Under this assumption, minimizing equation 1 is equivalent to finding the subset $\mathcal{S}_k$ that minimizes the divergence with respect to $\mathcal{T}$. Formally,

$$\mathcal{S}^* = \operatorname*{argmin}_{\mathcal{S}_k} \; d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}_k, \mathcal{T}) \tag{7}$$

In practice, $d_{\mathcal{H}\Delta\mathcal{H}}$ is hard to compute and is often approximated by a proxy $\mathcal{A}$-distance [5, 11, 18]: a classifier that discriminates between the two domains, and whose risk $\epsilon$ is used to approximate the divergence in equation 7:

$$\hat{d}_{\mathcal{A}} = 2\,(1 - 2\epsilon)$$

Note that doing so would require having access to both $\mathcal{S}_k$ and $\mathcal{T}$ on at least one of the two sides (i.e., to train the new discriminative classifier), and this is prohibitive in our scenario. In our case, we compute the domain confusion between $\mathcal{S}_k$ and $\mathcal{T}$ by evaluating the performance of expert $e_k$ on the target domain. We argue that this proxy task performance (or error rate) is an appropriate proxy distance that serves the same purpose but does not violate the data visibility condition. Intuitively, if the features learned on the subset $\mathcal{S}_k$ cannot be discriminated from features on the target domain, the domain confusion is maximized. We empirically show the correlation between the domain classifier and our proposed proxy task performance in our experiments.
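For reference, the proxy A-distance mentioned above is a standard quantity computed from a domain classifier's error rate; a one-line sketch (`eps` is the held-out error of a classifier trained to tell the two domains apart):

```python
def proxy_a_distance(eps):
    """Proxy A-distance: d_A = 2 * (1 - 2 * eps). A chance-level domain
    classifier (eps = 0.5) gives 0, meaning the domains are indistinguishable;
    a perfect one (eps = 0) gives the maximum value, 2."""
    return 2.0 * (1.0 - 2.0 * eps)
```

NDS substitutes the expert's proxy-task performance for `eps`, avoiding the need to pool source and target data on one side.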

4 Experiments

Sampled Data (File Size / # Images) | Method | PASCAL-VOC2007 | miniModaNet | Cityscapes
— | ImageNet Initialization | 44.30 / 73.66 / 46.44 | 33.40 / 57.98 / 35.00 | 34.94 / 59.86 / 35.69
26GB of 538GB / 90K (5%) | Uniform Sampling | 47.61 / 76.88 / 51.95 | 35.64 / 58.40 / 39.09 | 36.49 / 61.88 / 36.36
26GB of 538GB / 90K (5%) | NDS | 48.36 / 76.91 / 52.53 | 38.84 / 61.23 / 43.86 | 38.46 / 63.79 / 39.59
54GB of 538GB / 180K (10%) | Uniform Sampling | 48.05 / 77.17 / 52.04 | 35.78 / 58.50 / 39.71 | 36.41 / 61.22 / 37.17
54GB of 538GB / 180K (10%) | NDS | 50.28 / 78.61 / 55.47 | 38.97 / 61.32 / 42.93 | 40.07 / 65.85 / 41.14
Table 1: Results for object detection on the 3 client datasets, pretraining on server data (COCO + OpenImages). Each client column reports AP / AP50 / AP75, measured in %.
Data (# Images) | Method | AP / AP50 (box) | AP / AP50 (mask)
0 | ImageNet Initial. | 36.2 / 62.3 | 32.0 / 57.6
23K | Uniform Sampling | 38.1 / 64.9 | 34.3 / 60.0
23K | NDS | 40.7 / 66.0 | 36.1 / 61.0
47K | Uniform Sampling | 39.8 / 65.5 | 34.4 / 60.0
47K | NDS | 42.2 / 68.1 | 36.7 / 62.3
59K | Uniform Sampling | 39.5 / 64.9 | 34.9 / 60.4
59K | NDS | 41.7 / 66.6 | 36.7 / 61.9
118K | Full COCO | 41.8 / 66.5 | 36.5 / 62.3
Table 2: Transfer learning results for instance segmentation with Mask R-CNN on Cityscapes by selecting images from COCO.
Data (# Images) | Method | AP / AP50 (box) | AP / AP50 (mask)
0 | ImageNet Initial. | 36.2 / 62.3 | 32.0 / 57.6
118K | Uniform Sampling | 37.5 / 62.5 | 32.8 / 57.2
118K | NDS | 39.9 / 65.1 | 35.1 / 59.8
200K | Uniform Sampling | 37.8 / 63.1 | 32.9 / 57.8
200K | NDS | 40.7 / 65.8 | 36.1 / 61.2
Table 3: Transfer learning results for instance segmentation with Mask R-CNN on Cityscapes by selecting images from OpenImages.
Pretrain. Sel. | Method | Stanf. Dogs | Stanf. Cars | Oxford-IIIT Pets | Flowers 102 | CUB200 Birds
0% Random Init. 23.66 18.60 32.35 48.02 25.06
100% Entire Dataset 64.66 52.92 79.12 84.14 56.99
20% Uniform Sample 52.84 42.26 71.11 79.87 48.62
NDS (SP+TS) 72.21 44.40 81.41 81.75 54.00
NDS (SP+SS) 73.46 44.53 82.04 81.62 54.75
NDS (UP+SS) 66.97 44.15 79.20 80.74 52.66
40% Uniform Sample 59.43 47.18 75.96 82.58 52.74
NDS (SP+TS) 68.66 50.67 80.76 83.31 58.84
NDS (SP+SS) 69.97 51.40 81.52 83.27 57.25
NDS (UP+SS) 67.16 49.52 79.69 83.51 57.44
Table 4: Ablation experiments on gating and expert training. SP=Superclass Partition, UP=Unsupervised Partition, TS=Task-Specific experts (experts trained on classif. labels), and SS=Self-Supervised experts (experts trained to predict image rotation).
Data Method Oxford-IIIT Pet CUB200 Birds
20% Uniform Samp. 71.1 48.6
KNN +  [15] 74.4 51.6
 [37] 81.3 54.3
NDS 82.0 54.8
40% Uniform Samp. 76.0 52.7
KNN +  [15] 78.1 56.1
 [37] 81.0 57.4
NDS 81.5 57.3
Entire ImageNet 79.1 57.0
Table 5: Transfer learning performance on classification datasets comparing data selection methods.
Figure 4: Relationship between domain classifier and proxy task performance on subsets .
Figure 5: Instance segmentation results on Cityscapes using network pre-trained from ImageNet initialization (left), 47K images uniformly sampled (middle), and 47K images from NDS (right). Notice that the output segmentations generally look cleaner when training on NDS-recommended data.
Figure 6: Object detection results in miniModaNet using network pre-trained from ImageNet initialization (left), 90K images uniformly sampled (middle), and 90K images sampled using NDS (right). A score threshold of 0.6 is used to display these images.

We perform experiments on the tasks of classification, detection, and instance segmentation. We experiment with 3 datasets on the server side and 7 on the client side.

4.1 Support for Diverse Clients and Tasks

In this section, we provide an extensive evaluation of our approach on three different client scenarios: autonomous driving, fashion, and general scenes. In each of them, the client's goal is to improve the performance of its downstream task (i.e., object detection or instance segmentation) by pretraining on a budget-constrained amount of data. Here, the dataserver is the same and indexes the massive OpenImages [31] and MS-COCO [33] datasets. Specifically, our server dataset can be seen as the union of COCO and OpenImages [31, 33] (approx. 538 GB), represented compactly in the weights of the self-supervised trained experts.

Autonomous Driving: Here, we use Cityscapes [13] as the client's dataset, which contains 5,000 finely annotated images, of which 2,975 are for training and 500 for validation. Eight object classes are provided with per-instance annotation. In practice, this simulates the scenario of a client that wants to boost its performance numbers by pretraining on some data. This scenario is ubiquitous among state-of-the-art instance and semantic segmentation approaches on the Cityscapes leaderboard [45, 24, 52].

Fashion: We use the ModaNet dataset [51] to simulate a client that wants to improve its model's performance on the task of object detection of fashion-related objects. ModaNet is a large-scale street fashion dataset consisting of 13 classes of objects and over 55K annotated images. Since the effectiveness of pre-training diminishes with the size of the target dataset [23], we create a small version of the dataset for our experiments, consisting of randomly selected training and validation images that keep the class distribution of the original dataset. We call it miniModaNet in our experiments.

General Scenes: We use PASCAL VOC object detection [17] as the client's dataset for this scenario. The task in this case is object detection over 20 object classes. We use the trainval2007 set containing 5,011 images for training and evaluate on test2007 containing 4,952 images.

Evaluation: We use Intersection-Over-Union (IoU) to measure the client's performance on its downstream task. Specifically, we follow the MS-COCO evaluation style and compute average precision at three IoU settings: a) IoU = 0.50, b) IoU = 0.75, c) an average over ten thresholds IoU ∈ {0.50, 0.55, …, 0.95}. The same evaluation style is used for both object detection and instance segmentation. Notice, however, that in the case of instance segmentation, the overlap is based on segmented regions.
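The IoU underlying all three thresholds can be computed directly from box coordinates; a minimal sketch for axis-aligned boxes (for instance segmentation the same ratio is taken over mask pixels instead):

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counts as a true positive at, say, the 0.50 setting exactly when `box_iou` with a ground-truth box of the same class is at least 0.50.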

Baselines: In this regime, we compare our approach vs no pretraining, uniform sampling, and pretraining on the whole server dataset (i.e., MS-COCO). In all cases, we initialize with ImageNet pretrained weights as they are widely available and this has become a common practice.

Implementation Details: Client. We use Mask-RCNN [24] with a ResNet50-FPN backbone as the client's network. After obtaining a subset $\mathcal{S}^*$, the client pre-trains a network on the selected subset and uses the pre-trained model as initialization for fine-tuning on the client (target) dataset. For object detection, we pre-train with a 681-class (80 classes from COCO, 601 classes from OpenImages) detection head using bounding box labels. For instance segmentation, we pre-train with an 80-class (for COCO) or 350-class (for OpenImages) detection head using object mask labels.

Server. For all self-supervised experts, we use ResNet18 [25] and train our models to predict image rotations. MS-COCO and OpenImages are each partitioned into a fixed number of subsets, with one expert trained per subset.

4.1.1 Qualitative and Quantitative Results

Object Detection: Table 1 reports the average precision at various IoU thresholds of the client's network pre-trained using data selected with different budgets and methods. First, we see a general trend that pre-training the network on sampled detection data helps performance when fine-tuning on smaller client detection datasets, compared to fine-tuning the network from ImageNet initialization. By pre-training on 90K images from COCO+OpenImages, we observe a 1-5% gain in AP at 0.5 IoU across all 3 client (target) datasets. This result is consistent with [32], which suggests that a pre-training task other than classification is beneficial for improving transfer performance on localization tasks. Next, we see that under the same budget of 90K/180K images from the server, pre-training with data selected by NDS outperforms the baseline that uses images randomly sampled from the server dataset, for all client datasets.

Instance Segmentation: Table 2 reports instance segmentation performance when sampling 23K, 47K, and 59K images from COCO for pre-training for Cityscapes. Pre-training on subsets selected by NDS is 2-3% better than the uniform-sampling baseline. Furthermore, using 40% (47K/118K) or 50% (59K/118K) of the COCO images yields performance comparable to (or better than) using the entire 100% (118K). Table 3 shows the results of sampling 118K and 200K images from OpenImages as the server dataset.

Qualitative Results: Figure 6 shows qualitative results on miniModaNet from detectors pre-trained from ImageNet, from uniformly sampled server images, and from images sampled using NDS. In the cases shown, the network pre-trained on the data recommended by NDS shows better localization ability and makes more accurate predictions.

4.2 Support for Diverse Clients, Same Task

For completeness, and in order to compare to stronger baselines that are limited to classification tasks, we also quantitatively evaluate the performance of NDS in the same-client-same-task regime. In this case, the task is set to classification and the server indexes the Downsampled ImageNet [12] dataset, a variant of ImageNet [16] resized to 32×32. In this case, we use experts.

Client’s Datasets: We experiment with several small classification datasets. Specifically, we use Stanford Dogs [29], Stanford Cars [30], Oxford-IIIT Pets [40], Flowers 102 [38], and CUB200 Birds [48] as client datasets.

Implementation Details: We use ResNet18 [25] as our client’s network architecture, with an input size of 32×32 during training. Once subsets of server data are selected, we pre-train on the selected subset and evaluate performance by fine-tuning on the client (target) datasets.

Comparison to data selection methods: Cui et al. [15] and Ngiam et al. [37] recently proposed data selection methods for improving transfer learning for classification tasks. In this restricted regime, we can compare to these methods. Specifically, we compare our NDS with [37], which samples data based on a probability over source-dataset classes computed by pseudo-labeling the target dataset with a classifier trained on the source dataset. We also create a KNN baseline by adapting Cui et al.’s method [15]: here, we sample from the most similar categories, measured by the similarity of mean category features between the client and server data. We emphasize that these two approaches are limited to the classification task and cannot handle diverse tasks. Furthermore, they do not scale to datasets beyond classification, and [37] does not scale to a growing dataserver. Our approach achieves comparable results to [37], and can additionally be applied to source datasets without classification labels, such as MS-COCO, or even to unlabeled datasets.
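The KNN baseline adapted from [15] can be sketched roughly as follows (our simplified illustration using cosine similarity; the exact features and distance used in [15] may differ, and `knn_category_selection` is a name we introduce):

```python
import numpy as np

def knn_category_selection(server_feats, server_labels, client_feats, top_k):
    """Rank server categories by cosine similarity between each category's
    mean feature and the client's mean feature; return the top_k ids."""
    client_mean = client_feats.mean(axis=0)
    client_mean = client_mean / np.linalg.norm(client_mean)
    sims = {}
    for c in np.unique(server_labels):
        m = server_feats[server_labels == c].mean(axis=0)
        sims[int(c)] = float(m @ client_mean / np.linalg.norm(m))
    return sorted(sims, key=sims.get, reverse=True)[:top_k]

# toy example: server category 0 is aligned with the client's features
server = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
labels = np.array([0, 0, 1, 1])
client = np.array([[1.0, 0.05], [0.9, 0.0]])
best = knn_category_selection(server, labels, client, top_k=1)
```

Note that this baseline presupposes class labels on the server side, which is exactly the restriction NDS avoids.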

Figure 7: Simulating an incrementally growing dataserver, and the time required to “train” a model to represent the server. We compare NDS to the baseline of [37] (which is limited to classification tasks).

4.3 Ablation Experiments

Domain Confusion: To see how well the proxy task performance reflects domain confusion, we compare the two directly. Following the idea of [5, 11, 18], for each subset we estimate the domain confusion between that subset and the target domain. Figure 4 plots domain confusion against proxy task performance using several classification datasets as the target (client) domain. In this plot, the highest average loss corresponds to the subset with the highest domain confusion (i.e., the subset that is most indistinguishable from the target domain). Notice that this correlates with the expert that achieves the highest proxy task performance.
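Concretely, a common proxy in the spirit of the domain-classifier estimates of [5, 18] trains a classifier to separate the subset from the target and reads off its loss: a high loss means the two sets are hard to tell apart. The sketch below is our own simplified illustration with a linear logistic classifier (`domain_confusion` is a name we introduce), not the authors' estimator:

```python
import numpy as np

def domain_confusion(source, target, steps=500, lr=0.5):
    """Train a linear logistic classifier to separate source from target
    features; return its final average log-loss. A high loss means the
    two sets are nearly indistinguishable (high domain confusion)."""
    X = np.vstack([source, target])
    y = np.concatenate([np.zeros(len(source)), np.ones(len(target))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                      # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    p = np.clip(1.0 / (1.0 + np.exp(-(X @ w + b))), 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

On features drawn from the same distribution the loss stays near ln 2, while well-separated feature sets drive it toward zero.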

Ablation on gating and expert training: In Table 4, we compare different instantiations of our approach on five client classification datasets. For all instantiations, pre-training on our selected subset significantly outperforms pre-training on a randomly selected subset of the same size. Table 4 further shows that, under the same superclass partition, subsets obtained by sampling according to the transferability measured by self-supervised experts (SP+SS) yield downstream performance similar to subsets sampled according to the transferability measured by task-specific experts (SP+TS). This suggests that self-supervised training of the experts can successfully be used as a proxy to decide which data points from the source dataset are most useful for the target dataset.
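The expert-weighted sampling underlying these comparisons can be sketched as follows (our simplified illustration; the function name and the hard image-to-expert assignment are ours):

```python
import numpy as np

def sample_by_expert_weights(expert_of_image, expert_scores, budget, rng):
    """Draw a budget of server images without replacement, with probability
    proportional to the transferability score of the expert whose
    partition each image belongs to."""
    probs = expert_scores[expert_of_image].astype(float)
    probs = probs / probs.sum()
    return rng.choice(len(expert_of_image), size=budget, replace=False, p=probs)

# toy server: 100 images per expert; expert 0 transfers much better
expert_of_image = np.array([0] * 100 + [1] * 100)
scores = np.array([0.9, 0.1])
idx = sample_by_expert_weights(expert_of_image, scores, 50,
                               np.random.default_rng(0))
```

Because only the per-expert scores differ between the SP+SS and SP+TS variants, the sampling step itself is identical in both.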

Scalability: Figure 7 analyzes the (simulated) training time required by the server as new datasets are incrementally added to it. We simulate a comparison between [37] (which needs to retrain its model each time a dataset is added, and thus scales linearly) and NDS.
Figure 8: Our Neural Data Server web-service. Note that NDS does not host datasets, but rather links to datasets hosted by original providers.
Dataset | Images | Classes | Task | Evaluation Metric
Downsampled ImageNet [12] | 1,281,167 | 1000 | classification | -
OpenImages [31] | 1,743,042 | 601 (bbox) / 300 (mask) | detection | -
COCO [33] | 118,287 | 80 | detection | -
VOC2007 [17] | 5,011 (trainval) / 4,962 (test) | 20 | detection | mAP
miniModaNet [51] | 1,000 (train) / 1,000 (val) | 13 | detection | mAP
Cityscapes [13] | 2,975 (train) / 500 (val) | 8 | detection | mAP
Stanford Dogs [29] | 12,000 (train) / 8,580 (val) | 120 | classification | Top-1
Stanford Cars [30] | 8,144 (train) / 8,041 (val) | 196 | classification | Top-1
Oxford-IIIT Pets [40] | 3,680 (train) / 3,369 (val) | 37 | classification | Top-1
Flowers 102 [38] | 2,040 (train) / 6,149 (val) | 102 | classification | Top-1
CUB200 Birds [48] | 5,994 (train) / 5,794 (val) | 200 | classification | Top-1
Table 6: Summary of the number of images, categories, and evaluation metrics for the datasets used in our experiments. We used 10 datasets (3 server datasets and 7 client datasets) to evaluate NDS.
Data (# Images) Method car truck rider bicycle person bus mcycle train
0 ImageNet Initialization 36.2 62.3 32.0 57.6 49.9 30.8 23.2 17.1 30.0 52.4 17.9 35.2
23K Uniform Sampling 38.1 64.9 34.3 60.0 50.0 34.2 24.7 19.4 32.8 52.0 18.9 42.1
NDS 40.7 66.0 36.1 61.0 51.3 35.4 25.9 20.4 33.9 56.9 20.8 44.0
47K Uniform Sampling 39.8 65.5 34.4 60.0 50.7 31.8 25.4 18.3 33.3 55.2 21.2 38.9
NDS 42.2 68.1 36.7 62.3 51.8 36.9 26.4 19.8 33.8 59.2 22.1 44.0
59K Uniform Sampling 39.5 64.9 34.9 60.4 50.8 34.8 26.3 18.9 33.2 55.5 20.8 38.7
NDS 41.7 66.6 36.7 61.9 51.7 37.2 26.9 19.6 34.2 56.7 22.5 44.5
118K Full COCO 41.8 66.5 36.5 62.3 51.5 37.2 26.6 20.0 34.0 56.0 22.3 44.2
Table 7: Transfer to instance segmentation with Mask R-CNN [24] on Cityscapes by selecting images from COCO.
Data (# Images) Method car truck rider bicycle person bus mcycle train
0 ImageNet Initialization 36.2 62.3 32.0 57.6 49.9 30.8 23.2 17.1 30.0 52.4 17.9 35.2
118K Uniform Sampling 37.5 62.5 32.8 57.2 49.6 33.2 23.3 18.0 30.8 52.9 17.4 37.1
NDS 39.9 65.1 35.1 59.8 51.6 36.7 24.2 18.3 32.4 56.4 18.0 42.8
200K Uniform Sampling 37.8 63.1 32.9 57.8 49.7 31.7 23.8 17.8 31.0 51.8 18.4 38.8
NDS 40.7 65.8 36.1 61.2 51.4 38.2 24.2 17.9 32.3 57.8 19.7 47.3
Table 8: Transfer to instance segmentation with Mask R-CNN [24] on Cityscapes by selecting images from OpenImages.

Limitations and Discussion: A limitation of our method is that the annotation quality/statistics of the dataserver datasets are not considered. This shows in our instance segmentation experiment, where the gains from pre-training on images sampled from OpenImages are smaller than those from pre-training on MS-COCO. This is likely because MS-COCO has on average 7 instance annotations per image, while OpenImages contains many images with no mask annotations, or at most 2 instance annotations per image. OpenImages was moreover labeled semi-automatically, so in many cases the annotations are noisy.

5 Conclusion

In this work, we propose a novel method that aims to optimally select subsets of data from a large dataserver given a particular target client. In particular, we represent the server’s data with a mixture of experts trained on a simple self-supervised task. These are then used as a proxy to determine the most important subset of the data for the server to send to the client. We experimentally show that our method is general, can be applied to any pre-training and fine-tuning scheme, and even handles the case where no labeled data is available (only raw data). We hope that our work opens a more effective way of performing transfer learning in the era of massive datasets.

In the future, we aim to increase the capability of NDS to also support other modalities such as 3D, text and speech.


The authors acknowledge partial support by NSERC. Sanja Fidler acknowledges the Canada CIFAR AI Chair award at the Vector Institute. We thank Relu Patrascu for his continuous infrastructure support. We also thank Amlan Kar, Huan Ling, and Jun Gao for early discussions, and Tianshi Cao for feedback on the manuscript.

Figure 9: Top 8 clusters from COCO+OpenImages corresponding to the best performing expert adapted on miniModaNet.
Figure 10: Top 8 clusters from COCO+OpenImages corresponding to the best performing expert adapted on Cityscapes.
Figure 11: Top 8 clusters from COCO+OpenImages corresponding to the best performing expert adapted on PASCAL VOC.
Pretrain. Sel. Method Target Dataset
Stanf. Dogs Stanf. Cars Oxford-IIIT Pets Flowers 102 CUB200 Birds
0% Random Init. 23.66 18.60 32.35 48.02 25.06
100% Entire Dataset 41.64 46.83 56.34 67.17 35.28
20% Uniform Sample 41.01 44.16 56.01 64.42 34.41
NDS 39.72 43.56 54.62 65.90 34.57
Table 9: MoCo Pretraining: Top-1 classification accuracy on five client datasets (columns) pretrained on the different subsets of data (rows) on the pretext task of instance discrimination.
Pretrain. Sel. Method Target Dataset
Stanf. Dogs Stanf. Cars Oxford-IIIT Pets Flowers 102 CUB200 Birds
0% Random Init. 23.66 18.60 32.35 48.02 25.06
100% Entire Dataset 47.83 55.87 67.54 78.99 44.25
20% Uniform Sample 42.74 42.82 60.25 72.59 39.30
NDS 43.33 43.34 61.49 72.85 40.47
Table 10: RotNet Pretraining: Top-1 classification accuracy on five client datasets (columns) pretrained on the different subsets of data (rows) on the pretext task of predicting image rotations.

6 Appendix

In the Appendix, we provide additional details and results for our Neural Data Server.

6.1 Web Interface

Our NDS is running as a web-service. We invite interested readers to try it and give us feedback. A snapshot of the website is provided in Figure 8.

6.2 Additional Results

We visually assess domain confusion in Figures 9, 10, and 11. We randomly select 9 images per cluster and display the top 8 clusters corresponding to the experts with the highest proxy task performance on miniModaNet, Cityscapes, and PASCAL VOC. We observe that the images from the top clusters do indeed reflect the types of objects one encounters in autonomous driving, fashion, and general scenes corresponding to the respective target (client) datasets, showcasing the plausibility of NDS.

We further extend Tables 2 and 3 of the main paper by showing detailed instance segmentation results for fine-tuning on the Cityscapes dataset. We report the performance measured by COCO-style mask AP (averaged over IoU thresholds) for the 8 object categories. Table 7 reports the mask AP when sampling 23K, 47K, and 59K images from COCO for pre-training for Cityscapes, and Table 8 reports the mask AP when sampling 118K and 200K images from OpenImages for pre-training.

Self-Supervised Pretraining:

We evaluate NDS in a scenario where a client uses self-supervised learning to pretrain on the selected server data. We follow the same setup as in Section 4.2, except that rather than pretraining with classification labels, clients ignore the labels and pretrain using two self-supervised learning approaches: MoCo [22] and RotNet [20]. In Table 9, we pretrain on the selected data subset using MoCo, an approach recently proposed by He et al. in which the model is trained on the pretext task of instance discrimination. In Table 10, we use RotNet [20] to pretrain our model on the pretext task of predicting image rotations. We observe that with MoCo, pretraining on the NDS-selected subset does not always outperform pretraining on a randomly sampled subset of the same size. With RotNet, pretraining on the NDS-selected subset shows a slight gain over the uniform-sampling baseline. These results suggest that the optimal dataset for self-supervised pretraining may depend on the pretext task; more formal studies of the relationship between training data, pretraining task, and transfer performance are required.


  • [1] S. Abu-El-Haija, N. Kothari, J. Lee, A. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan (2016) YouTube-8m: a large-scale video classification benchmark. ArXiv abs/1609.08675. Cited by: §1.
  • [2] D. Acuna, A. Kar, and S. Fidler (2019) Devil is in the edges: learning semantic boundaries from noisy annotations. In CVPR, Cited by: §2.
  • [3] D. Acuna, H. Ling, A. Kar, and S. Fidler (2018) Efficient interactive annotation of segmentation datasets with polygon-rnn++. In CVPR, Cited by: §2.
  • [4] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan (2009) A theory of learning from different domains. Machine Learning 79, pp. 151–175. Cited by: §3.4, §3.4.
  • [5] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira (2007) Analysis of representations for domain adaptation. In NeurIPS, B. Schölkopf, J. C. Platt, and T. Hoffman (Eds.), pp. 137–144. Cited by: §3.4, §4.3.
  • [6] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. Cited by: §2.
  • [7] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth (2017) Practical secure aggregation for privacy-preserving machine learning. In ACM Conf. on Computer and Communications Security, Cited by: §2.
  • [8] S. Caelles, A. Montes, K. Maninis, Y. Chen, L. V. Gool, F. Perazzi, and J. Pont-Tuset (2018) The 2018 davis challenge on video object segmentation. ArXiv abs/1803.00557. Cited by: §1.
  • [9] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom (2019) NuScenes: a multimodal dataset for autonomous driving. ArXiv abs/1903.11027. Cited by: §1.
  • [10] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2016) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs.. CoRR abs/1606.00915. External Links: Link Cited by: §1.
  • [11] M. Chen, K. Q. Weinberger, Z. Xu, and F. Sha (2015) Marginalizing stacked linear denoising autoencoders. Journal of Machine Learning Research 16 (1), pp. 3849–3875. Cited by: §3.4, §4.3.
  • [12] P. Chrabaszcz, I. Loshchilov, and F. Hutter (2017) A downsampled variant of imagenet as an alternative to the cifar datasets. External Links: 1707.08819 Cited by: §4.2, Table 6.
  • [13] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. CVPR, pp. 3213–3223. Cited by: §4.1, Table 6.
  • [14] G. Csurka (2017) Domain adaptation for visual applications: a comprehensive survey. arXiv preprint arXiv:1702.05374. Cited by: §2.
  • [15] Y. Cui, Y. Song, C. Sun, A. Howard, and S. J. Belongie (2018) Large scale fine-grained categorization and domain-specific transfer learning. CVPR, pp. 4109–4118. Cited by: §2, §2, §4.2, Table 5.
  • [16] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, Cited by: §2, §4.2.
  • [17] M. Everingham, L. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman (2009) The pascal visual object classes (voc) challenge. International Journal of Computer Vision 88, pp. 303–338. Cited by: §4.1, Table 6.
  • [18] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. S. Lempitsky (2015) Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17, pp. 59:1–59:35. Cited by: §3.4, §4.3.
  • [19] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, Cited by: §1.
  • [20] S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In ICLR, External Links: Link Cited by: §3.2.2, §6.2.
  • [21] S. Gross, M. Ranzato, and A. Szlam (2017) Hard mixtures of experts for large scale weakly supervised vision. In CVPR, pp. 6865–6873. Cited by: §3.2.1.
  • [22] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2019) Momentum contrast for unsupervised visual representation learning. External Links: 1911.05722 Cited by: §6.2.
  • [23] K. He, R. B. Girshick, and P. Dollár (2018) Rethinking imagenet pre-training. CoRR abs/1811.08883. External Links: Link, 1811.08883 Cited by: §1, §4.1.
  • [24] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick (2017) Mask r-cnn. ICCV, pp. 2980–2988. Cited by: §1, §4.1, §4.1, Table 8.
  • [25] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep residual learning for image recognition. CVPR, pp. 770–778. Cited by: §4.1, §4.2.
  • [26] G. E. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. ArXiv abs/1503.02531. Cited by: §3.2.1.
  • [27] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton (1991) Adaptive mixtures of local experts. Neural Computation 3, pp. 79–87. Cited by: §3.2.1.
  • [28] A. Kar, A. Prakash, M. Liu, E. Cameracci, J. Yuan, M. Rusiniak, D. Acuna, A. Torralba, and S. Fidler (2019) Meta-sim: learning to generate synthetic datasets. In ICCV, Cited by: §2.
  • [29] A. Khosla, N. Jayadevaprakash, B. Yao, and L. Fei-Fei (2011-06) Novel dataset for fine-grained image categorization. In CVPR Workshop on Fine-Grained Visual Categorization, Colorado Springs, CO. Cited by: §4.2, Table 6.
  • [30] J. Krause, M. Stark, J. Deng, and L. Fei-Fei (2013) 3D object representations for fine-grained categorization. In IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia. Cited by: §4.2, Table 6.
  • [31] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, T. Duerig, and V. Ferrari (2018) The open images dataset v4: unified image classification, object detection, and visual relationship detection at scale. arXiv:1811.00982. Cited by: §1, §4.1, Table 6.
  • [32] H. Li, B. Singh, M. Najibi, Z. Wu, and L. S. Davis (2019) An analysis of pre-training on object detection. ArXiv abs/1904.05871. Cited by: §4.1.1.
  • [33] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In ECCV, Cited by: §1, §4.1, Table 6.
  • [34] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten (2018) Exploring the limits of weakly supervised pretraining. In ECCV, pp. 181–196. Cited by: §2.
  • [35] H. B. McMahan, E. Moore, D. Ramage, and B. A. y Arcas (2016) Federated learning of deep networks using model averaging. ArXiv abs/1602.05629. Cited by: §2.
  • [36] B. Mehta, M. Diaz, F. Golemo, C. J. Pal, and L. Paull (2019) Active domain randomization. arXiv preprint arXiv:1904.04762. Cited by: §2.
  • [37] J. Ngiam, D. Peng, V. Vasudevan, S. Kornblith, Q. V. Le, and R. Pang (2018) Domain adaptive transfer learning with specialist models. External Links: 1811.07056 Cited by: §1, §2, §2, Figure 7, §4.2, §4.3, Table 5.
  • [38] M-E. Nilsback and A. Zisserman (2008-12) Automated flower classification over a large number of classes. In Proc. of the Indian Conference on Computer Vision, Graphics and Image Processing, Cited by: §4.2, Table 6.
  • [39] S. J. Pan and Q. Yang (2009) A survey on transfer learning. IEEE Trans. on knowledge and data engineering 22 (10), pp. 1345–1359. Cited by: §2.
  • [40] O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawahar (2012) Cats and dogs. In CVPR, Cited by: §4.2, Table 6.
  • [41] N. Ruiz, S. Schulter, and M. Chandraker (2018) Learning to simulate. arXiv preprint arXiv:1810.02513. Cited by: §2.
  • [42] B. Settles (2009) Active learning literature survey. Technical report University of Wisconsin-Madison Department of Computer Sciences. Cited by: §2.
  • [43] E. Shelhamer, J. Long, and T. Darrell (2017-04) Fully convolutional networks for semantic segmentation. PAMI 39 (4), pp. 640–651. External Links: ISSN 0162-8828, Link, Document Cited by: §1.
  • [44] C. Sun, A. Shrivastava, S. Singh, and A. Gupta (2017) Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, pp. 843–852. Cited by: §2, §2.
  • [45] T. Takikawa, D. Acuna, V. Jampani, and S. Fidler (2019) Gated-scnn: gated shape cnns for semantic segmentation. ArXiv abs/1907.05740. Cited by: §4.1.
  • [46] J. Tremblay, A. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield (2018) Training deep networks with synthetic data: bridging the reality gap by domain randomization. In CVPR Workshop, pp. 969–977. Cited by: §2.
  • [47] S. Tripathi, S. Chandra, A. Agrawal, A. Tyagi, J. M. Rehg, and V. Chari (2019) Learning to generate synthetic data via compositing. In CVPR, pp. 461–470. Cited by: §2.
  • [48] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 Dataset. Technical report Technical Report CNS-TR-2011-001, California Institute of Technology. Cited by: §4.2, Table 6.
  • [49] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In NeurIPS, Cited by: §2.
  • [50] A. R. Zamir, A. Sax, W. B. Shen, L. J. Guibas, J. Malik, and S. Savarese (2018) Taskonomy: disentangling task transfer learning. CVPR, pp. 3712–3722. Cited by: §2, §3.3.1.
  • [51] S. Zheng, F. Yang, M. H. Kiapour, and R. Piramuthu (2018) ModaNet: a large-scale street fashion dataset with polygon annotations. In ACM Multimedia, Cited by: §4.1, Table 6.
  • [52] Y. Zhu, K. Sapra, F. A. Reda, K. J. Shih, S. D. Newsam, A. Tao, and B. Catanzaro (2018) Improving semantic segmentation via video propagation and label relaxation. In CVPR, Cited by: §4.1.