Introduction
Recently, deep structured networks such as deep convolutional (CNN) and recurrent (RNN) neural networks have become increasingly popular in artificial intelligence, showing remarkable performance on many real-world problems, including scene classification
[Krizhevsky, Sutskever, and Hinton 2012], speech recognition [Hinton et al. 2012], gaming [Mnih et al. 2013, Mnih et al. 2015], semantic segmentation [Papandreou et al. 2015], and image captioning [Johnson, Karpathy, and Fei-Fei 2016]. However, like most machine learning techniques, current deep learning approaches are based on conventional statistics and require the problem to be formulated in a structured way. In particular, they are designed to learn a model for a distribution (or a function) that maps a structured input, typically a vector, a matrix, or a tensor, to a structured output.
Consider the task of image classification as an example. The goal here is to predict a label (or a category) of a given image. The most successful approaches address this task with CNNs, i.e. by applying a series of convolutional layers followed by a number of fully connected layers [Krizhevsky, Sutskever, and Hinton 2012, Simonyan and Zisserman 2014, Szegedy et al. 2014, He et al. 2016]. The final output layer is a fixed-sized vector with the length corresponding to the number of categories in the dataset (e.g. 1000 in the case of the ILSVRC challenge [Russakovsky et al. 2015]). Each element in this vector is a score or probability for one particular category, such that the final prediction corresponds to a probability distribution over all classes. The difficulty arises when the number of classes is unknown in advance and, in particular, varies for each example. Then, the final output is generated heuristically by a discretization process, such as choosing the $k$ highest scores [Gong et al. 2013a, Wang et al. 2016], which is not part of the learning process. This shortcoming concerns not only image tagging but also other problems like detection or graph optimization, where connectivity and graph size can be arbitrary. We argue that such problems can be naturally expressed with sets rather than vectors. As opposed to a vector, the size of a set is not fixed in advance, and it is invariant to the ordering of entities within it. Therefore, learning approaches built on conventional statistics cannot be the right choice for these problems. In this paper, we propose a learning approach based on point processes and finite set statistics to deal with sets in a principled manner. More specifically, in the presented model, we assume that the input (the observation) is still structured, but the output is modelled as a set. Our approach is inspired by a recent work on set learning using deep neural networks [Rezatofighi et al. 2017]. The main limitation of that work, however, is that it employs two sets of independent weights (two independent networks) to generate the cardinality and state distributions of the output set. In addition, to generate the final output as a set, sequential inference has to be applied instead of joint inference. In this paper, we derive a principled formulation for performing both learning and inference steps jointly. The main contributions of the paper are summarised as follows:

We present a novel way to learn both cardinality and state distributions jointly within a single deep network. Our model is learned end-to-end to generate the output set.

We perform the inference step both jointly and optimally. We show how we can generate the most likely (the optimal) set using MAP inference for our given model.

Our approach outperforms existing solutions and achieves state-of-the-art results on the task of multi-label image classification on two standard datasets.
Related Work
Handling unstructured input and output data, such as sets or point patterns, for both learning and inference is an emerging field of study that has generated substantial interest in recent years. Approaches such as mixture models [Blei, Ng, and Jordan 2003, Hannah, Blei, and Powell 2011, Tran et al. 2016], learning distributions from a set of samples [Muandet et al. 2012, Oliva, Póczos, and Schneider 2013], model-based multiple instance learning [Vo et al. 2017] and novelty detection from point pattern data [Vo et al. 2016] can be counted as a few out of many examples that use point patterns or sets as input or output and directly or indirectly model set distributions. However, existing approaches often rely on parametric models, e.g. the elements in output sets need to be drawn from an independent and identically distributed (i.i.d.) Poisson point process [Adams, Murray, and MacKay 2009, Vo et al. 2016]. Recently, deep learning has enabled us to use less parametric models to capture highly complex mapping distributions between structured inputs and outputs. Somewhat surprisingly, there are only few works on learning sets using deep neural networks. One interesting exception in this direction is the recent work of Vinyals et al. (2015), which uses an RNN to read and predict sets. However, the output is still assumed to have an ordered structure, which contradicts the orderless (or permutation invariant) property of sets. Moreover, the framework can be used in combination with RNNs only and cannot be trivially extended to any arbitrary learning framework such as feed-forward architectures. Another recent work, proposed by Zaheer et al. (2017), is a deep learning framework which can deal with sets as input with different sizes and permutations. However, the outputs are either assumed to be structured, e.g. a scalar as a regression score, or a set with the same entities as the input set, which prevents this approach from being used for problems that require output sets with arbitrary entities. Perhaps the work most related to ours is the deep set network recently proposed by Rezatofighi et al. (2017), which seamlessly integrates a deep learning framework into set learning in order to learn to predict sets in two challenging computer vision applications, image tagging and pedestrian detection. However, the approach requires training two independent networks to model a set, one for the cardinality and one for the state distribution. Our approach is largely inspired by this latter work but overcomes its limitation of independent learning and inference. To validate our model, we apply it to the multi-label image classification task.
Despite its relevance, there exists rather little work on this problem that makes use of deep architectures. One example is Gong et al. (2013a), who combine deep CNNs with a top-$k$ approximate ranking loss to predict multiple labels. Wei et al. (2014) propose a Hypotheses-CNN-Pooling architecture that is specifically designed to handle multi-label output. While both methods open a promising direction, their underlying architectures largely ignore the correlation between multiple labels. To address this limitation, Wang et al. (2016) recently proposed a model that combines CNNs and RNNs to predict an arbitrary number of classes in a sequential manner. RNNs, however, are not suitable for set prediction, mainly for two reasons. First, the output represents a sequence and not a set, and is thus highly dependent on the prediction order, as was shown recently by Vinyals et al. (2015). Second, the final prediction may not result in a feasible solution (e.g. it may contain the same element multiple times), such that post-processing or heuristics such as beam search must be employed [Vinyals, Fortunato, and Jaitly 2015, Wang et al. 2016]. Here we show that our approach is not only guaranteed to always predict a valid set, but also outperforms previous methods.
Background
To better explain our approach, we first review some mathematical background and introduce the notation used throughout the paper. In statistics, a continuous random variable $y$ is a variable that can take an infinite number of possible values. A continuous random vector can be defined by stacking several continuous random variables into a fixed-length vector $\mathbf{y} = (y_1, \dots, y_m)$. The mathematical function describing the possible values of a continuous random vector, and their associated joint probabilities, is known as a probability density function (PDF) $p(\mathbf{y})$, such that $\int p(\mathbf{y})\, d\mathbf{y} = 1$. In contrast, a random finite set (RFS) is a finite-set valued random variable $\mathcal{Y} = \{y_1, \dots, y_m\}$. The main difference between an RFS and a random vector is that for the former, the number of constituent variables, $m$, is random and the variables themselves are random and unordered. Throughout the paper, we use $\mathcal{Y}$ for a set with unknown cardinality, $\mathcal{Y}_m$ for a set with known cardinality $m$, and $\mathbf{y}$ for a vector (or an ordered set) with known dimension.
A statistical function describing a finite-set variable $\mathcal{Y}$ is a combinatorial probability density function $p(\mathcal{Y})$ which consists of a discrete probability distribution, the so-called cardinality distribution, and a family of joint probability densities on both the number and the values of the constituent variables [Mahler 2007, Vo et al. 2017], i.e.

$$p(\mathcal{Y}) = p(m)\, m!\, U^m\, p_m(y_1, \dots, y_m), \qquad (1)$$

where $p(m)$ is the cardinality distribution of the set and $p_m(y_1, \dots, y_m)$ is a symmetric joint probability density of the set elements given the known cardinality $m$. The normalisation factor $m!$ between $p(\mathcal{Y})$ and $p_m(\cdot)$ appears because the probability density for a set with known cardinality $m$ must be equally distributed among all the $m!$ possible permutations of the corresponding vector [Mahler 2007, Vo et al. 2017]. $U$ is the unit of hypervolume in the feature space, which cancels out the unit of the probability density, making it unitless and thereby avoiding a unit mismatch across the different dimensions (cardinalities). Without this normalising constant, the comparison between probabilities of sets with different cardinalities is not properly defined, because a density over a smaller set size can always be made to dominate: for example, the relative order of $p(\{y_1\})$ and $p(\{y_1, y_2\})$ would then depend on the measurement unit rather than on the particular choice of $y_1$ and $y_2$. Please refer to [Vo et al. 2017] for an intuitive discussion.
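To illustrate the role of $U$, the following toy numerical sketch (not part of the original formulation; the Gaussian singleton density, the uniform cardinality prior and the example sets are our own assumptions) compares the density of a singleton and a two-element set, once with and once without the $U^m$ factor, in two different measurement units:

```python
import math

def pdf(y_metres, metres_per_unit):
    # Standard-normal density of one element, expressed per chosen length
    # unit: changing units rescales a density by metres_per_unit.
    return metres_per_unit * math.exp(-0.5 * y_metres ** 2) / math.sqrt(2 * math.pi)

def set_density(ys_metres, p_m, metres_per_unit, use_U=True):
    # f({y_1..y_m}) = p(m) * m! * U^m * prod_k p(y_k) for an i.i.d. model.
    # U is one unit of hypervolume per element (here: 1 metre), which equals
    # 1 / metres_per_unit when expressed in the chosen unit.
    m = len(ys_metres)
    f = p_m[m] * math.factorial(m)
    for y in ys_metres:
        f *= pdf(y, metres_per_unit)
    if use_U:
        f *= (1.0 / metres_per_unit) ** m
    return f

p_m = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}   # uniform cardinality distribution
A, B = [0.1], [0.1, 0.2]               # candidate sets (element values in metres)

for unit, mpu in [("metres", 1.0), ("km", 1000.0)]:
    raw = [set_density(S, p_m, mpu, use_U=False) for S in (A, B)]
    norm = [set_density(S, p_m, mpu, use_U=True) for S in (A, B)]
    # without U, the ranking of A vs B depends on the unit; with U it is stable
    print(unit, "raw A>B:", raw[0] > raw[1], "normalised A>B:", norm[0] > norm[1])
```

Without the $U^m$ factor the ranking of the two sets flips when switching from metres to kilometres; with it, the normalised densities are unitless and identical in both unit systems.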
Finite Set Statistics provides powerful and practical mathematical tools for dealing with random finite sets, based on a notion of integration and density that is consistent with point process theory [Mahler 2007].¹ For example, similar to the definition of a PDF for a random variable, the PDF of an RFS must sum to unity over all possible cardinality values and all possible element values, as well as their permutations. This type of statistics, which is derived from point process theory, defines basic mathematical operations on finite sets such as functions, derivatives and integrations, as well as other statistical tools such as the probability density function of a random finite set and its statistical moments [Mahler 2007, Vo et al. 2017]. For further details on point processes, we refer the reader to textbooks such as [Chiu et al. 2013, Daley and Vere-Jones 2007, Moller and Waagepetersen 2003].

¹A random finite set can be viewed as a simple finite point process [Baddeley, Bárány, and Schneider 2007].

Conventional machine learning approaches, such as Bayesian learning and convolutional neural networks, have been proposed to learn the optimal parameters $\mathbf{w}^*$ of a distribution $p(\mathbf{y}|\mathbf{x}, \mathbf{w})$ which maps the input vector $\mathbf{x}$ to the output vector $\mathbf{y}$. In this paper, we instead propose an approach that can learn the parameters $\mathbf{w}^*$ for a set distribution that allows one to map the input vector $\mathbf{x}$ to the output set $\mathcal{Y}$, i.e. $p(\mathcal{Y}|\mathbf{x}, \mathbf{w})$. For mathematical convenience, we use an i.i.d.-cluster point process model. Moreover, we target applications where the order of the outputs during training is irrelevant, e.g. multi-label image classification. Changing the application, or relaxing the i.i.d. assumption to non-i.i.d. set elements, may require dealing with the permutation invariance of sets during the learning step, which leads to serious mathematical complexities and is left for future work.

Joint Deep Set Network
We follow the convention introduced in [Rezatofighi et al. 2017] and define a training set $\mathcal{D} = \{(\mathbf{x}_i, \mathcal{Y}_i)\}$, where each training sample $i = 1, \dots, N$ is a pair consisting of an input feature $\mathbf{x}_i$ (e.g. an image) and an output (or label) set $\mathcal{Y}_i = \{y_1, y_2, \dots, y_{m_i}\}$. In the following we will drop the instance index $i$ for better readability. Note that $m$ denotes the cardinality of set $\mathcal{Y}$. Following the definition in Eq. (1), the probability density of a set $\mathcal{Y}$ with an unknown cardinality is defined as

$$p(\mathcal{Y}|\mathbf{x}, \mathbf{w}) = p(m|\mathbf{x}, \mathbf{w})\, m!\, U^m\, p_m(y_1, \dots, y_m|\mathbf{x}, \mathbf{w}), \qquad (2)$$

where $\mathbf{w}$ denotes the collection of parameters which model both the cardinality distribution $p(m|\cdot)$ of the set elements and the joint distribution $p_m(\cdot)$ of the set element values for a fixed cardinality $m$. Note that in contrast to previous works [Rezatofighi et al. 2017, Vo et al. 2016, Vo et al. 2017], which assume that two sets of independent parameters (two independent networks) are required to represent the set distribution, we will show that one set of parameters is sufficient to learn this distribution and, as it turns out, also yields better performance. The above formulation represents the probability density of a set in a very general form, completely independent of the choices of both cardinality and state distributions. It is thus straightforward to transfer it to many applications that require the output to be a set. However, to make the problem amenable to mathematical derivation and implementation, we adopt two assumptions: i) the outputs (or labels) in the set are derived from an independent and identically distributed (i.i.d.) cluster point process model, and ii) their cardinality follows a categorical distribution parameterised by the event probabilities $\boldsymbol{\alpha}$. Thus, we can write the distribution as

$$p(\mathcal{Y}|\mathbf{x}, \mathbf{w}) = \mathrm{Cat}(m; \boldsymbol{\alpha})\, m!\, U^m \prod_{k=1}^{m} p(y_k|\mathbf{x}, \mathbf{w}), \qquad (3)$$

where $p(y_k|\mathbf{x}, \mathbf{w})$ denotes the probability density of $y_k$ taking on its state in a singleton set, and $\boldsymbol{\alpha}$ is the vector of event probabilities, i.e. $\alpha_m \geq 0$ and $\sum_m \alpha_m = 1$.
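Under these two assumptions, the set density of Eq. (3) is cheap to evaluate in log space. A minimal sketch (the label names, singleton probabilities and cardinality vector are hypothetical; $\log U = 0$ by default):

```python
import math

def set_log_density(singleton_logp, labels, card_probs, log_U=0.0):
    # log p(Y|x,w) = log Cat(m; alpha) + log m! + m*log U + sum_k log p(y_k|x,w)
    # singleton_logp: dict label -> log p(y|x,w); card_probs[m] = Cat(m; alpha).
    m = len(labels)
    lp = math.log(card_probs[m]) + math.lgamma(m + 1) + m * log_U
    for y in labels:
        lp += singleton_logp[y]
    return lp

logp = {"person": math.log(0.6), "dog": math.log(0.3), "car": math.log(0.1)}
alpha = [0.2, 0.5, 0.2, 0.1]          # event probabilities over m = 0..3
print(set_log_density(logp, {"person", "dog"}, alpha))
```

Note the `math.lgamma(m + 1)` term, which equals $\log m!$ and accounts for the permutations of the set elements.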
Posterior distribution
To learn the parameters $\mathbf{w}$, we assume that the training samples are independent of each other and that the distribution $p(\mathbf{x})$ from which the input data is sampled is independent of both the output and the parameters. Then, the posterior distribution over the parameters can be derived as

$$p(\mathbf{w}|\mathcal{D}) \propto p(\mathbf{w}) \prod_{i=1}^{N} \left[ m_i!\, U^{m_i} \left( \prod_{k=1}^{m_i} p(y_k|\mathbf{x}_i, \mathbf{w}) \right) \int \mathrm{Cat}(m_i; \boldsymbol{\alpha})\, p(\boldsymbol{\alpha})\, d\boldsymbol{\alpha} \right]. \qquad (4)$$

A closed-form solution for the integral in Eq. (4) can be obtained by using conjugate priors:

$$m \sim \mathrm{Cat}(m; \boldsymbol{\alpha}), \qquad \boldsymbol{\alpha} \sim \mathrm{Dir}(\boldsymbol{\alpha}; \boldsymbol{\gamma}),$$

where $\mathrm{Cat}(m; \boldsymbol{\alpha})$ and $\mathrm{Dir}(\boldsymbol{\alpha}; \boldsymbol{\gamma})$ represent, respectively, a categorical distribution with event probabilities $\boldsymbol{\alpha}$ and a Dirichlet distribution with parameters $\boldsymbol{\gamma}$. Moreover, $p(\mathbf{w})$ can be assumed to be a zero-mean normal distribution with covariance equal to $\sigma^2 \mathbf{I}$, i.e. $\mathcal{N}(\mathbf{w}; 0, \sigma^2 \mathbf{I})$. The key difference between our method and [Rezatofighi et al. 2017] is that we only need to use one network, as opposed to the two networks used in the previous work. It is important to note that our method jointly predicts both the cardinality and the set elements, as opposed to sequentially predicting the cardinality first and then the set elements as previously done in [Rezatofighi et al. 2017]. We have provided a comparison between the graphical models of both methods in terms of plate notation in Fig. 1 to further illustrate their differences. We assume that the cardinality follows a categorical distribution whose event probabilities vector $\boldsymbol{\alpha}$ is estimated from a Dirichlet distribution with parameters $\boldsymbol{\gamma}$, which can be directly estimated from the data $\mathcal{D}$. Note that the cardinality distribution in Eq. (3) can be replaced by any other discrete distribution, e.g. Poisson, binomial or negative binomial (cf. [Rezatofighi et al. 2017]). Here, we use the categorical distribution as the cardinality model, which better suits the task at hand. The rationale is that Poisson and negative binomial are long-tailed distributions whose variance increases with their mean. Therefore, the final model will have more uncertainty (and possibly a higher error) when estimating the cardinality of sets with high cardinality values. In contrast, the categorical distribution does not have the drawback of correlating its mean and variance. Moreover, in the image tagging application, the maximum cardinality is often known and there is no need to use long-tailed distributions, which are more suitable for applications where the maximum cardinality is unknown.
Consequently, the integral in Eq. (4) simplifies and forms a Dirichlet-Categorical distribution,

$$\mathrm{DC}(m; \mathcal{D}, \boldsymbol{\gamma}) = \frac{N_m + \gamma_m}{N + \sum_{m'} \gamma_{m'}}, \qquad (5)$$

where $N_m$ is the number of samples in the training set with cardinality $m$, and $N$ is the total number of training samples. Finally, the full posterior distribution can be written as

$$p(\mathbf{w}|\mathcal{D}) \propto p(\mathbf{w}) \prod_{i=1}^{N} m_i!\, U^{m_i}\, \mathrm{DC}(m_i; \mathcal{D}, \boldsymbol{\gamma}) \prod_{k=1}^{m_i} p(y_k|\mathbf{x}_i, \mathbf{w}). \qquad (6)$$
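The Dirichlet-Categorical term of Eq. (5) is simply a smoothed histogram of training-set cardinalities. A small sketch (the counts and the symmetric prior $\boldsymbol{\gamma}$ are made-up values):

```python
def dirichlet_categorical(counts, gamma):
    # p(m|D) = (N_m + gamma_m) / (N + sum_m' gamma_m'):
    # posterior predictive of a categorical cardinality under a Dirichlet prior.
    total = sum(counts) + sum(gamma)
    return [(n_m + g_m) / total for n_m, g_m in zip(counts, gamma)]

# e.g. 10 training images: 2 with no labels, 3 with one label, 5 with two
probs = dirichlet_categorical(counts=[2, 3, 5], gamma=[1.0, 1.0, 1.0])
print(probs)   # a valid distribution over m = 0, 1, 2
```

With a symmetric prior, cardinalities never seen in training still receive non-zero probability, which keeps the log-likelihood in the learning objective finite.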
Learning
For simplicity, we use a point estimate for the posterior $p(\mathbf{w}|\mathcal{D})$, i.e. $p(\mathbf{w}|\mathcal{D}) = \delta(\mathbf{w} - \mathbf{w}^*)$, where $\mathbf{w}^*$ is computed using the MAP estimator, i.e. $\mathbf{w}^* = \arg\max_{\mathbf{w}} \log p(\mathbf{w}|\mathcal{D})$. Since the terms $m_i!$ and $U^{m_i}$ do not depend on $\mathbf{w}$, we have

$$\mathbf{w}^* = \arg\max_{\mathbf{w}} \sum_{i=1}^{N} \left[ \log \mathrm{DC}(m_i; \mathcal{D}, \boldsymbol{\gamma}) + \sum_{k=1}^{m_i} \log p(y_k|\mathbf{x}_i, \mathbf{w}) \right] - \lambda \|\mathbf{w}\|_2^2. \qquad (7)$$

$p(y|\mathbf{x}, \mathbf{w})$ describes a neural network with coefficients $\mathbf{w}$, learned to map the input $\mathbf{x}$ to the output (label) $y$. This function represents the state distribution of each set element over the state space. $\lambda$ is the regularisation parameter, determined by the predefined covariance parameter $\sigma^2$. This parameter is also known as the weight decay parameter and is commonly used in training neural networks.
For example, in the application to multi-label image classification, $y_\ell$ represents the existence of the $\ell$-th label in the input image instance, out of all $M$ predefined labels. In this application, we can rewrite an equivalent binary formulation for the above MAP problem as

$$\mathbf{w}^* = \arg\max_{\mathbf{w}} \sum_{i=1}^{N} \left[ \log \mathrm{DC}(m_i; \mathcal{D}, \boldsymbol{\gamma}) + \sum_{\ell=1}^{M} z_\ell^i \log p_\ell(\mathbf{x}_i, \mathbf{w}) + (1 - z_\ell^i) \log\big(1 - p_\ell(\mathbf{x}_i, \mathbf{w})\big) \right] - \lambda \|\mathbf{w}\|_2^2, \qquad (8)$$

where $z_\ell \in \{0, 1\}$ represents the existence or non-existence of the $\ell$-th label in the image $\mathbf{x}$. $p_\ell(\mathbf{x}, \mathbf{w})$ can be defined as a binary logistic regression function,

$$p_\ell(\mathbf{x}, \mathbf{w}) = \frac{1}{1 + e^{-o_\ell(\mathbf{x}, \mathbf{w})}},$$

where $o_\ell(\mathbf{x}, \mathbf{w})$ is the network's predicted output corresponding to the $\ell$-th label.
Note that $p_\ell(\mathbf{x}, \mathbf{w})$ can generally be learned using a number of existing machine learning techniques. In this paper we rely on deep CNNs to perform this task. More formally, to estimate $\mathbf{w}^*$, we compute the partial derivatives of the objective function in Eq. (8) and use standard backpropagation to learn the parameters of the deep neural network.
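Concretely, the per-sample training objective of Eq. (8) reduces to a binary cross-entropy over the $M$ labels plus the negative log-likelihood of the ground-truth cardinality and a weight decay term. A plain-Python sketch (the helper name and toy inputs are ours; in practice the gradients would come from a deep learning framework):

```python
import math

def joint_loss(logits, z, log_card_prob, weights, lam):
    # Negative MAP objective of Eq. (8) for one sample:
    # BCE over M labels + NLL of the ground-truth cardinality + L2 decay.
    # logits[l] = o_l, z[l] in {0,1}, log_card_prob = log DC(m; D, gamma).
    bce = 0.0
    for o, t in zip(logits, z):
        p = 1.0 / (1.0 + math.exp(-o))       # binary logistic regression p_l
        bce -= t * math.log(p) + (1 - t) * math.log(1.0 - p)
    return bce - log_card_prob + lam * sum(w * w for w in weights)

loss = joint_loss(logits=[2.0, -1.0, 0.5], z=[1, 0, 1],
                  log_card_prob=math.log(0.4), weights=[0.1, -0.2], lam=1e-4)
```

The cardinality term couples the two predictions through the shared weights, which is the crux of the joint formulation.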
Inference
Having learned the network parameters $\mathbf{w}^*$, for a test image $\mathbf{x}^+$, we use a MAP estimate to generate a set output, i.e.

$$\mathcal{Y}^* = \arg\max_{\mathcal{Y}} \log p(\mathcal{Y}|\mathcal{D}, \mathbf{x}^+), \qquad (9)$$

where $p(\mathcal{Y}|\mathcal{D}, \mathbf{x}^+) = \int p(\mathcal{Y}|\mathbf{x}^+, \mathbf{w})\, p(\mathbf{w}|\mathcal{D})\, d\mathbf{w}$, and $p(\mathbf{w}|\mathcal{D}) = \delta(\mathbf{w} - \mathbf{w}^*)$ as above. Therefore, the MAP estimate can be written as follows,

$$\mathcal{Y}^* = \arg\max_{m,\, \mathcal{Y}_m} \log \mathrm{DC}(m; \mathcal{D}, \boldsymbol{\gamma}) + \log m! + m \log U + \sum_{k=1}^{m} \log p(y_k|\mathbf{x}^+, \mathbf{w}^*). \qquad (10)$$

Since the unit of hypervolume $U$ in this application is unknown, we treat it as a constant hyper-parameter, estimated from the validation set of the data.
To solve the above inference problem, we define a binary variable $z_\ell$ for the existence of each label, similar to the learning process. Therefore, an equivalent formulation for Eq. (10) is

$$\mathbf{z}^* = \arg\max_{\mathbf{z} \in \{0,1\}^M} \log \mathrm{DC}\Big(\textstyle\sum_\ell z_\ell; \mathcal{D}, \boldsymbol{\gamma}\Big) + \log\Big(\big(\textstyle\sum_\ell z_\ell\big)!\Big) + \Big(\textstyle\sum_\ell z_\ell\Big) \log U + \sum_{\ell=1}^{M} z_\ell \log p_\ell + (1 - z_\ell) \log(1 - p_\ell), \qquad (11)$$

where $p_\ell = p_\ell(\mathbf{x}^+, \mathbf{w}^*)$. The above problem can be seen as a combination of a higher-order term,

$$f\Big(\textstyle\sum_\ell z_\ell\Big) = \log \mathrm{DC}\Big(\textstyle\sum_\ell z_\ell; \mathcal{D}, \boldsymbol{\gamma}\Big) + \log\Big(\big(\textstyle\sum_\ell z_\ell\big)!\Big) + \Big(\textstyle\sum_\ell z_\ell\Big) \log U, \qquad (12)$$

which accounts for the total number of selected variables, and a linear binary program, $\boldsymbol{\eta}^\top \mathbf{z}$, where, up to the constant $\sum_\ell \log(1 - p_\ell)$,

$$\eta_\ell = \log\left(\frac{p_\ell}{1 - p_\ell}\right). \qquad (13)$$

Therefore, we can rewrite it as

$$\mathbf{z}^* = \arg\max_{\mathbf{z} \in \{0,1\}^M} f\Big(\textstyle\sum_\ell z_\ell\Big) + \boldsymbol{\eta}^\top \mathbf{z}. \qquad (14)$$

Since for each specific cardinality $m$, the most likely set corresponds to the $m$ highest values of $\boldsymbol{\eta}$, the optimal solution can be found efficiently by sorting the values of $\boldsymbol{\eta}$ in descending order, here denoted by $\tilde{\boldsymbol{\eta}}$, and maximising w.r.t. $m$:

$$m^* = \arg\max_{m} f(m) + \sum_{\ell=1}^{m} \tilde{\eta}_\ell. \qquad (15)$$
Then, the optimal set $\mathcal{Y}^*$ can be obtained by solving a simple linear program:

$$\mathbf{z}^* = \arg\max_{\mathbf{z} \in \{0,1\}^M} \boldsymbol{\eta}^\top \mathbf{z} \quad \text{s.t.} \quad \sum_{\ell} z_\ell = m^*. \qquad (16)$$

Note that the optimal solution to the problem in Eq. (16) consists of exactly those variables that correspond to the $m^*$ highest values of $\boldsymbol{\eta}$.
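Since Eq. (16) amounts to picking the $m^*$ highest-scoring labels, the whole inference step of Eqs. (10)–(16) can be sketched in a few lines (toy probabilities; we set $\log U = 0$ for simplicity and assume all $p_\ell$ lie strictly between 0 and 1):

```python
import math

def map_set_inference(probs, card_logprobs, log_U=0.0):
    # Eqs. (10)-(16): scan cardinalities m over labels sorted by
    # eta_l = log(p_l / (1 - p_l)), then return the top-m* labels.
    base = sum(math.log(1.0 - p) for p in probs)      # constant term
    eta = sorted(((math.log(p / (1.0 - p)), l) for l, p in enumerate(probs)),
                 reverse=True)
    best_m, best_val = 0, card_logprobs[0] + base
    run = 0.0
    for m in range(1, min(len(probs), len(card_logprobs) - 1) + 1):
        run += eta[m - 1][0]                          # add next-best eta
        val = card_logprobs[m] + math.lgamma(m + 1) + m * log_U + base + run
        if val > best_val:
            best_m, best_val = m, val
    return sorted(l for _, l in eta[:best_m])

card = [math.log(p) for p in (0.1, 0.2, 0.6, 0.1)]   # p(m|D) for m = 0..3
print(map_set_inference([0.9, 0.2, 0.7], card))
```

One sort plus a linear scan over cardinalities yields the exact MAP set, so no beam search or other heuristic post-processing is needed.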
Experimental Results
To validate our proposed joint set learning approach, we perform experiments on the task of multi-label image classification. This is an appropriate application for our model, as its output is expected to be in the form of a set (a set of labels in this particular case) with an unknown cardinality, while the order of its elements (labels) in the output list does not carry any meaning. Moreover, we assume that the labels are derived from an i.i.d.-cluster process model. To make our work directly comparable to [Rezatofighi et al. 2017], we use the same two standard and popular benchmarks, the PASCAL VOC 2007 dataset [Everingham et al. 2007] and the Microsoft Common Objects in Context (MS COCO) dataset [Lin et al. 2014].
Implementation details. We follow the same experimental setup as used in [Rezatofighi et al. 2017, Wang et al. 2016]. Our model is built on the VGG network [Simonyan and Zisserman 2014], pre-trained on the 2012 ImageNet dataset. We adapt VGG for our purpose by modifying the last fully connected prediction layer to predict both cardinality and classification distributions according to the loss proposed in Eq. (8), i.e. DC for cardinality and binary cross-entropy for classification. We then fine-tune the entire network using the training set of these datasets, with the same train/test split as in the existing literature [Rezatofighi et al. 2017, Gong et al. 2013a, Wang et al. 2016]. To train our network, which we call JDS in the following, we use stochastic gradient descent with weight decay, momentum and dropout. The learning rate is adjusted to gradually decrease after each epoch. The network is trained on both datasets and the epoch with the lowest validation objective value is chosen for evaluation on the test set. The hyper-parameter $U$ is adjusted on the validation set. To demonstrate that joint learning helps to learn a better classifier (state distribution) as well as a better cardinality distribution, we perform an additional baseline experiment in which we replace the negative binomial (NB) distribution used in [Rezatofighi et al. 2017] with the Dirichlet-Categorical (DC) distribution from Eq. (5). Then, an independent cardinality distribution network is trained using the same network structure as the one used in [Rezatofighi et al. 2017], while modifying the final fully connected layer to predict the cardinality using the DC distribution. We fine-tune this network on the cardinality distribution, initialised with the network weights learned for the classification task of each of the reported datasets, i.e. PASCAL VOC and MS COCO. To train the cardinality CNN, we use the exact same hyper-parameters and training strategy as described above.

Evaluation protocol.
We employ the common evaluation metrics for multi-label image classification, also used in [Gong et al. 2013a, Wang et al. 2016, Rezatofighi et al. 2017]. These include the average precision, recall and F1-score² of the generated labels, calculated per-class (CP, CR and CF1) and overall (OP, OR and OF1). Since CP, CR and CF1 can be biased towards the performance of the most frequent classes, we also report the average precision, recall and F1-score of the generated labels per image/instance (IP, IR and IF1). We rely on the F1-score to rank approaches on the task of label prediction. A method with better performance has a precision/recall value closer to the perfect point shown by the blue triangle in Fig. 2. To this end, for classifiers such as BCE and Softmax, we find the optimal evaluation parameter $k$ that maximises the F1-score. For the deep set network (DS) [Rezatofighi et al. 2017] and our proposed joint set network (JDS), precision/recall does not depend on the value of $k$. Rather, one single value for precision, recall and F1-score is computed.

²The F1-score is calculated as the harmonic mean of precision and recall.
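The per-class and overall scores can be computed directly from the predicted and ground-truth label sets. A sketch of the evaluation (our own implementation; the handling of classes that are never predicted or never occur varies across papers and is an assumption here):

```python
def multilabel_scores(gt_sets, pred_sets, num_classes):
    # Per-class (CP, CR, CF1) and overall (OP, OR, OF1) precision/recall/F1.
    tp = [0] * num_classes         # true positives per class
    n_pred = [0] * num_classes     # predictions per class
    n_gt = [0] * num_classes       # ground-truth occurrences per class
    for gt, pred in zip(gt_sets, pred_sets):
        for c in pred:
            n_pred[c] += 1
            if c in gt:
                tp[c] += 1
        for c in gt:
            n_gt[c] += 1
    f1 = lambda p, r: 2 * p * r / (p + r) if p + r else 0.0
    OP, OR = sum(tp) / sum(n_pred), sum(tp) / sum(n_gt)   # pooled counts
    CP = sum(t / n for t, n in zip(tp, n_pred) if n) / num_classes
    CR = sum(t / n for t, n in zip(tp, n_gt) if n) / num_classes
    return {"CP": CP, "CR": CR, "CF1": f1(CP, CR),
            "OP": OP, "OR": OR, "OF1": f1(OP, OR)}

scores = multilabel_scores([{0, 1}, {1}], [{0}, {1, 2}], num_classes=3)
```

Note the asymmetry: the overall scores pool true-positive counts across classes (micro-averaging), while the per-class scores average the per-class ratios (macro-averaging), which is why rare classes affect them differently.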
PASCAL VOC 2007.
Classifier  Eval.  CP  CR  CF1  OP  OR  OF1  IP  IR  IF1 

Softmax  k=(1)  
BCE  k=(1)  
DS (BCENB) [Rezatofighi et al.2017] 
k=  
DS (BCEDC)  k=  
JDS (BCEDC)  k=  78.7  81.5  85.1 
We first test our approach on the PASCAL Visual Object Classes benchmark [Everingham et al. 2007], which is one of the most widely used datasets for detection and classification. This dataset consists of images with a roughly 50/50 split for training and test, where objects from 20 predefined categories have been annotated by bounding boxes. Each image contains a small number of unique objects.

We first investigate whether joint learning improves the performance of both the cardinality estimate and the classifier. Fig. 2 shows the precision/recall curves for the classification scores when the classifier is trained solely using the binary cross-entropy (BCE) loss (red solid line) and when it is trained using the same loss jointly with the cardinality term (Joint BCE). We also evaluate the precision/recall values when the ground truth cardinality is provided. The results confirm our claim that joint learning indeed improves the classification performance. We also calculate the mean absolute error of the cardinality estimation when the cardinality term using the DC loss is learned jointly, and when it is learned independently as proposed in [Rezatofighi et al. 2017]; on PASCAL VOC, the mean absolute cardinality error of our joint prediction is lower than the error obtained when the cardinality is learned independently. We compare the performance of our proposed joint deep set network, i.e. JDS (BCE-DC), with the softmax and BCE classifiers with the best $k$ value, as well as with the deep set network [Rezatofighi et al. 2017] whose classifier is binary cross-entropy and whose cardinality loss is negative binomial, i.e. DS (BCE-NB). In addition, Table 1 reports the results for the deep set network when the cardinality loss is replaced by our proposed Dirichlet-Categorical loss, i.e. DS (BCE-DC). The results show that we outperform the other approaches w.r.t. all three types of F1-scores. In addition, our joint formulation requires only a single training step to obtain the final model, while the deep set network learns two VGG networks to generate the output sets.
Microsoft COCO.
Classifier  Eval.  CP  CR  CF1  OP  OR  OF1  IP  IR  IF1 

Softmax 
k=(3)  
BCE  k=(3)  
CNNRNN [Wang et al.2016]  k=(3)  
DS (SoftmaxNB) [Rezatofighi et al.2017]  k=  
DS (BCENB) [Rezatofighi et al.2017]  k=  
DS (BCEDC)  k=  
JDS (BCEDC)  k=  65.5  70.7  75.6  

The MS COCO [Lin et al. 2014] benchmark is another popular benchmark for image captioning, recognition, and segmentation. The dataset includes around 123K images, each labelled with per-instance segmentation masks of 80 classes. The number of unique objects per image varies; some training images do not contain any of the 80 classes at all, and only a handful of images carry a large number of tags. Most images contain between one and three labels. We use the identical training and validation split as [Rezatofighi et al. 2017], and the remaining images as test data.
The classification results on this dataset are reported in Table 2. The results once again show that our approach consistently outperforms our baselines and the other methods as measured by the F1-score. Due to this improvement, we achieve state-of-the-art results on this dataset as well. Some examples of label prediction using our joint deep set network, and a comparison with other deep set networks, are shown in Fig. 3. The results show that our joint learning can simultaneously reduce the cardinality and classification errors in these examples.
Conclusion
We proposed a framework to jointly learn and predict a set's cardinality and state distributions by modelling both distributions using the same set of weights. This approach not only significantly reduces the number of learnable parameters, but also helps to model both distributions more accurately. We have demonstrated the effectiveness of this approach on multi-label image classification, outperforming the previous state of the art on standard datasets. The main limitation of our framework is that we do not include the complexity of permutation invariance of sets in the learning step. Therefore, our method is only applicable to set problems that do not rely on permutation invariance during training, such as image tagging. In future work, we plan to overcome this limitation by incorporating permutation variables into the training procedure. Another potential avenue could be to exploit the Bayesian nature of the model to include uncertainty, as opposed to relying on the MAP estimate alone.
Acknowledgments. This research was supported by the Australian Research Council through the Centre of Excellence in Robotic Vision, CE140100016, and through Laureate Fellowship FL130100102 to IDR.
References

[Adams, Murray, and MacKay 2009] Adams, R. P.; Murray, I.; and MacKay, D. J. 2009. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In ICML, 9–16.
[Baddeley, Bárány, and Schneider 2007] Baddeley, A.; Bárány, I.; and Schneider, R. 2007. Spatial point processes and their applications. Stochastic Geometry 1–75.
[Blei, Ng, and Jordan 2003] Blei, D. M.; Ng, A. Y.; and Jordan, M. I. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3(Jan):993–1022.
[Chiu et al. 2013] Chiu, S. N.; Stoyan, D.; Kendall, W. S.; and Mecke, J. 2013. Stochastic geometry and its applications. John Wiley & Sons.
[Daley and Vere-Jones 2007] Daley, D. J., and Vere-Jones, D. 2007. An introduction to the theory of point processes: volume II: general theory and structure. Springer Science & Business Media.
[Everingham et al. 2007] Everingham, M.; Van Gool, L.; Williams, C. K. I.; Winn, J.; and Zisserman, A. 2007. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.
[Gong et al. 2013a] Gong, Y.; Jia, Y.; Leung, T.; Toshev, A.; and Ioffe, S. 2013a. Deep convolutional ranking for multi-label image annotation. arXiv preprint arXiv:1312.4894.
[Hannah, Blei, and Powell 2011] Hannah, L. A.; Blei, D. M.; and Powell, W. B. 2011. Dirichlet process mixtures of generalized linear models. Journal of Machine Learning Research 12(Jun):1923–1953.
[He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR.
[Hinton et al. 2012] Hinton, G.; Deng, L.; Yu, D.; Dahl, G. E.; Mohamed, A.-r.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T. N.; et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82–97.
[Johnson, Karpathy, and Fei-Fei 2016] Johnson, J.; Karpathy, A.; and Fei-Fei, L. 2016. DenseCap: Fully convolutional localization networks for dense captioning. In CVPR.
[Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks. In NIPS, 1097–1105.
[Lin et al. 2014] Lin, T.-Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C. L.; and Dollár, P. 2014. Microsoft COCO: Common objects in context. arXiv preprint arXiv:1405.0312.
[Mahler 2007] Mahler, R. P. 2007. Statistical multisource-multitarget information fusion, volume 685. Artech House Boston.
[Mnih et al. 2013] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
[Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
[Moller and Waagepetersen 2003] Moller, J., and Waagepetersen, R. P. 2003. Statistical inference and simulation for spatial point processes. CRC Press.
[Muandet et al. 2012] Muandet, K.; Fukumizu, K.; Dinuzzo, F.; and Schölkopf, B. 2012. Learning from distributions via support measure machines. In NIPS.
[Oliva, Póczos, and Schneider 2013] Oliva, J.; Póczos, B.; and Schneider, J. 2013. Distribution to distribution regression. In ICML, 1049–1057.
[Papandreou et al. 2015] Papandreou, G.; Chen, L.-C.; Murphy, K. P.; and Yuille, A. L. 2015. Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. In ICCV.
[Rezatofighi et al. 2017] Rezatofighi, S. H.; Kumar BG, V.; Milan, A.; Abbasnejad, E.; Dick, A.; Reid, I.; et al. 2017. DeepSetNet: Predicting sets with deep neural networks. In ICCV.
[Russakovsky et al. 2015] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; and Fei-Fei, L. 2015. ImageNet large scale visual recognition challenge. IJCV 115(3):211–252.
[Simonyan and Zisserman 2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.
[Szegedy et al. 2014] Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2014. Going deeper with convolutions. CoRR abs/1409.4842.
[Tran et al. 2016] Tran, N.-Q.; Vo, B.-N.; Phung, D.; and Vo, B.-T. 2016. Clustering for point pattern data. In ICPR, 3174–3179.
[Vinyals, Bengio, and Kudlur 2015] Vinyals, O.; Bengio, S.; and Kudlur, M. 2015. Order matters: Sequence to sequence for sets. ICLR.
[Vinyals, Fortunato, and Jaitly 2015] Vinyals, O.; Fortunato, M.; and Jaitly, N. 2015. Pointer networks. In NIPS, 2692–2700.
[Vo et al. 2016] Vo, B.-N.; Tran, N.-Q.; Phung, D.; and Vo, B.-T. 2016. Model-based classification and novelty detection for point pattern data. In ICPR.
[Vo et al. 2017] Vo, B.-N.; Phung, D.; Tran, Q. N.; and Vo, B.-T. 2017. Model-based multiple instance learning. arXiv preprint arXiv:1703.02155.
[Wang et al. 2016] Wang, J.; Yang, Y.; Mao, J.; Huang, Z.; Huang, C.; and Xu, W. 2016. CNN-RNN: A unified framework for multi-label image classification. In CVPR.
[Wei et al. 2014] Wei, Y.; Xia, W.; Huang, J.; Ni, B.; Dong, J.; Zhao, Y.; and Yan, S. 2014. CNN: Single-label to multi-label. CoRR abs/1406.5726.
[Zaheer et al. 2017] Zaheer, M.; Kottur, S.; Ravanbakhsh, S.; Póczos, B.; Salakhutdinov, R.; and Smola, A. J. 2017. Deep sets. In NIPS.