Kernel Mean Embedding of Instance-wise Predictions in Multiple Instance Regression

04/24/2019 · by Thomas Uriot et al. · Imperial College London

In this paper, we propose an extension to an existing algorithm (instance-MIR) which tackles the multiple instance regression (MIR) problem, also known as distribution regression. The MIR setting arises when the data is a collection of bags, where each bag consists of several instances which all correspond to the same, unique real-valued label. The goal of a MIR algorithm is to find a mapping from the instances of an unseen bag to its target value. The instance-MIR algorithm treats all the instances separately and maps each instance to a label. The final bag label is then taken as the mean or the median of the predictions for that given bag. While conceptually simple, taking a single statistic to summarize the distribution of the predictions in each bag is a limitation. In spite of this performance bottleneck, the instance-MIR algorithm has been shown to be competitive with the current state-of-the-art methods. We address the aforementioned issue by computing the kernel mean embeddings of the distributions of the predicted labels, for each bag, and learning a regressor from these embeddings to the bag label. We test our algorithm (instance-kme-MIR) on five real-world datasets and obtain better results than the baseline instance-MIR across all the datasets, while achieving state-of-the-art results on two of the datasets.


1 Introduction

Multiple instance learning (MIL) is a setting which falls under the supervised learning paradigm. Within the MIL framework, there exist two different learning tasks: multiple instance classification (MIC) [1] and multiple instance regression (MIR) [4, 11]. The former has been extensively studied in the literature while the latter has been underrepresented. This could be due to the fact that many of the data sources studied within the MIL framework are images and text, which correspond to classification tasks. The MIC problem generally consists of classifying bags into positive or negative examples, where negative bags contain only negative instances and positive bags contain at least one positive instance. A multitude of applications are covered by the MIC framework: it has been applied to medical imaging in a weakly supervised setting [20, 21], where each image is taken as a bag and sub-regions of the image are instances, to image categorization [3] and retrieval [23, 22], and to analyzing videos [12], where the video is treated as the bag and the frames are the instances.

On the other hand, the MIR problem, where bag labels are real-valued, has been much less prevalent in the literature. In a regression setting, as opposed to classification, one cannot simply identify a single positive instance. Instead, one needs to estimate the contribution of each of the instances towards the bag label. The MIR problem was first introduced in the context of predicting drug activity levels [4], and the first proposed MIR algorithm relied on the assumption that the bag's label can be fully explained by a single prime instance (prime-MIR) [11]. However, this is a simplistic assumption, as it throws away a lot of information about the distribution (e.g., variance, skewness). Instead of assuming that a single instance is responsible for the bag's label, the MIR problem has been tackled using a weighted linear combination of the instances [16], or a prime cluster of instances (cluster-MIR) [17]. Other works have looked at first efficiently mapping the instances in each bag to a new embedding space, and then training a regressor on the new embedded feature space. For instance, one can transform the MIR problem into a regular supervised learning problem by mapping each bag to a feature space which is characterized by a similarity measure between a bag and an instance [2]. The resulting embedding of a bag in the new feature space represents how similar a bag is to various instances from the training set. A drawback of this approach is that the embedding space for each bag can be high-dimensional when the number of instances in the training set is large, producing many redundant and possibly uninformative features.

In this paper, we use a similar approach and compute the kernel mean embeddings for each bag [9]. Kernel mean embedding for distribution regression has been applied to various real-world problems such as analyzing the 2016 US presidential election [6] and estimating aerosol levels in the atmosphere [13]. Intuitively, kernel mean embedding measures how similar each bag is to all the other bags from the training set. In this paper, as opposed to previous works, we do not compute the kernel mean embeddings directly on the input features (i.e., on the instances) but on the predictions made by a previous learning algorithm (e.g., a neural network). This insight comes from the fact that a simple baseline algorithm (instance-MIR) performed surprisingly well on several datasets when the regressor was a neural network with a large hidden layer [14]. The instance-MIR algorithm essentially ignores the fact that we are in a distribution regression framework and treats each instance as a separate observation, thereby yielding a unique prediction for each instance. The final bag label is taken to be the mean or the median of the predictions for that given bag. However, using a point estimate in the original prediction space is a performance bottleneck. In this paper, we propose a novel algorithm (instance-kme-MIR), which leverages the representational power of the instance-MIR algorithm equipped with a neural network and alleviates the aforementioned issue by mapping our predictions into a high- or infinite-dimensional space, characterized by a kernel function. We test our approach on 5 remotely sensed real-world datasets.

2 Related Work

The datasets we use to test our algorithm stem from remotely sensed data (http://www.dabi.temple.edu/~vucetic/MIR.html, https://harvist.jpl.nasa.gov/papers.shtml), and have previously been described [14, 18] and studied as a distribution regression problem [19, 18, 14]. This allows us to compare the performance of our approach with the baseline instance-MIR and the current state-of-the-art. The first application (3 of the 5 datasets) consists of predicting aerosol optical depth (AOD); aerosols are fine airborne solid particles or liquid droplets in air that both reflect and absorb incoming solar radiation. The second application (2 of the 5 datasets) is the prediction of county-level crop yields [16] (wheat and corn) in Kansas between 2001 and 2005. These two applications can naturally be framed as a multiple instance regression problem. Indeed, in both applications, satellites gather noisy measurements due to the intrinsic variability within the sensors and the properties of the targeted area on Earth (e.g., surface and atmospheric effects). For the AOD prediction task, aerosols have been found to have a very small spatial variability over distances up to 100 km [7]. For the crop data, we can reasonably assume that the yields are similar across a county and thus consider the bag label as the aggregated yield over the entire county.

The first study to investigate estimating AOD levels within a MIR setting proposed an iterative method (pruning-MIR), which prunes outlying instances from each bag and then proceeds in a similar fashion to instance-MIR [19]. The main drawback of this approach is that it is not obvious what the pruning threshold should be, and we may thus get rid of informative instances in the process. In a subsequent work, the authors investigated a probabilistic framework (EM-MIR) by fitting a mixture model and using the expectation-maximization (EM) algorithm to learn the mixing and distribution parameters [18]. The current state-of-the-art algorithm (attention-MIR) on the AOD datasets was obtained by treating each bag as a set (i.e., an unordered sequence) of instances [14]. To do so, the authors implemented an order-invariant operation characterized by a content-based attention mechanism [15], which attends to the instances a selected number of times. Finally, the problem of estimating AOD levels has also been tackled using kernel mean embedding directly on the input features (i.e., the instances) [13], where the authors show that performance is robust to the kernel choice but that the hyperparameter values of the kernels are of primary importance. In this paper, however, we compute the kernel mean embeddings of the distributions of the predicted labels made by a neural network. In order to find the kernel parameters in a principled way, a Bayesian kernel mean embedding model with a Gaussian process prior has been proposed, from which one can obtain a closed-form marginal pseudolikelihood [5]. This marginal likelihood can then be optimized in order to find the kernel parameters.

3 Background

3.1 Multiple Instance Regression

In the MIR problem, our observed dataset is $\{(X_b, y_b)\}_{b=1}^{B}$, where $B$ is the number of bags, $y_b \in \mathbb{R}$ is the label of bag $b$, $x_{b,i}$ is the $i$-th instance of bag $b$ and $n_b$ is the number of instances in bag $b$. Note that $X_b = \{x_{b,1}, \ldots, x_{b,n_b}\}$, and $X_b$ is a subset of $\mathbb{R}^{d}$, where $d$ is the number of features in each instance. The number of features must be the same for all the instances, but the number of instances can vary within each bag.

We want to learn the best mapping $f : X_b \mapsto y_b$, for $b = 1, \ldots, B$. By best mapping we mean the function which minimizes the mean squared error (MSE) on bags unseen during training (e.g., on the validation set). Formally, we seek $f^*$ such that

$$f^* = \operatorname*{arg\,min}_{f \in \mathcal{H}} \; \frac{1}{B_{\text{val}}} \sum_{b=1}^{B_{\text{val}}} \big( f(X_b) - y_b \big)^2 \qquad (1)$$

from the validation data $\{(X_b, y_b)\}_{b=1}^{B_{\text{val}}}$, where $\mathcal{H}$ is the hypothesis space of functions under consideration.

The two main challenges that the multiple instance regression problem poses are to find which instances are predictive of the bag's label and to efficiently summarize the information from the instances within each bag. However, the instance-MIR baseline algorithm, which we describe next, does not attempt to solve the multiple instance regression problem by addressing these two challenges. Instead, it simply treats each instance independently and fits a regression model to all the instances separately.

3.2 Instance-MIR Algorithm

As mentioned, the instance-MIR algorithm makes predictions on all the instances before taking the mean or the median of the predictions for each bag as the final prediction. This means that during training, all the instances have the same weight and thus contribute equally to the loss function.

Formally, our dataset is formed by pairs of instances and bag labels, denoted $\{(x_{b,i}, y_b) : b = 1, \ldots, B,\; i = 1, \ldots, n_b\}$. The final label prediction on an unseen bag $X_b$ can be simply calculated as

$$\hat{y}_b = \frac{1}{n_b} \sum_{i=1}^{n_b} \hat{y}_{b,i} \quad \text{(or the median of the } \hat{y}_{b,i}\text{)},$$

where $\hat{y}_{b,i} = f(x_{b,i})$ is the predicted label corresponding to the $i$-th instance in bag $b$. Empirically, this method has been shown to be competitive [10], even though it requires models with high complexity in order to be able to effectively map many different noisy instances to the same target value. Thus, it is appropriate to take $f$ to be a neural network with a large number of hidden units [14], as opposed to a small number [18].
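To make the baseline concrete, the following is a minimal sketch of instance-MIR in Python. The bag representation (a list of (n_b, d) arrays), the use of scikit-learn's MLPRegressor, and the hidden-layer width are illustrative assumptions of this sketch, not the exact setup of [14].

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def instance_mir(train_bags, train_labels, test_bags, aggregate="mean"):
    """Baseline instance-MIR: fit on instances, then aggregate predictions per bag.

    train_bags / test_bags: lists of (n_b, d) arrays, one array per bag.
    train_labels: array of shape (B,), one real-valued label per bag.
    """
    # Flatten the bags: every instance inherits its bag's label.
    X = np.vstack(train_bags)
    y = np.concatenate([np.full(len(bag), lab)
                        for bag, lab in zip(train_bags, train_labels)])

    # A wide single-hidden-layer network, in the spirit of the instance-MIR baseline.
    model = MLPRegressor(hidden_layer_sizes=(512,), max_iter=1000).fit(X, y)

    # Predict per instance, then summarize each test bag with a single statistic.
    agg = np.mean if aggregate == "mean" else np.median
    return np.array([agg(model.predict(bag)) for bag in test_bags])
```

For example, `instance_mir(train_bags, train_labels, test_bags, aggregate="median")` corresponds to the Instance-MIR (median) variant reported in Table 1.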

3.3 Kernel Mean Embedding

In this subsection, we briefly describe kernel mean embedding and its application to distribution regression, where the goal is to compute the kernel mean embedding of each bag. We assume that the instances in each bag, $X_b = \{x_{b,i}\}_{i=1}^{n_b}$, are i.i.d. samples from some unobserved distribution $P_b$, for $b = 1, \ldots, B$. The idea is to adopt a two-stage procedure by first representing each set of samples (i.e., each bag) by its corresponding kernel mean embedding, and then training a kernel ridge regression on those embeddings [13].

Formally, let $\mathcal{H}_k$ be a reproducing kernel Hilbert space (RKHS), which is a potentially infinite-dimensional space of functions $g : \mathcal{X} \to \mathbb{R}$, and let $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a reproducing kernel function of $\mathcal{H}_k$. Then for $g \in \mathcal{H}_k$, we can evaluate $g$ at $x \in \mathcal{X}$ as an inner product

$$g(x) = \big\langle g, k(\cdot, x) \big\rangle_{\mathcal{H}_k}$$

(reproducing kernel property). Then, for a probability measure $P$ on $\mathcal{X}$, we can define its kernel mean embedding as

$$\mu_P = \int_{\mathcal{X}} k(\cdot, x)\, \mathrm{d}P(x) \in \mathcal{H}_k. \qquad (2)$$

For $\mu_P$ to be well-defined, we simply require that the norm of $\mu_P$ is finite, and so we want $k$ such that $\mathbb{E}_{x \sim P}\big[\sqrt{k(x, x)}\big] < \infty$. This is always true for kernel functions that are bounded (e.g., Gaussian RBF, inverse multiquadric) but may be violated for unbounded ones (e.g., polynomial) [13]. In fact, it has been shown that the kernel mean embedding approach to distribution regression does not yield satisfying results when using a polynomial kernel, due to the aforementioned violation [13].

However, as mentioned, we do not have access to $P_b$ but only observe i.i.d. samples drawn from it. Instead, we compute the empirical mean estimator of $\mu_{P_b}$, given by

$$\hat{\mu}_{P_b} = \frac{1}{n_b} \sum_{i=1}^{n_b} k(\cdot, x_{b,i}). \qquad (3)$$
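Since $\hat{\mu}_{P_b}$ is a function in the RKHS, in practice we only ever need its evaluations and inner products. As a small illustration, the sketch below builds the empirical embedding of equation (3) as a callable that averages kernel evaluations over the instances of a bag; the Gaussian RBF instance kernel and the bandwidth default are placeholder choices of this sketch.

```python
import numpy as np

def rbf(x, Xb, sigma=1.0):
    """Gaussian RBF kernel k(x, x_{b,i}) between a query point x and every instance in Xb."""
    return np.exp(-np.sum((Xb - x) ** 2, axis=-1) / (2.0 * sigma ** 2))

def empirical_embedding(Xb, sigma=1.0):
    """Equation (3): mu_hat_{P_b}(.) = (1 / n_b) * sum_i k(., x_{b,i})."""
    return lambda x: rbf(x, Xb, sigma).mean()
```

For example, `empirical_embedding(bag)(x)` evaluates $\hat{\mu}_{P_b}$ at a single query point $x$; in the regression step we only use inner products between such embeddings, which is what equations (6)-(7) below compute.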

3.4 Kernel Ridge Regression

In kernel ridge regression (KRR), we seek to find the set of parameters $\alpha \in \mathbb{R}^{B}$ such that

$$\hat{\alpha} = \operatorname*{arg\,min}_{\alpha} \; \| y - K\alpha \|_2^2 + \lambda\, \alpha^{\top} K \alpha, \qquad (4)$$

where $K$ is the kernelized Gram matrix of the dataset, and $\lambda$ is the hyperparameter controlling the amount of weight decay (i.e., regularization) on the parameters $\alpha$. In the case of KRR applied to kernel mean embedding, we have

$$K_{bj} = \mathcal{K}\big(\hat{\mu}_{P_b}, \hat{\mu}_{P_j}\big), \qquad (5)$$

where $\mathcal{K}$ is the KRR kernel and $K \in \mathbb{R}^{B \times B}$ ($B$ is the number of bags in the training set). In this paper, we take $\mathcal{K}$ to be the linear kernel, as it simplifies the computation and has been shown to yield competitive results when compared to non-linear kernels [13]. Thus, we have that

$$\mathcal{K}\big(\hat{\mu}_{P_b}, \hat{\mu}_{P_j}\big) = \big\langle \hat{\mu}_{P_b}, \hat{\mu}_{P_j} \big\rangle_{\mathcal{H}_k} \qquad (6)$$
$$= \frac{1}{n_b\, n_j} \sum_{i=1}^{n_b} \sum_{l=1}^{n_j} k\big(x_{b,i}, x_{j,l}\big), \qquad (7)$$

where $x_{b,i}$ is the $i$-th instance of bag $b$ and $x_{j,l}$ is the $l$-th instance of bag $j$. In order to make predictions on bags not seen during training, we simply compute

$$\hat{y}_{\text{val}} = K_{\text{val}}\, \hat{\alpha}, \qquad (8)$$

where $\hat{\alpha} = (K + \lambda I)^{-1} y$ is obtained by differentiating (4) with respect to $\alpha$, equating to 0, and solving for $\alpha$. Note that, as mentioned in subsection 3.1, $B$ is the number of bags in the training set and $B_{\text{val}}$ is the number of unseen bags (e.g., in the validation or test set), and so $K_{\text{val}} \in \mathbb{R}^{B_{\text{val}} \times B}$.
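As a worked illustration of equations (6)-(8), the sketch below builds the Gram matrix of linear-kernel inner products between empirical embeddings (with a Gaussian RBF as the instance-level kernel $k$) and then applies the closed-form KRR solution; the function names and the default values of $\sigma$ and $\lambda$ are placeholders of ours.

```python
import numpy as np

def kme_gram(bags_a, bags_b, sigma=1.0):
    """Equation (7): entry (b, j) is (1 / (n_b * n_j)) * sum_i sum_l k(x_{b,i}, x_{j,l}),
    with a Gaussian RBF instance-level kernel k."""
    def dot(Xb, Xj):
        sq = ((Xb[:, None, :] - Xj[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2)).mean()
    return np.array([[dot(Xb, Xj) for Xj in bags_b] for Xb in bags_a])

def krr_fit_predict(K, y, K_val, lam=1e-3):
    """Equations (4) and (8): alpha_hat = (K + lam * I)^{-1} y, then y_val = K_val @ alpha_hat."""
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), np.asarray(y, dtype=float))
    return K_val @ alpha
```

Here `K = kme_gram(train_bags, train_bags, sigma)` and `K_val = kme_gram(val_bags, train_bags, sigma)`, matching the shapes $B \times B$ and $B_{\text{val}} \times B$ noted above.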

4 Instance-kme-MIR Algorithm

In this section, we describe our novel algorithm (instance-kme-MIR) and discuss our choices of hyperparameter values. We emphasise that the novelty in this paper is to compute the kernel mean embeddings on the predictions made by a previous learning algorithm, as opposed to previous works [13, 8, 6, 5], where the authors compute the kernel mean embeddings directly on the input features. Our algorithm can be seen as an extension of instance-MIR, where we take advantage of the representational power of neural networks (Part 1 of our algorithm) and address its performance bottleneck by computing the kernel mean embeddings on the predictions (Part 2 of our algorithm).

Inputs: (training set, validation set) = $\big(\{(X_b, y_b)\}_{b=1}^{B},\; \{(X_v, y_v)\}_{v=1}^{B_{\text{val}}}\big)$
Outputs: Bag-level predictions $\hat{y}_{\text{val}}$ on the validation set

Part 1: Out-of-fold stacking of predictions made from instance-MIR

1: Set $K$ = number_folds
2: Initialize an array $A$ with $\sum_{b=1}^{B} n_b$ elements (i.e., the number of training instances)
3: Choose a learning algorithm $f$
4: Shuffle the bags in the training data
5: Partition the training data into $K$ folds $D_1, \ldots, D_K$, with an equal number of bags in each fold
6: for $k = 1, \ldots, K$ do
7:     Set counter = 0
8:     Set $D_{\text{train}} = \bigcup_{k' \neq k} D_{k'}$ (take all the bags except those in fold $k$)
9:     Set $D_{\text{test}} = D_k$ (take all the bags in fold $k$)
10:     Learn $f_k : x_{b,i} \mapsto y_b$ on the instances of $D_{\text{train}}$ (see equation (1))
11:     for each bag $(X_b, y_b) \in D_{\text{test}}$ do
12:         for $i = 1, \ldots, n_b$ do
13:             Predict $\hat{y}_{b,i} = f_k(x_{b,i})$
14:             Set $A[\text{counter}] = \hat{y}_{b,i}$ (build a stacked training set for Part 2)
15:             counter += 1
        end for
    end for
end for
16: Return $A$

Part 2: Kernel mean embedding and KRR on the stacked dataset A

1: Choose the weight decay value $\lambda$
2: Choose the kernel function $k$ (and its parameter values)
3: Compute $K_{bj} = \big\langle \hat{\mu}_{\hat{P}_b}, \hat{\mu}_{\hat{P}_j} \big\rangle_{\mathcal{H}_k}$ from the stacked predictions in $A$, for $b, j = 1, \ldots, B$ (see equations (6)-(7))
4: Compute $\hat{\alpha} = (K + \lambda I)^{-1} y$, where $y = (y_1, \ldots, y_B)^{\top}$
5: Return $\hat{\alpha}$

Part 3: Predict bag labels on the validation set

1: for each bag $X_v$ in the validation set, $v = 1, \ldots, B_{\text{val}}$, do
2:     for $i = 1, \ldots, n_v$ do
3:         Predict $\hat{y}_{v,i} = f(x_{v,i})$
    end for
end for
4: Compute $(K_{\text{val}})_{vb} = \big\langle \hat{\mu}_{\hat{P}_v}, \hat{\mu}_{\hat{P}_b} \big\rangle_{\mathcal{H}_k}$, for $v = 1, \ldots, B_{\text{val}}$ and $b = 1, \ldots, B$
5: Compute $\hat{y}_{\text{val}} = K_{\text{val}}\, \hat{\alpha}$, where $K_{\text{val}} \in \mathbb{R}^{B_{\text{val}} \times B}$ (see equation (8))
6: Return $\hat{y}_{\text{val}}$ (i.e., the bag-level predictions on the validation set)
Algorithm 1 Instance-kme-MIR Algorithm

In our implementation (https://github.com/pinouche/Instance-kme-MIR), we choose a large number of folds $K$ and take $f$ to be a single-layer neural network, as this was shown to yield good results for the instance-MIR algorithm [14]. We purposefully set the number of folds to be large, so that in Part 1 of our algorithm we still train on $\tfrac{K-1}{K}$ of the training set. It thus makes sense to use the same hyperparameter values for the neural network when comparing the baseline instance-MIR to instance-kme-MIR. For Part 2, we experimented with two different kernels (Gaussian RBF and inverse multiquadric).
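For concreteness, here is a compact sketch of the full pipeline in Python, following the structure of Algorithm 1. The fold count, network width, bandwidth $\sigma$, regularization $\lambda$, and the choice to refit the instance-level model on all training instances before predicting on validation bags are assumptions of this sketch, not necessarily the exact choices of our released implementation.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def kme_dot(p, q, sigma=1.0):
    """Embedding inner product (eq. (7)) between two bags of scalar predictions,
    using a Gaussian RBF kernel on the predicted labels."""
    return np.exp(-(p[:, None] - q[None, :]) ** 2 / (2.0 * sigma ** 2)).mean()

def fit_instance_model(bags, labels):
    """Instance-MIR style fit: every instance inherits its bag's label."""
    X = np.vstack(bags)
    y = np.concatenate([np.full(len(b), l) for b, l in zip(bags, labels)])
    return MLPRegressor(hidden_layer_sizes=(512,), max_iter=1000).fit(X, y)

def instance_kme_mir(train_bags, train_labels, val_bags, n_folds=10, sigma=1.0, lam=1e-3):
    y_train = np.asarray(train_labels, dtype=float)
    B = len(train_bags)

    # Part 1: out-of-fold stacking of instance-wise predictions.
    oof = [None] * B
    for tr, te in KFold(n_splits=n_folds, shuffle=True).split(np.arange(B)):
        f_k = fit_instance_model([train_bags[b] for b in tr], y_train[tr])
        for b in te:
            oof[b] = f_k.predict(train_bags[b])

    # Part 2: KME Gram matrix of the predicted-label distributions, then closed-form KRR.
    K = np.array([[kme_dot(oof[b], oof[j], sigma) for j in range(B)] for b in range(B)])
    alpha = np.linalg.solve(K + lam * np.eye(B), y_train)

    # Part 3: predict instances of unseen bags (here with a model refit on all training
    # instances, an assumption of this sketch), embed them, and apply KRR (eq. (8)).
    f_full = fit_instance_model(train_bags, y_train)
    val_preds = [f_full.predict(bag) for bag in val_bags]
    K_val = np.array([[kme_dot(p, oof[j], sigma) for j in range(B)] for p in val_preds])
    return K_val @ alpha
```

Note that the bags passed to `kme_dot` are one-dimensional arrays of predicted labels, which is what makes the embedding step cheap compared to embedding the raw instances.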

5 Evaluation

5.1 Training Protocol

In order to fairly compare our algorithm to the current state-of-the-art [14, 18], we evaluate its performance using the same training and evaluation protocol. The protocol consists of a 5-fold cross-validation, where the bags in the training set are randomly split into 5 folds, out of which 4 folds are used in training and 1 fold serves as the validation set. In turn, each of the 5 folds serves as the validation set and the 4 remaining folds as the training set. The cross-validation is repeated 10 times in order to eliminate the randomness involved in choosing the folds. We use the root mean squared error (RMSE) to evaluate the performance and report our results, shown in Table 1, on 5 real-world datasets. While the baseline instance-MIR was already evaluated [14], we re-implement it on the 3 AOD datasets, with different hyperparameter values, and thus obtain distinct results. The validation loss reported in Table 1 below is the average loss over the 50 evaluations (10 iterations of 5-fold cross-validation).
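For reference, a sketch of this evaluation loop is given below; the bag representation follows the earlier sketches, `predict_fn` stands for any of the compared methods (e.g., the `instance_kme_mir` function sketched in Section 4), and the random seeding is our own illustrative choice.

```python
import numpy as np
from sklearn.model_selection import KFold

def repeated_cv_rmse(bags, labels, predict_fn, n_repeats=10, n_splits=5):
    """Average bag-level RMSE over 10 repetitions of 5-fold cross-validation."""
    labels = np.asarray(labels, dtype=float)
    rmses = []
    for rep in range(n_repeats):
        folds = KFold(n_splits=n_splits, shuffle=True, random_state=rep)
        for tr, va in folds.split(np.arange(len(bags))):
            preds = predict_fn([bags[b] for b in tr], labels[tr], [bags[b] for b in va])
            rmses.append(np.sqrt(np.mean((preds - labels[va]) ** 2)))
    return float(np.mean(rmses))  # mean over the 50 train/validation splits
```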

5.2 Results

In Table 1, we display the results for 4 algorithms: the baseline instance-MIR (described in subsection 3.2), attention-MIR [14], EM-MIR [18], and our novel algorithm (instance-kme-MIR) with two different kernels, denoted $k_1$ and $k_2$ (the Gaussian RBF and inverse multiquadric kernels mentioned in Section 4).

Note that prior to our implementation of instance-kme-MIR, the state-of-the-art results on the 5 datasets were shared between the 3 other algorithms [14]. Now, as can be seen in Table 1, attention-MIR achieves the best results on the AOD datasets while instance-kme-MIR yields the best results on the crop datasets.

We experimented with several values for the weight decay $\lambda$ and the kernel parameter, over grids with a constant increment for both hyperparameters. The results in Table 1 are reported for the best hyperparameter values. We found that while extreme hyperparameter values negatively impacted the performance of our algorithm, most values yielded similarly good results, which indicates that our algorithm is robust to the choice of hyperparameter values.

Datasets
Algorithms MODIS MISR1 MISR2 WHEAT CORN
Instance-MIR (mean) 10.4 9.02 7.61 4.96 24.57
Instance-MIR (median) 10.4 8.89 7.50 5.00 24.72
Instance-kme-MIR ($k_1$) 10.1 8.68 7.28 4.91 24.40
Instance-kme-MIR ($k_2$) 10.1 8.70 7.38 4.90 24.51
EM-MIR [18] 9.5 7.5 7.3 4.9 26.8
Attention-MIR [14] 9.05 7.32 6.95 5.24 27.00
Table 1: The loss for the 3 AOD datasets (MODIS, MISR1, MISR2) is the RMSE × 100, and for the 2 crop datasets (WHEAT, CORN) the loss is the RMSE.

Instance-MIR (median) refers to the instance-MIR algorithm where the median, instead of the mean, is used to compute the final prediction for each bag, as described in subsection 3.2. We can see that there does not seem to be an advantage to using the mean or the median, as both methods achieve very similar results. On the other hand, our algorithm consistently outperforms the baseline instance-MIR. However, note that since our algorithm makes use of the predictions made by instance-MIR (in Part 2 of Algorithm 1), we can only aim to achieve a measured improvement over the standard instance-MIR. Thus, our method is mostly beneficial in the cases where instance-MIR is the best out-of-the-box algorithm (e.g., on the 2 crop datasets). Since our algorithm computes the kernel mean embedding between scalars (i.e., between the real-valued predictions) and is robust to the values of $\lambda$ and the kernel parameter, it is easy to tune and its computational cost is very close to that of instance-MIR.

6 Conclusion

In this paper, we developed a straightforward extension of the baseline instance-MIR algorithm. Our method takes advantage of the expressive power of neural networks while addressing the main weakness of instance-MIR by computing the kernel mean embeddings of the predictions. We have shown that our algorithm consistently outperforms the baseline and achieves state-of-the-art results on the 2 crop datasets. In addition, our algorithm is robust to the kernel parameter values and its performance gains come at a low computational cost.

Nonetheless, it fails to reach state-of-the-art performance when the baseline instance-MIR does not yield satisfying results (e.g., on the 3 AOD datasets). This is because we compute the kernel mean embeddings on predictions made by the baseline instance-MIR, and we can thus only expect measured improvements over that baseline. Another drawback of our method comes from the fact that instance-MIR assigns the same weight to all the instances during training. However, the number of instances per bag may vary, and it would make sense to be more confident when making a prediction on a bag which contains a large number of instances compared to a bag with only a few instances. To tackle this issue, we could take a Bayesian approach to kernel mean embedding and explicitly express our uncertainty in the sampling variability of the groups [8].

Finally, as future work, we could use the attention coefficients from attention-MIR in order to weight the contribution of each of the instances towards the loss function. This would get rid of potentially redundant and noisy instances, thus improving the quality of the training data.

References