Probabilistic Regressor Chains with Monte Carlo Methods

07/18/2019
by   Jesse Read, et al.

A large number and diversity of techniques have been offered in the literature in recent years for solving multi-label classification tasks, including classifier chains, where predictions are cascaded to other models as additional features. The idea of extending this chaining methodology to multi-output regression has already been suggested and trialed: regressor chains. However, this has so far been limited to greedy inference, has provided relatively poor results compared to individual models, and has been of limited applicability. In this paper we identify and discuss the main limitations, including an analysis of different base models, loss functions, explainability, and other desiderata of real-world applications. To overcome the identified limitations we study and develop methods for regressor chains. In particular we present a sequential Monte Carlo scheme in the framework of a probabilistic regressor chain, and we show it can be effective, flexible and useful on several types of data. We place regressor chains in context in general terms of multi-output learning with continuous outputs, and in doing this shed additional light on classifier chains.




1 Introduction

Multi-dimensional data is ever more present in industrial and scientific contexts. For example, multi-label classification has made a significant impact in the machine learning literature over recent years, where data points are naturally associated with multiple outputs¹. Rather than naively building one model per output, advanced methods model the outputs together, resulting in better predictive performance. Although the potential of individual models is periodically revived under particular scenarios, the vast majority of the literature proposes joint modeling, and as a result shows improvement in both predictive performance and efficiency. A recent review of this area, containing many useful references, is given in [41].

¹Distinguished from multi-class classification in that a single output may take multiple values – but only one such value is assigned per output.

An established method in multi-label classification is that of classifier chains, where a model is trained for each label, but the estimates of the other models are used as additional features, in a cascaded chain along the target labels. This is exemplified (and contrasted with the naive approach) in fig:cc.

Figure 1: The naive vs chaining models. Each target is learned by a base model, where inputs are shown as incoming arrows, and output/prediction as an outgoing arrow.

This ‘chaining’ mechanism, although simple in its basic form, has proved successful in multi-label classification and provided dozens of modifications, extensions, and improvements, in the literature (see, e.g., [36, 10, 34, 11, 41, 35, 32] and references therein).

It is flexible, in the sense that any base model can be used to predict each label, and powerful, in the sense that even relatively simple base models lead to greater predictive performance than if they had been used separately.

Given its impact in multi-label classification, the motivation of this paper is to investigate more thoroughly the use of the chaining approach in the regression context, with continuous output variables. This investigation is needed because only greedy inference can be applied directly; and even then, the application of chaining to the regression context is not straightforward, and its effectiveness has not yet been widely explored.

After summarizing background work (sec:background), we provide a rigorous discussion and development of probabilistic regressor chains, including a survey of possible approaches (Sections 3 and 4). Following this, we develop a sequential Monte Carlo scheme (sec:smcrc). This scheme allows sampling of candidate paths through the target space, which is useful for many applications. We implement and test several of these on synthetic and real-world datasets involving multiple continuous outputs (sec:experiments), the results of which we reflect upon in detailed discussion (sec:discussion). After looking at connections to related and potential future work (sec:related) we draw conclusions and make recommendations (sec:conclusion).

2 Chain Methods

Given a dataset $\mathcal{D} = \{(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})\}_{i=1}^{N}$, where $\mathbf{x} \in \mathbb{R}^{D}$ and $\mathbf{y} = [y_1, \dots, y_L]$, we are interested in building a model that can provide predictions $\hat{\mathbf{y}} = [\hat{y}_1, \dots, \hat{y}_L]$ corresponding to the $L$ target variables², for any given input observation $\mathbf{x}$.

²In cases where it is not clear from context, we will denote $x_d^{(i)}$ as the value of the $d$-th attribute of the $i$-th instance.

In the multi-output classification scenario [36, 10, 34, 12], the $j$-th output takes some discrete value (we may highlight the popular case of binary outputs where each $y_j \in \{0, 1\}$, often known as multi-label classification) or – in the case of continuous outputs, i.e., multi-output regression – each $y_j \in \mathbb{R}$.

A naive approach is to simply build a separate model for each target independently, such that

$\hat{y}_j = h_j(\mathbf{x}), \qquad j = 1, \dots, L \qquad (1)$

where each model $h_j$ has been built from the training set as a traditional single-output classifier or regressor, according to the domain of $y_j$ (clearly, a regressor, if $y_j \in \mathbb{R}$). The exact class of model (type of regressor) is largely dependent on preference and/or driven by domain assumptions and constraints.

Particularly with regard to multi-output classification, a large volume of literature proposes a plethora of models that out-compete this baseline by modelling the dependencies among outputs; consider [36, 10, 40, 34, 11, 12, 41] and references therein. The multi-output regression literature is smaller in volume (consider, e.g., [3, 38]): specialized methods have been considered only more recently, and it is more difficult to obtain better results with such methods – precisely a motivating factor behind this work.

In prior work ([36, 10, 34, 12], and many others), a chaining mechanism was studied for multi-label classification, known as classifier chains. Due to the relatively large amount of work in the literature on this topic, including continued recent interest (e.g., [20, 39] among many others), we argue it is worth studying this mechanism in particular with application to multi-output regression, which we will denote regressor chains.

It is useful to study regressor chains in light of previous developments in classifier chains. In its simplest form, the estimates of earlier labels are used as feature inputs for the following classifier, thus augmenting the input space along the chain. Given some test instance $\mathbf{x}$, we may obtain an estimate of $y_j$ as

$\hat{y}_j = h_j(\mathbf{x}, \hat{y}_1, \dots, \hat{y}_{j-1}) \qquad (2)$

where a recursion takes place based on the earlier predictions $\hat{y}_1, \dots, \hat{y}_{j-1}$.

In the classification context, this is known as a classifier chain and, specifically, one with greedy inference. Note that models can be estimated individually and in parallel at training time, similarly to those in eq:BR, and – likewise – those models may be any class appropriate for each individual target (a binary classifier for binary output, etc).
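The greedy chain of eq:cc is straightforward to sketch with scikit-learn base models. The class and data below are a hypothetical illustration (not the authors' implementation): each model is trained on the input space augmented with the preceding targets, and at test time the predictions themselves are cascaded.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class GreedyRegressorChain:
    """Minimal greedy regressor chain (eq. 2): model h_j sees the input x
    plus the outputs of models h_1 .. h_{j-1}."""

    def __init__(self, base_estimator=LinearRegression):
        self.base_estimator = base_estimator
        self.models_ = []

    def fit(self, X, Y):
        Z = X
        self.models_ = []
        for j in range(Y.shape[1]):
            h = self.base_estimator().fit(Z, Y[:, j])
            self.models_.append(h)
            # at training time, augment with the *true* label
            Z = np.hstack([Z, Y[:, j:j + 1]])
        return self

    def predict(self, X):
        Z = X
        preds = []
        for h in self.models_:
            y_hat = h.predict(Z)
            preds.append(y_hat)
            # at test time, cascade the *prediction* as an extra feature
            Z = np.hstack([Z, y_hat.reshape(-1, 1)])
        return np.column_stack(preds)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = np.column_stack([X @ [1.0, 0.5, 0.0], X @ [0.0, 1.0, 2.0]])
chain = GreedyRegressorChain().fit(X, Y)
print(chain.predict(X[:2]).shape)  # (2, 2)
```

The base estimator is interchangeable, mirroring the point that any class of model appropriate for each target can be plugged in.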

In this case of greedy inference, a single path/combination is considered. However, note that this is just one possible prediction, and not necessarily the best. The addition of probabilistic inference was an important development [10, 11], where a search is carried out, typically to obtain a MAP estimate³, thus minimizing subset 0/1 loss; hence

³Although configurations for other losses are also possible; see [10].

$\hat{\mathbf{y}} = \underset{\mathbf{y} \in \mathcal{Y}}{\arg\max} \; \prod_{j=1}^{L} p_j(y_j \mid \mathbf{x}, y_1, \dots, y_{j-1}) \qquad (3)$

where $\mathcal{Y}$ is the space of possible paths over the outputs and $p_j$ is a pmf associated with the $j$-th model (henceforth we use the abbreviation $p_j(y_j \mid \cdot)$ where suitable). This is known as a probabilistic classifier chain. The need for a probabilistic interpretation (i.e., pmfs $p_j$) can be addressed by building such models; logistic regression is a common choice.

One may observe that the search space in eq:pcc is exponential in the number of labels $L$. Indeed, complete inference (where all branches of the probability tree are explored) is usually prohibitive. Therefore, one may consider the search space as a probability tree and conduct a tree search, where each $p_j$ provides a weight on each branch. fig:eg offers some visual intuition.

Figure 2: Probabilistic classifier chains: as a probabilistic graphical model (left), and with two explored paths in the probability tree (right). Note that the best path (in red, right) is not found by greedy inference. There are exponentially many possible paths. The label on each edge indicates the branch probability (shown for explored paths).

Many search methods have been applied for this purpose (a survey is given in [26]). In particular we highlight the approach of Monte Carlo sampling [34], which – unlike many of the search approaches – is directly applicable to regressor chains, and as such is most relevant to the material developed in sec:smcrc of this work. In this approach, $M$ samples $\mathbf{y}^{(m)}$ are taken and weighted as

$y_j^{(m)} \sim p_j(y_j \mid \mathbf{x}, y_1^{(m)}, \dots, y_{j-1}^{(m)}) \qquad (4)$
$w^{(m)} = \prod_{j=1}^{L} p_j(y_j^{(m)} \mid \mathbf{x}, y_1^{(m)}, \dots, y_{j-1}^{(m)}) \qquad (5)$

where $m = 1, \dots, M$. An example is shown in fig:eg. A final prediction is obtained as

$\hat{\mathbf{y}} = \mathbf{y}^{(m^*)} \qquad (6)$

where $m^* = \arg\max_m w^{(m)}$ (the index of the maximum weight), and where $\mathbf{y}^{(m)} = [y_1^{(m)}, \dots, y_L^{(m)}]$ is a complete sequence across label indices. Note that complexity is determined by the number of samples $M$.
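The sampling-and-weighting scheme of eqs. (4)–(6) can be sketched as follows. The per-label pmfs here are hand-specified stand-ins for trained probabilistic classifiers – a hypothetical illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

def pmf(j, x, prev):
    """Hypothetical pmf p_j(y_j | x, y_1..y_{j-1}) for binary labels,
    depending (weakly) on the parity of the earlier labels."""
    p1 = 0.3 + 0.4 * (sum(prev) % 2)
    return np.array([1.0 - p1, p1])

def mc_inference(x, L=3, M=50):
    best_path, best_w = None, -1.0
    for _ in range(M):
        path, w = [], 1.0
        for j in range(L):
            p = pmf(j, x, path)
            y_j = rng.choice(2, p=p)   # eq. (4): sample along the chain
            w *= p[y_j]                # eq. (5): accumulate the path weight
            path.append(int(y_j))
        if w > best_w:                 # eq. (6): keep the max-weight sample
            best_path, best_w = path, w
    return best_path, best_w

path, weight = mc_inference(x=None)
print(path, weight)
```

Complexity is linear in $M$, as noted above; increasing $M$ makes finding a high-probability path more likely.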

It has been noted that a chaining procedure can be applied in an off-the-shelf manner to the multi-output regression context involving continuous targets, namely with greedy inference; see, e.g., [3]. However, as we highlight and discuss in the following section, even though the application is identical, there are some major differences in the way regressor chains behave and in the relative results they obtain (with respect to non-chained methods).

3 The Poor Behaviour of Regressor Chains

Applying greedy inference in chains in the case of regression is – exactly as in classification – a case of each output simply being “plugged in” to the following model as an additional feature. Recall that this simply means that predictions $\hat{y}_1, \dots, \hat{y}_{j-1}$ are treated as fixed observations (i.e., not random variables) when inferring $\hat{y}_j$. Several papers have trialed this approach, e.g., [3]. Unfortunately, predictive performance is not as impressive with respect to independent regression models. We explain why.

Recall eq:cc from above. A first choice of model in the regression context may be least squares, where each $h_j$ is a linear map. Plugging this in to the equation, and supposing for simplicity a single input $x$ and two outputs (writing $\hat{y}_1 = v_0 + v_1 x$ for the first model's fit), we observe that

$\hat{y}_2 = w_0 + w_1 x + w_2 \hat{y}_1 = (w_0 + w_2 v_0) + (w_1 + w_2 v_1)\, x$

and thus clearly we see that label predictions are superfluous in predicting $\hat{y}_2$; the regressor chain in this context simply performs a series of linear transformations, which can naturally be represented as a single linear transformation.
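This collapse is easy to verify numerically. The sketch below (hypothetical noiseless data, scikit-learn's LinearRegression) fits a chained least-squares model and a flat model of the same target, and checks that their predictions coincide:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y1 = X @ [1.0, -2.0] + 0.5
y2 = 3.0 * y1 + X @ [0.5, 0.0]   # y2 depends on y1 and on x

# Chain: predict y1 from x, then y2 from [x, y1_hat]
h1 = LinearRegression().fit(X, y1)
y1_hat = h1.predict(X)
h2 = LinearRegression().fit(np.column_stack([X, y1_hat]), y2)
chain_pred = h2.predict(np.column_stack([X, y1_hat]))

# Flat: predict y2 directly from x alone
flat_pred = LinearRegression().fit(X, y2).predict(X)

# The composition of linear maps is itself linear: chaining adds nothing
print(np.max(np.abs(chain_pred - flat_pred)))
```

The difference is at the level of numerical precision, mirroring the algebraic argument above.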

It may be tempting to argue for application to the case where part of the input is observed only at a later timestep. In this scenario we presume that inputs are received over time. Unfortunately, we may show that the earlier label predictions are still not needed, since all the available information already comes directly from the observed inputs, rather than via the earlier labels. This is seen clearly in illustration; fig:nonisotropic.

Figure 3: Even when an input arrives only at a later timestep, information can still be carried forward from it directly (rather than via the label cascade), thus making the cascade superfluous with respect to the prediction – as long as the input is well modeled – even in this case.

Another fundamental issue is the selection of loss metric. An obvious and popular choice for regression is based on the mean squared error (MSE) loss criterion (as indeed considered in the earlier experimentation of regressor chains in [3, 38] among others).

Under random variables $\mathbf{y} = [y_1, \dots, y_L]$, the minimum MSE (MMSE) estimator under observation $\mathbf{x}$ is given as $\hat{\mathbf{y}} = \mathbb{E}[\mathbf{y} \mid \mathbf{x}]$, with respect to marginal densities (homologous to the pmfs of eq:joint). Let us emphasise that this is a vector of $L$ integrals, where

$\hat{y}_j = \mathbb{E}[y_j \mid \mathbf{x}] = \int y_j \, p_j(y_j \mid \mathbf{x}) \, dy_j \qquad (8)$

If we assume a Gaussian distribution for each density (a natural view of ordinary least squares [17]), then again we may connect to the case discussed at the beginning of this section, where each conditional expectation is a linear function of its inputs, and so on along the chain.

More generally, we are not only interested in obtaining point-wise estimators $\hat{\mathbf{y}}$; we are also interested in extracting all the statistical information encoded within the posterior $p(\mathbf{y} \mid \mathbf{x})$, such as uncertainty measures, credible intervals, and quantiles as well. So, our goal is to approximate complex integrals involving this posterior,

$I = \int f(\mathbf{y}) \, p(\mathbf{y} \mid \mathbf{x}) \, d\mathbf{y} \qquad (9)$

Having an estimate of the shape of the posterior then allows us to estimate values other than the expected value (i.e., eq:si_number), such as the median or the mode. We remark that classifier chains typically predict a mode, in the form of a MAP estimate.
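Given (weighted) samples from the posterior, these different point estimates can be read off directly. A minimal sketch on a hypothetical bimodal posterior, where the mean (MMSE estimate) falls in a low-density region between the modes while a crude density-based mode estimate does not:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical samples from a bimodal posterior p(y | x), equal weights
samples = np.concatenate([rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)])
weights = np.ones_like(samples) / samples.size

mean = np.sum(weights * samples)      # MMSE estimate: lands between the modes
median = np.quantile(samples, 0.5)    # minimises MAE
# crude mode (MAP-style) proxy: the sample with most neighbours within 0.25
counts = [(np.abs(samples - s) < 0.25).sum() for s in samples]
mode = samples[int(np.argmax(counts))]

print(mean, median, mode)
```

The window width 0.25 is an illustrative assumption; any density estimate (e.g., a KDE) could supply the mode instead.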

Notice, however, that – in spite of using the chain rule to factorize the joint in this manner – this estimate (and hence the related integrals, eq:si_number) is generally intractable. This is unlike probabilistic classifier chains, where each pmf models a variable taking a finite number of discrete values – inherently more tractable in most cases.

In all cases an exhaustive search of the joint is intractable for a large number of outputs $L$; hence efficient tree-search methods such as Monte Carlo sampling are used (again, refer to fig:eg). However, a tree cannot be formed over a continuous output space, and hence no such search is initially possible in the regression context. Discussion of the output space also brings us to identify a further pitfall in regressor chains: the possibility of extensive “error propagation”. In classifier chains, this is well known and easily detectable; moreover, the discrete space is inherently limited. On the other hand, under regressor chains, the estimate of greedy inference may completely degenerate and become lost in space as we progress down the chain towards $\hat{y}_L$.

Therefore we have isolated and discussed the main drawbacks to the application of greedy regressor chains: the types of loss function often considered in regression (e.g., MSE) totally change the behaviour of the chaining methodology; with a linear regression scheme, such as least squares, there is no benefit to the chaining mechanism; and, even worse, error propagation may be catastrophic.

Having identified the shortcomings of regressor chains, as compared to their classification counterpart, in the following section we study the idea of probabilistic regressor chains, which we later build on to suggest new methods.

4 Probabilistic Regressor Chains with Monte Carlo Search

In probabilistic classifier chains, inference may be cast as a search in the probability tree, as discussed above. Many tree-search methods are applicable. We gave an example of Monte Carlo search, which takes weighted samples across the tree (recall eq:f_dyn_luca and eq:f_obs_luca). In the regression context there is no inherent tree to search. We could, however, create a tree by using particular sampled values as nodes. This requires modeling the marginals in such a way that we may draw samples

$y_j^{(m)} \sim \hat{p}_j(y_j \mid \mathbf{x}, y_1^{(m)}, \dots, y_{j-1}^{(m)}) \qquad (10)$

and in particular these samples should be at an ‘interesting’ part of the space, such as near the modes.

fig:demo2b shows an example illustration of samples forming paths, and thus a tree, with each complete path from root to leaf as a possible prediction $\hat{\mathbf{y}}$.

Figure 4: A hypothetical search tree through particle space along the paths. Nodes show samples and branches show evaluation of the pdf. The yellow branch indicates resampling (see sec:smcrc); however, it is shown for illustration only – this extra branch is not required for inference.

This is an interesting idea: with a cloud of samples, we can not only obtain a tree and apply existing methods used in the classification context, but also easily obtain an approximation of a MAP estimator, as in eq:PFE_luca_correct. Explicitly we can write this as

$\hat{\mathbf{y}} = \mathbf{y}^{(m^*)}, \qquad m^* = \underset{m}{\arg\max} \prod_{j=1}^{L} \hat{p}_j\big(y_j^{(m)} \mid \mathbf{x}, y_1^{(m)}, \dots, y_{j-1}^{(m)}\big) \qquad (11)$

and, more generally (for estimators other than MAP), a prediction may be given as a path

$\hat{\mathbf{y}} = f\big(\{\mathbf{y}^{(m)}, \mathbf{w}^{(m)}\}_{m=1}^{M}\big) \qquad (12)$

chosen by a function $f$ over a tree with branches defined across nodes and weighted by $w_j^{(m)}$, i.e., holding the marginal branch cost from depth $j-1$ to $j$ by the $m$-th sample. Recall fig:demo2b. It means that eq:the_argmax makes use of the fact that the path weight factorizes across the chain. In the remainder of this work we occasionally omit the subscript and use $w^{(m)} = \prod_{j=1}^{L} w_j^{(m)}$ as the path cost from root to a leaf, as in eq:joint (for notational simplicity).

Suppose we define a cost function based on these path costs; for example, a cost is incurred whenever a path crosses a low-density region according to the estimated densities. To minimize expected loss, we use our estimate of the posterior (and its decomposition along the chain) and Monte Carlo samples to replace the integral, thus giving something like eq:the_argmax (depending on the choice of loss function).

A fundamental consideration is how to estimate each conditional density, and how to draw samples from it. The major issue at stake is that a powerful non-linear predictor is required to justify chains (as discussed above), yet such a predictor may not be easy to sample from. We address this issue with a sequential Monte Carlo method, presented in the following section. But let us first look briefly at some available alternatives, all of which could be considered a variation of probabilistic regressor chains.

4.1 Discretization and classification

A first approach to the inference problem is to simply discretize the label space and proceed from a classification perspective. This is not the same as taking samples (which is done at prediction time). As explained above, the classification perspective of chains is easier to justify, and a wealth of methods is available. Of course not all problems are suited to discretization of the label space, but often it is suitable and even common practice, for example in the domain of reinforcement learning (see, e.g., [42]).

In this case we are learning a pmf over bins (the RHS relates to eq:pcc), where each bin corresponds to an interval of the continuous output space, thus removing the need to estimate any integrals. Clearly it is fundamental to choose a suitable set of bins (which correspond to a finite set of class labels).
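A minimal sketch of this discretize-then-classify idea for a single output, using numpy binning and a scikit-learn classifier. The bin count, data, and decode-by-bin-centre rule are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=400)   # continuous target

# Discretize the label space into K bins (classes)
K = 30
edges = np.linspace(y.min(), y.max(), K + 1)
y_bin = np.clip(np.digitize(y, edges) - 1, 0, K - 1)   # bin index per example
centres = (edges[:-1] + edges[1:]) / 2                 # bin -> representative value

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_bin)
proba = clf.predict_proba(X[:1])[0]                    # pmf over bins, as in eq:pcc
y_hat = centres[clf.classes_[np.argmax(proba)]]        # decode the mode back to R
print(float(y_hat))
```

The classifier's `predict_proba` output plays the role of the per-label pmf, so sampling bins (and hence approximate values) is straightforward.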

4.2 Bayesian regression

From the perspective of a greedy chain (earlier labels are fixed inputs), we may elaborate as Bayesian linear regression:

$p(y_j \mid \mathbf{x}, \hat{y}_1, \dots, \hat{y}_{j-1}) = \mathcal{N}\big(y_j;\, \mu_j, \sigma_j^2\big) \qquad (13)$

with sufficient statistics $\mu_j$ and $\sigma_j^2$. As a Gaussian, sampling from this distribution (e.g., as in eq:luca_cazzo) is straightforward. However, it is based on a linear combination of the inputs and provides a unimodal Gaussian-shaped estimate, which is not suitable for many types of data.
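For instance, scikit-learn's BayesianRidge exposes the Gaussian predictive mean and standard deviation, from which sampling is immediate. A sketch on hypothetical data:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
y = X @ [1.0, 0.0, -1.0] + 0.1 * rng.normal(size=300)

model = BayesianRidge().fit(X, y)
mu, sigma = model.predict(X[:1], return_std=True)   # Gaussian predictive stats

# Drawing samples y^(m) ~ N(mu, sigma^2), as in eq. (10), is then trivial
samples = rng.normal(mu[0], sigma[0], size=100)
print(mu[0], sigma[0])
```

The limitation noted above is visible here: every draw comes from a single unimodal Gaussian, regardless of the true shape of the target density.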

4.3 Variational inference

We may approximate each conditional density⁵ with some other distribution, as in variational Bayesian methods, which turn inference into an optimization problem (minimizing Kullback–Leibler divergence); see, e.g., [1]. We could then sample from the approximation instead. Unlike MCMC methods, this approach does not provide an exact model of the target density in the limit (with sufficient samples).

⁵As a reminder, $p_j(y_j \mid \mathbf{x}, y_1, \dots, y_{j-1})$.

4.4 Density estimation

We may also model the target density using a non-parametric method such as a Parzen window (i.e., a kernel density estimate, KDE), where

$\hat{p}(y) = \frac{1}{N} \sum_{i=1}^{N} K_h\big(y - y^{(i)}\big)$

for some kernel $K_h$ (of bandwidth parameter $h$). Sampling can be carried out for certain kernels, such as the Gaussian kernel. Of course, the general disadvantages of kernel methods apply (quadratic complexity with respect to the number of examples, and difficulty in incremental modeling).
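A sketch with scipy's `gaussian_kde`, which supports both evaluation and sampling for the Gaussian kernel. The bimodal data here is a hypothetical illustration of exactly the kind of density a unimodal model cannot capture:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
# Bimodal training targets: hard for unimodal (e.g. Gaussian) models
y_train = np.concatenate([rng.normal(-2, 0.4, 300), rng.normal(2, 0.4, 300)])

kde = gaussian_kde(y_train)        # Parzen window / KDE, Gaussian kernel
density = kde([-2.0, 0.0, 2.0])    # evaluation: modes are high, the gap is low
samples = kde.resample(5)          # sampling is equally straightforward

print(np.round(density, 3))
```

Both operations touch all $N$ training points, illustrating the quadratic-cost caveat mentioned above.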

5 Sequential Monte Carlo Regressor Chains

The methods just described are adequate to build a probabilistic regressor chain if the resulting approximation of each conditional density is adequate for both sampling (drawing the $y_j^{(m)}$) and accurate evaluation (computing the weights). This restricts applicability considerably, and our selection is reduced especially with high-dimensional input observations. For example, many methods may provide an evaluation of a pdf, but no useful sampling mechanism.

In this section we build a state-space model for probabilistic regressor chains, separating the sampling and evaluation functions. It is essentially a particle filter (PF, see, e.g., [13]). We make some particular considerations and adaptations for its application in a probabilistic regressor chain.

5.1 The particle filter

To briefly review: a particle filter in a state-space model consists of a model

$y_t \sim p(y_t \mid y_{t-1}), \qquad \mathbf{x}_t \sim p(\mathbf{x}_t \mid y_t) \qquad (14)$

running over time-steps $t$, encompassing the transition function and the observation function, respectively⁶.

⁶Regarding notation, it is fundamental for those familiar with the literature on particle filters and continuous state-space models to observe that in this work $y$ is the state and $\mathbf{x}$ is the observation, in line with the machine learning literature. See [13] for an in-depth introduction and survey.

Under this notation, the vanilla particle filtering method for obtaining a marginal estimation (i.e., the prediction⁷) for $y_t$ can be written as:

$y_t^{(m)} \sim q\big(y_t \mid y_{t-1}^{(m)}, \mathbf{x}_t\big) \qquad (15)$

$w_t^{(m)} \propto w_{t-1}^{(m)} \, \frac{p\big(\mathbf{x}_t \mid y_t^{(m)}\big)\, p\big(y_t^{(m)} \mid y_{t-1}^{(m)}\big)}{q\big(y_t^{(m)} \mid y_{t-1}^{(m)}, \mathbf{x}_t\big)} \qquad (16)$

$\hat{y}_t = \sum_{m=1}^{M} \bar{w}_t^{(m)} \, y_t^{(m)} \qquad (17)$

⁷In machine learning, prediction often refers to an estimation for the current instance, rather than for some future time instance, as is often the case in dynamical-systems terminology.

We highlight the strong connection to eq:f_dyn_luca–eq:PFE_luca_correct in Monte Carlo classifier chains. However, in the classification context, the posterior pmf can be sampled from easily; whereas here, since that might not be the case, we use the auxiliary proposal function $q$ to propose samples at each step in the chain.

There are important differences from typical applications of particle filters, namely 1) in our case the model (eq:model) is learned from the data (i.e., no domain knowledge assumptions), 2) a single observation is relevant to an entire sequence of states (that is to say, the isotemporal case), and 3) the full cascade is considered rather than the standard single-order Markovian model.

5.2 Training

There are two functions we need to learn, according to eq:model: the transition function and the observation function. Unlike the vanilla particle filter model described above, we wish to take the full chain cascade into account, conditioning on all earlier outputs. Any suitable method for modeling the transition density can be considered, as long as we are able to sample from it (the methods given in subsections 4.1–4.4 are applicable here) – except that, in this case, we may remove part of the conditioning, which simplifies and speeds up sampling.

For the observation function we only need to evaluate the density up to some normalizing constant, and thus we have more flexibility in choosing a function to learn. Since we involve the observation $\mathbf{x}$, predictive power is particularly important at this step. Any method giving an accurate evaluation may be considered.

There is no need to impose a particular choice at this stage – we will address options in the experimental investigations in sec:experiments.

5.3 Inference

code:MSMC elaborates our Sequential Monte Carlo (SMC) scheme. It can be specifically designed to measure the posterior; namely, to approximate complex integrals of the type eq:expectation, which represents an MMSE estimator, or other quantities we may be more interested in, such as the mode.

Note that the Effective Sample Size (ESS) approximation is used to decide when to apply the resampling step (and the MCMC/AIS schemes, if required) and thus prevent sample degeneration (i.e., error propagation); either of the standard ESS approximations is typically appropriate [23].
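Assuming the common ESS approximation $\widehat{ESS} = 1/\sum_m (\bar{w}^{(m)})^2$ and a resampling threshold of $M/2$ (both standard choices, not necessarily the paper's exact configuration), the check is a one-liner:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS approximation 1 / sum(w_bar^2) over normalised weights w_bar."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# Balanced weights give ESS = M; degenerate weights push ESS towards 1
print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))  # 4.0
print(effective_sample_size([0.97, 0.01, 0.01, 0.01]))  # close to 1

M = 4
if effective_sample_size([0.97, 0.01, 0.01, 0.01]) < M / 2:
    pass  # trigger the resampling step here
```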

Step 2c in the algorithm is not strictly necessary; nevertheless, this step may be useful in cases where the sequential scheme is struggling, and we include in Appendix A a detailed description of the application of independent MCMC schemes, specifically Metropolis–Hastings (MH) methods (at the $j$-th iteration of the SMC algorithm above).

Of course, if a model efficiently meets both constraints – sampling and evaluation – (as just mentioned above) we can use the same model for both, thus simplifying eq:trans_weights and recovering the approach described in sec:smcrc.

Algorithm PFC: Sequential Monte Carlo (SMC) Method for PRCs

  • INPUT:

    • Models $q_j$, $\hat{p}_j$, for all $j = 1, \dots, L$, from the training stage
    • Initial weights $w_0^{(m)} = 1/M$ for all $m = 1, \dots, M$
    • Threshold $\gamma$ for the ESS approximation

  • For $j = 1, \dots, L$:

    1. For $m = 1, \dots, M$:

      • Draw samples $y_j^{(m)} \sim q_j\big(y_j \mid y_1^{(m)}, \dots, y_{j-1}^{(m)}\big)$
      • Compute the transition weights

        $w_j^{(m)} \propto w_{j-1}^{(m)} \, \frac{\hat{p}_j\big(y_j^{(m)} \mid \mathbf{x}, y_1^{(m)}, \dots, y_{j-1}^{(m)}\big)}{q_j\big(y_j^{(m)} \mid y_1^{(m)}, \dots, y_{j-1}^{(m)}\big)} \qquad (18)$

    2. If $\widehat{ESS} < \gamma M$:

      1. Resample according to normalized weights $\bar{w}_j^{(m)} = w_j^{(m)} / \sum_{m'} w_j^{(m')}$ (i.e., each particle is resampled with probability $\bar{w}_j^{(m)}$; with replacement).
      2. Set $w_j^{(m)} = 1/M$ for all $m$, as per [24].
      3. (Optional) Apply steps of an MCMC or AIS method (Appendix A), and set the particles to the result (after the MCMC iterations).

  • OUTPUT: Prediction as per eq:the_general, from the weighted paths $\{\mathbf{y}^{(m)}, w^{(m)}\}_{m=1}^{M}$.

Figure 5 (a–d): The first three subfigures relate to the synthetic data shown in fig:demo2 (and the fourth to the Andro dataset). For a given test instance, samples (shown in magenta) are taken across the chain (subfigs. (a) and (b)) using a function learned from the training data (shown in cyan). A separate function evaluates the fitness of these samples, providing a weight, as shown implicitly in the size of each sample. Samples can be viewed as trajectories ((c), and – for the Andro dataset – (d)), and from these a final trajectory is decided as a prediction. Notice that in this dataset the true trajectory (denoted in blue) can be approximated by our method, whereas this is never the case under greedy chains (at least as clearly shown in the synthetic dataset). We remark also on the bimodal nature of the distribution of Synth, which is difficult to capture with standard regression methods. Details in code:MSMC and sec:experiments in general.

6 Experiments

In this section we compare some of the approaches discussed, identified, and developed above. Namely, we compare independent regression models (IR) and regressor chains with greedy inference (RC) against our Monte Carlo methods, discussed in sec:PRC, as well as the further development as a particle filter in sec:smcrc (PF). We compare different base estimators. Recall that PF takes both a model for sampling and a model for evaluation – not necessarily from the same model class. These models are summarized and denoted as follows:

Key  Algorithm
IR   Independent Regression
RC   Greedy Regressor Chains
MC   Probabilistic Regressor Chains (Monte Carlo)
PF   Probabilistic Regressor Chains (Particle Filter)
B    Bayesian Regression
K    Kernel Ridge Regression (Gaussian kernel; parameters set by grid search)
N    Discretized label space, 30 bins (classes per label), with Neural Network Classifier (2 hidden layers, each of size 30)
R    Discretized label space, 30 bins (classes per label), with Random Forest Classifier (100 estimators)

such that (in the table of results, tab:results) IR.B denotes independent Bayesian regression, PF.R/B denotes particle-filter chains with a discretized random-forest base classifier for sampling and Bayesian regression for evaluation; and so on. Note that MC and PF are always set to maximize the approximated density (by selecting a mode), whereas IR and RC (by default) minimize MSE, i.e., predict the mean. In both cases we consider $M$ samples/particles per test example.

IR and RC are implemented in the well-known Scikit-Learn framework (https://scikit-learn.org). We implemented our novel contributions using the Scikit-Multiflow framework (https://scikit-multiflow.github.io/) [27], which is based on Scikit-Learn. If not explicitly stated, then default parameters are used.

We carry out an empirical evaluation on synthetic and real-world datasets. The real-world sets are described and referenced in [38] (available online: http://mulan.sourceforge.net/datasets-mtr.html), covering a number of real-world applications involving predicting multiple components of sea water, residential buildings, concrete pouring, and natural resources, so as to offer sufficient variety. The synthetic dataset is described and shown in fig:demo2.

Figure 6: A bimodal joint distribution over two labels $y_1, y_2$. We suppose that both modes are equally probable given $\mathbf{x}$. Left: the true density. Right: hypothetical “paths” across the output space. The black points/lines show estimates under a MAP estimator, red from an estimator of MSE, and yellow one possible result under MAE.

We consider the following loss metrics: MSE and MAE (mean squared error and mean absolute error, respectively), and an approximation of mean 0/1 loss (inverse accuracy), which approaches the 0/1-loss estimate as a tolerance constant goes to 0. This last loss is designed to reward models which find a joint mode, which should correspond to a path that is likely to occur in practice (e.g., in the training data).
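Under the reading that the third metric counts a prediction as correct only when every output lies within a tolerance $\epsilon$ of the truth (the exact constant is an assumption here), the three metrics can be sketched as:

```python
import numpy as np

def mse(Y, Y_hat):
    return np.mean((Y - Y_hat) ** 2)

def mae(Y, Y_hat):
    return np.mean(np.abs(Y - Y_hat))

def zero_one_approx(Y, Y_hat, eps=0.5):
    """Approximate mean 0/1 loss: a row counts as an error unless *every*
    output is within eps of the truth; approaches exact 0/1 loss as eps -> 0."""
    hits = np.all(np.abs(Y - Y_hat) < eps, axis=1)
    return 1.0 - np.mean(hits)

Y = np.array([[0.0, 1.0], [2.0, 3.0]])
Y_hat = np.array([[0.1, 1.1], [2.0, 4.0]])
print(mse(Y, Y_hat), mae(Y, Y_hat), zero_one_approx(Y, Y_hat))
```

The row-wise "all outputs within tolerance" condition is what rewards finding a joint mode rather than matching each output marginally.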

Averaged results over 10-fold cross validation are provided in tab:results for the different metrics.

MSE:

Dataset IR.B IR.K RC.B RC.K MC.B MC.D PF.R/B PF.N/B
Synth 2 1.03 1.02 1.04 1.05 1.44 1.02 1.51 1.42
Andro 6 0.60 0.24 0.50 0.21 0.77 0.91 1.04 1.02
EDM 2 0.61 0.43 0.60 0.42 0.97 0.80 0.98 1.17
ENB 2 0.10 0.01 0.10 0.01 0.15 0.58 0.16 0.20
Jura 3 0.38 0.35 0.38 0.35 0.57 0.77 0.58 0.50
Slump 3 0.49 0.40 0.49 0.40 0.80 0.85 0.64 0.54
Avg Rank 3.67 2.67 2.33 1.67 6.00 6.33 6.67 6.67

MAE:

Dataset IR.B IR.K RC.B RC.K MC.B MC.D PF.R/B PF.N/B
Synth 2 1.01 1.00 1.01 1.01 1.00 1.00 0.91 0.89
Andro 6 0.62 0.33 0.56 0.31 0.70 0.76 0.95 0.75
EDM 2 0.63 0.46 0.62 0.44 0.80 0.62 0.88 0.87
ENB 2 0.22 0.07 0.22 0.06 0.28 0.64 0.32 0.36
Jura 3 0.41 0.39 0.41 0.39 0.56 0.66 0.53 0.51
Slump 3 0.54 0.42 0.55 0.43 0.67 0.76 0.61 0.56
Avg Rank 4.00 3.50 3.00 2.50 5.33 5.83 6.00 5.83

0/1 loss (approximation):

Dataset IR.B IR.K RC.B RC.K MC.B MC.D PF.R/B PF.N/B
Synth 2 1.00 1.00 1.00 1.00 0.94 1.00 0.51 0.59
Andro 6 0.98 0.72 0.98 0.69 1.00 1.00 1.00 0.98
EDM 2 0.87 0.69 0.86 0.64 0.88 0.78 0.82 0.70
ENB 2 0.21 0.02 0.21 0.01 0.32 0.81 0.39 0.47
Jura 3 0.73 0.70 0.73 0.70 0.88 0.98 0.89 0.87
Slump 3 0.82 0.65 0.82 0.65 0.91 0.91 0.76 0.81
Avg Rank 3.83 3.67 3.83 4.00 5.17 4.33 5.17 6.00
Table 1: Results of 10-fold cross validation.

7 Discussion

In this section we discuss the empirical results. We look at these results with the goal of, firstly, drawing conclusions about the behaviour of regressor chains in general, and secondly, justifying both the acceptable performance and the usability of our proposed methodologies.

The empirical results confirm that greedy regressor chains (RC) show little to no advantage over independent estimators when a linear base model is used (with only a small exception on the Andro dataset, if we compare RC.B and IR.B). These findings are completely in line with the analysis so far: classification models involve an inherent non-linearity (such as, for example, the sigmoid function in logistic regression) which adds predictive power via the chain structure, but this is not inherently the case in regression.

We do not need to discuss the power of non-linear modeling for regression, as this is an elementary concept, but it is particularly interesting to observe that regressor chains with a non-linear base learner (e.g., RC.K) perform better on average (i.e., in terms of average rank) than their independent counterpart (IR.K). It is in this sense that we begin to find an argument for using regressor chains.

Although regressor chains may be effective, we point out the risk of degeneration of estimates across the chain with poorly regularized base models. Indeed, we found this using standard stochastic gradient descent for linear regression: estimates diverged so far (exceeding 1000 MSE) that there was no point including them in the table. This effect is widely discussed in analyses of classifier chains in the literature under the term error propagation, but in a multi-output binary classification setting any such propagation is always constrained between 0 and 1. Under RCs, however, the trajectory may become increasingly lost and isolated (from the true path) in output space as prediction progresses along the chain. Our choice of Bayesian regression adds some regularization which counters this. Nevertheless, it is also one of the issues we took into account in our development of probabilistic regressor chains.
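The greedy chain mechanism discussed above can be sketched as follows. This is an illustrative Python fragment (the class name and interface are ours, not the paper's code); scikit-learn's BayesianRidge provides the regularization that helps limit error propagation along the chain, compared to an unregularized SGD regressor.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

class GreedyRegressorChain:
    """Illustrative greedy regressor chain with a regularized base model."""

    def __init__(self, base=BayesianRidge):
        self.base = base

    def fit(self, X, Y):
        # Model j is trained on the inputs plus the true values of outputs 1..j-1.
        self.models_ = []
        for j in range(Y.shape[1]):
            Xj = np.hstack([X, Y[:, :j]])
            self.models_.append(self.base().fit(Xj, Y[:, j]))
        return self

    def predict(self, X):
        # At test time, each prediction is cascaded to later models as an
        # additional feature (greedy inference).
        Yp = np.zeros((X.shape[0], len(self.models_)))
        for j, m in enumerate(self.models_):
            Yp[:, j] = m.predict(np.hstack([X, Yp[:, :j]]))
        return Yp

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = np.stack([X @ [1., -1., 0.5], (X @ [1., -1., 0.5]) * 2 + 1], axis=1)
Yp = GreedyRegressorChain().fit(X, Y).predict(X)
```

Swapping the base class for an unregularized linear model is a direct way to observe the divergence described above on harder data.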

Results show that acceptable predictive performance can usually be obtained by the Monte Carlo approaches (MC, PF), although on average they do not achieve top results overall. At first glance it seems difficult to justify either of these two methods in either of their configurations, but we can take a closer look at particular evaluation metrics. We find that when it is important to find modes of the posterior, MC.B and PF.B outperform RC.B about half of the time. We can speculate that this would be similar in a hypothetical comparison with MC.K, but the implementation we used (from scikit-learn) does not offer sampling from this method. Furthermore, such a kernel-based approach has its own disadvantages (quadratic complexity, difficult application to incremental learning, and so on) which are limiting in modern settings of large and dynamic datasets. In addition, we can emphasise the important result on Synth: 1.00 (RC.K) vs 0.51 (PF.B) is one of the largest differences in performance obtained (in fact 0.50 would be the Bayes-optimal result for this data). Finally, and perhaps most importantly: greedy RC provides almost no form of interpretation.

Results on the Synth dataset are worth exploring in more detail (shown in fig:demo2; results in the first row of tab:results; a detailed step-by-step illustration of performance in fig:main_figure). In particular, notice that any path crossing a low-density region is unlikely to exist in practice, even though it provides the best estimate under MSE. For any kind of data with a multi-modal density in the output space, our developed PF approach is highly suited, as it is able to track the path evolution across different modes along its pass of the chain. Furthermore, it is able to offer interpretation or explainability of results by showing the actual paths taken, and the density estimate via the cloud of particles; see, e.g., fig:main_figure.
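A minimal sketch of this kind of probabilistic chain inference by sampling is given below. This is not the paper's exact SMC scheme: each conditional p(y_j | x, y_1:j-1) is here simply assumed Gaussian around a fitted regressor's prediction (model names and the noise scale are our illustrative choices), and a cloud of sample paths through output space is drawn and can be inspected for multi-modality.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)

# Toy chained data: y2 depends non-linearly on y1.
X = rng.normal(size=(300, 2))
Y1 = X @ [1.0, 0.5] + rng.normal(scale=0.1, size=300)
Y2 = np.sin(Y1) + rng.normal(scale=0.1, size=300)

m1 = BayesianRidge().fit(X, Y1)
m2 = BayesianRidge().fit(np.column_stack([X, Y1]), Y2)

def sample_paths(x, n_particles=500, sigma=0.1):
    # Draw particles for y1, then cascade each sample into the model for y2.
    x = np.tile(x, (n_particles, 1))
    y1 = m1.predict(x) + rng.normal(scale=sigma, size=n_particles)
    y2 = m2.predict(np.column_stack([x, y1])) + rng.normal(scale=sigma, size=n_particles)
    return np.column_stack([y1, y2])  # cloud of sample paths in output space

paths = sample_paths(np.array([1.0, 0.0]))
```

The returned particle cloud is exactly the kind of object from which both a density estimate and a set of interpretable candidate paths can be read off.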

It is unfortunate that the benchmark datasets we used, as per related studies, do not seem complex enough to reveal this behaviour, although it is easy to provide examples of real-world possibilities. For example, in the prediction of vehicle trajectories, it is only useful to estimate paths which do not leave the transportation network (for example, a truck does not pass through a river but over one of the bridges that cross it; this is covered in [25] in the discrete-waypoint scenario). Furthermore, it can be interesting to produce a set of possible trajectories rather than a single estimate. Likewise, in time series forecasting and anomaly detection, we may want to consider different hypotheses, rather than outputting a single maximum-likelihood estimate.

The need for interpretable models is certainly of increasing interest, as machine learning methods are used in more sectors, for example medical and security applications, where it is often essential to be able to provide detailed explanations of results.

For simplicity, in the Synth data, the observation does not affect the prediction for the later output. This means that if that variable were added (in a real-world case) following the building and deployment of the earlier model, it would be more efficient to update the multi-output model with a chain than with an independent regressor. Of course, on this dataset any discussion of efficiency is not warranted, but as the number of labels and instances grows, we can see chain methods operating as a kind of transfer learning, adapting partially pre-trained models by adding links from the outputs of those models. A further discussion is beyond the scope of this work.
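The transfer-learning view above can be sketched as follows (an illustrative Python fragment, not the paper's code; scikit-learn's BayesianRidge stands in for any base regressor): a deployed model for the first output is reused untouched, and only one new chain link is trained when a new target appears.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y1 = X @ [2.0, 0.0, -1.0]
deployed = BayesianRidge().fit(X, y1)       # pre-trained model, left untouched

y2 = 3.0 * y1 + 1.0                         # a new target arrives later

# Only the new chain link is trained, consuming the deployed model's outputs.
link = BayesianRidge().fit(
    np.column_stack([X, deployed.predict(X)]), y2)

y2_hat = link.predict(np.column_stack([X, deployed.predict(X)]))
```

Here no retraining of the deployed model is needed; the chain link adapts it to the new output.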

8 Related Methods and Additional Considerations

To the best of our knowledge this is the first work treating regressor chains in depth, and particularly from a probabilistic point of view with a sequential Monte Carlo approach. Nevertheless, since it tackles a well-established problem (i.e., multi-output regression), it is equally important to contribute a discussion of related methods in other areas, as well as additional considerations already treated in the classifier-chains literature, as we do in this section.

8.1 Neural networks

A residual neural network (ResNet, [18]) includes skip connections similar to those of a chain model, except layer-wise rather than node-wise, and has obtained noteworthy performance on deep learning tasks. The original paper (2015) obtained good performance in image classification, though ResNets have also been studied in the context of regression (such as in [21]). Aside from its prevalence in classification, another difference is that in ResNets only the last layer is used for prediction, with the other layers simply forming internal nodes. However, similar multi-label architectures (including, e.g., [5, 31]) have been used which consider each node/layer (or at least several of them) as a target, i.e., multiple targets. Although that work deals primarily with classification problems, this deep-learning approach tackles well one of the issues faced by regressor chains: needing strong non-linearities to serve as useful representations, as highlighted in our discussion and results. Other neural models have been considered specifically for the multi-output regression case, including ensemble settings, e.g., [16].
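The layer-wise skip connection contrasted above with a chain's node-wise cascade can be written in a few lines of NumPy; this is a generic residual block h(x) = x + f(x) with arbitrary random weights (purely illustrative, unrelated to any trained network).

```python
import numpy as np

rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def residual_block(x):
    # f(x) is a small two-layer transform; the input is added back (skip path),
    # so the block only has to learn a residual on top of the identity.
    f = np.maximum(0.0, x @ W1) @ W2
    return x + f

x = rng.normal(size=(5, 4))
out = residual_block(x)
```

Note that, unlike a regressor chain, only the final layer's output would be used as the prediction; intermediate activations are internal representations, not targets.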

8.2 Probabilistic models

A closely related approach to the inference in probabilistic regressor chains is that of performing a standard regression with noisy inputs [9, 19]. Indeed, suppose that $\hat{y}_1 = f_1(\mathbf{x})$, and

(19) $y_2 = f_2(\mathbf{x}, y_1)$

Under the assumption of additive noise, we can write

(20) $y_2 = f_2(\mathbf{x}, \hat{y}_1 + \epsilon_1)$
(21) $\phantom{y_2} = f_2(\mathbf{x}, f_1(\mathbf{x}) + \epsilon_1)$

where $f_1, f_2$ are the regression models. For proper inference, all the statistical features of $\epsilon_1$ (i.e., the uncertainty) should be taken into account; hence, a regression problem with noisy inputs. Gaussian Processes (GPs) provide a method relevant to this context, e.g., [29, 6], and in particular the idea of warped GPs [37, 22, 7] (a kind of hierarchical GP, thereby giving a kind of ‘depth’). Most of these cases are elaborated under the assumption of Gaussian noise. We remark (also with regard to the discussion above) that there is of course no clear separating definition between (deep) neural networks and probabilistic models.
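The effect of accounting for input noise can be shown with a tiny Monte Carlo example. Here f2 is a known nonlinear map standing in for a fitted second-stage regressor (our illustrative choice, with Gaussian noise on the first-stage estimate as assumed above); the plug-in prediction ignores the uncertainty, while the Monte Carlo estimate averages over it.

```python
import numpy as np

rng = np.random.default_rng(4)

def f2(x, y1):
    # Stand-in for a fitted second-stage regressor (a known nonlinear map here).
    return np.sin(y1) + 0.1 * x

x = 0.5
y1_hat, sigma1 = 1.0, 0.3   # noisy estimate of y1, with its uncertainty

# Naive plug-in inference ignores the uncertainty on y1 entirely:
plug_in = f2(x, y1_hat)

# Monte Carlo propagation instead averages f2 over the noise on y1:
samples = y1_hat + sigma1 * rng.normal(size=5000)
mc_estimate = float(np.mean(f2(x, samples)))
```

Because f2 is nonlinear, the two estimates differ: the Monte Carlo average is the correct expectation under the stated noise model, while plug-in inference is biased.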

8.3 State space models

The previous considerations have direct application to inference and prediction in so-called state-space models. Such models are formed by a transition dynamic equation and an observation equation (once more, we have opted for notation where $\mathbf{y}_t$ is the observation):

(22) $\mathbf{x}_t = g(\mathbf{x}_{t-1}) + \mathbf{v}_t, \qquad \mathbf{y}_t = h(\mathbf{x}_t) + \mathbf{w}_t$

where $\mathbf{x}_t$ is the state at time $t$, and $\mathbf{v}_t, \mathbf{w}_t$ are perturbation noises. Note that even if the mappings and noises are known, inference and prediction in this stochastic system can be interpreted, at each $t$, as a noisy-input regression problem, since

(23) $\mathbf{y}_t = h(g(\mathbf{x}_{t-1}) + \mathbf{v}_t) + \mathbf{w}_t$

showing strong parallels with the noisy regression model above. Furthermore, when the elements of the dynamic system are not deterministically known, the problem becomes even more complex. In [8, 2], the authors suggest modelling $g$ and $h$ as two GPs.
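A simulation of this state-space form makes the noisy-input structure concrete. The particular choices g(x) = 0.9x and h(x) = x² below are our illustrative stand-ins, not from the paper; each observation is h evaluated at the noisy input g(x_{t-1}) + v_t.

```python
import numpy as np

rng = np.random.default_rng(5)

def g(x):
    return 0.9 * x   # transition dynamic

def h(x):
    return x ** 2    # observation map

T, x = 100, 0.0
states, obs = [], []
for _ in range(T):
    x = g(x) + 0.1 * rng.normal()          # v_t: transition noise
    states.append(x)
    obs.append(h(x) + 0.1 * rng.normal())  # w_t: observation noise
states, obs = np.array(states), np.array(obs)
# Predicting y_t from x_{t-1} means evaluating h at the noisy input
# g(x_{t-1}) + v_t -- exactly the noisy-input regression structure above.
```

When g and h are unknown and must themselves be learned from data, the problem compounds, which is the setting addressed with GPs in [8, 2].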

8.4 Chain order

A natural question which arises is: does the order of the chain affect results? This question has been answered in the classification context (in fact, order does affect results) in various other works, mainly by the use of random ensemble methods (e.g., [36, 40]; each model with a different/random order), hill-climbing approaches (e.g., [12, 34]; the best of a number of trialed orders is taken), and methods based on label dependence (e.g., [26, 30] and references therein). Most of these methods are also directly applicable to the regression case; indeed, the chain-order space and complexity are exactly the same, and thus ensemble and hill-climbing methods need no specific adaptation; label-dependence-based methods only need to consider dependence among continuous rather than discrete variables. Thus, there is no reason to suggest that the regression context needs special treatment, and for this reason we do not specifically readdress this consideration in this work.
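The ensemble remedy carries over directly; the following sketch (our illustrative code, with scikit-learn's BayesianRidge as the base model) trains regressor chains over random orders and averages their predictions, exactly as ensemble classifier chains do.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(6)

def fit_chain(X, Y, order):
    models = []
    for k, j in enumerate(order):
        # The model for output j sees X plus the true values of the
        # outputs that come earlier in this particular order.
        Xj = np.hstack([X, Y[:, order[:k]]])
        models.append(BayesianRidge().fit(Xj, Y[:, j]))
    return models

def predict_chain(models, order, X):
    Yp = np.zeros((X.shape[0], len(order)))
    for k, j in enumerate(order):
        Yp[:, j] = models[k].predict(np.hstack([X, Yp[:, order[:k]]]))
    return Yp

X = rng.normal(size=(150, 2))
Y = np.column_stack([X @ [1., 1.], X @ [1., -1.]])

# An ensemble over random chain orders, with averaged predictions.
orders = [rng.permutation(2) for _ in range(5)]
ensemble = [(fit_chain(X, Y, o), o) for o in orders]
Yp = np.mean([predict_chain(m, o, X) for m, o in ensemble], axis=0)
```

Since the order space is identical to the classification case, no regression-specific adaptation is required, as argued above.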

8.5 Multiple passes

Another aspect to consider is why not pass over the set of variables (i.e., through the chain) multiple times per test instance. This is an interesting proposal and would be an easy extension to our methods: simply iterate over the chain a second time, plugging in the first-pass predictions as inputs and treating the resulting outputs thenceforth as the target estimates of interest. One can view this as a special case of regressor stacking as described in [3] (in the general case, we simply feed all predictions as inputs to a second model, not necessarily a chain model). On the other hand, time complexity increases significantly with each pass along the chain. One can also see a connection to recurrent neural networks (RNNs), unrolled across time, as noticed in recent work by [28].

From the probabilistic perspective, an approach of many passes can be seen as a kind of Gibbs sampling (where the graph is undirected). In fact, an undirected and fully connected network (rather than a ‘chain’) removes the question of label order entirely. This has been developed in the context of conditional dependency networks as presented by [15] (for multi-label classification). We notice that this framework is also applicable to the regression case, whenever sampling from the conditional is possible; namely, (as in [15]) using Gibbs sampling to estimate the marginal:

(24) $y_j^{(t)} \sim p(y_j \mid \mathbf{x}, y_1^{(t)}, \ldots, y_{j-1}^{(t)}, y_{j+1}^{(t-1)}, \ldots, y_D^{(t-1)})$
(25) $\hat{y}_j = \frac{1}{T - T_b} \sum_{t = T_b + 1}^{T} y_j^{(t)}$

(having adjusted the equations for continuous target variables) where $T_b$ indicates the number of initial samples drawn during the burn-in time, which are not considered. This is relatively straightforward to incorporate in the regression setting. It is, however, worth emphasising (as found empirically in [34], for example) that the number of samples $T$ must be relatively much greater, since a burn-in time of $T_b$ samples is required, and $T \gg T_b$.
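A minimal Gibbs-style sketch in the spirit of Eqs. (24)-(25) follows. Each conditional is here assumed Gaussian around a fitted regressor (our illustrative setup, with scikit-learn's BayesianRidge); burn-in samples are discarded before averaging.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 1))
Y1 = X[:, 0] + rng.normal(scale=0.1, size=300)
Y2 = 0.5 * Y1 + rng.normal(scale=0.1, size=300)

# Fit both conditionals: y1 | x, y2 and y2 | x, y1.
m1 = BayesianRidge().fit(np.column_stack([X, Y2]), Y1)
m2 = BayesianRidge().fit(np.column_stack([X, Y1]), Y2)

def gibbs(x, T=400, Tb=100, sigma=0.1):
    # Alternate sampling each output given the other; discard burn-in (Tb),
    # then average the remaining draws as in Eq. (25).
    y1, y2, trace = 0.0, 0.0, []
    for _ in range(T):
        y1 = m1.predict([[x, y2]])[0] + sigma * rng.normal()
        y2 = m2.predict([[x, y1]])[0] + sigma * rng.normal()
        trace.append((y1, y2))
    return np.array(trace[Tb:]).mean(axis=0)

est = gibbs(1.0)
```

Note the cost implied above: the chain must be traversed T times per instance, with T much larger than the burn-in Tb, whereas one greedy pass suffices for a standard chain.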

8.6 Other areas of application

Regressor chains can be applied to any problem involving multiple continuous output variables. Time series forecasting is a natural application, e.g., [29]. Previously, classifier chains were applied to a discretized version of route prediction in urban traffic modeling [33]; an application of regressor chains in continuous space would perhaps be even more natural.

One interesting similarity is with the application of Monte Carlo tree search to continuous action spaces in reinforcement learning (a good example is given in [42]). Forming a tree to search on top of a continuous space is part of the problem we tackle. We notice that the authors of that work use a kernel regression approach, which is what we have empirically found to work well in this work on regressor chains. Unlike our study, however, there is no use of a chain in the sense of a cascade, which is the main focus of this work. Clearly, investigation of the use of regressor chains in such related areas could be promising.

9 Conclusions

In this work we have looked at extending the approach of chaining models to the regression context. This was motivated by the fact that, although the idea had attracted initial interest and found potential applicability to multi-output regression tasks, the behaviour of regressor chains was found not to emulate that of classifier chains, thus warranting this in-depth study, to identify, unravel, and overcome the weaknesses of this application.

We identified, discussed, and dealt with several important issues in this regard:

  • Regressor chains are only useful when non-linearities are present in the base models; this applies equally to the standard multi-output case and the case where attribute values arrive in sequence.

  • It is difficult to justify regressor chains for MSE estimation, unless paying particular attention to the previous point.

  • Error propagation (progressive divergence from the true path estimate) can be much more severe in the regression case due to the unbounded output space.

  • Greedy inference in regressor chains, on top of the above points, is unable to provide a useful representation or interpretation of the output space.

To our knowledge, this is the first work which looks in depth at regressor chains in the probabilistic context. We surveyed a number of applicable methods, and developed our own based on sequential Monte Carlo methodology, particularly crafted to tackle the identified issues. We use non-linear base models, as particular to the ‘chains’ methodology, where the base models themselves can be considered a flexible hyperparameter guided by aspects of the problem domain rather than hardwired by design; this, we argue, should not be seen as a ‘nuisance’ parameter but as an attractive feature for adaptation across different areas. We provide a mode-seeking mechanism, rather than only (but as well as, potentially) an estimator for minimum squared error. Error propagation is controlled by a resampling scheme. And interpretation is provided by a tree generated from the point cloud of sample paths, which also offers a description of the underlying density.

Probabilistic regressor chains have peculiarities that distinguish them from other models, such as the full cascade involving all outputs and the hyper-parametrization of the base models. However, to properly place regressor chains in the context of the wider literature, we also identified and discussed connections to a range of related work which had not previously been noticed.

We argue that our analysis not only facilitates understanding the performance of the regressor chains that we develop, but also throws more light on the performance considerations of classifier chains, modifications of which are still under active development and recent publication.

Our Monte Carlo methods appear promising, especially for tasks where path explainability (i.e., different hypotheses regarding the path taken through output space) is more important than outputting the result of a minimum mean squared error estimator. Such tasks include medical research, anomaly detection, de-noising, and missing-value imputation, providing plenty of application lines along which to develop this work further and build additional connections with related areas of the literature. In future work we also intend to explore areas (such as dynamic chains and recurrent models) that are being developed in parallel in the classification context, to approach themes relating to chain order and structure.

References

  • [1] David Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012.
  • [2] H. Bijl, T. B. Schon, J. W. van Wingerden, and M. Verhaegen. System identification through online sparse Gaussian Process regression with input noise. In arXiv:1601.08068, pages 1–25, 2016.
  • [3] Hanen Borchani, Gherardo Varando, Concha Bielza, and Pedro Larrañaga. A survey on multi-output regression. Wiley Int. Rev. Data Min. and Knowl. Disc., 5(5):216–233, September 2015.
  • [4] Monica F. Bugallo, Luca Martino, and Jukka Corander. Adaptive importance sampling in signal processing. Digital Signal Processing, 47(Supplement C):36 – 49, 2015.
  • [5] Moustapha Cisse, Maruan Al-Shedivat, and Samy Bengio. Adios: Architectures deep in output space. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pages 2770–2779, New York, New York, USA, 20–22 Jun 2016. PMLR.
  • [6] Patrick Dallaire, Camille Besse, and Brahim Chaib-draa. An approximate inference with Gaussian Process to latent functions from uncertain data. Neurocomputing, 74(11):1945 – 1955, 2011.
  • [7] Patrick Dallaire, Camille Besse, and Brahim Chaib-draa. Deep Gaussian Processes. Proceedings of the Sixteenth International Workshop on Artificial Intelligence and Statistics (AISTATS), pages 207–215, 2013.
  • [8] M. P. Deisenroth, M. F. Huber, and U. D. Hanebeck. Analytic moment-based Gaussian process filtering. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 225–232, 2009.
  • [9] P. Dellaportas and D. A. Stephens. Bayesian analysis of errors-in-variables regression models. Biometrics, 51(3):1085–1095, 1995.
  • [10] Krzysztof Dembczyński, Weiwei Cheng, and Eyke Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML ’10: 27th International Conference on Machine Learning, pages 279–286, Haifa, Israel, June 2010. Omnipress.
  • [11] Krzysztof Dembczyński, Willem Waegeman, Weiwei Cheng, and Eyke Hüllermeier. On label dependence and loss minimization in multi-label classification. Mach. Learn., 88(1-2):5–45, July 2012.
  • [12] Krzysztof Dembczyński, Willem Waegeman, and Eyke Hüllermeier. An analysis of chaining in multi-label classification. In ECAI: European Conference of Artificial Intelligence, volume 242, pages 294–299. IOS Press, 2012.
  • [13] P. M. Djuric, J. H. Kotecha, Jianqui Zhang, Yufei Huang, T. Ghirmai, M. F. Bugallo, and J. Miguez. Particle filtering. IEEE Signal Processing Magazine, 20(5):19–38, Sept 2003.
  • [14] V. Elvira, L. Martino, D. Luengo, and M. F. Bugallo. Heretical multiple importance sampling. IEEE Signal Processing Letters, 23(10):1474–1478, 2016.
  • [15] Yuhong Guo and Suicheng Gu. Multi-label classification using conditional dependency networks. In IJCAI ’11: 24th International Conference on Artificial Intelligence, pages 1300–1305. IJCAI/AAAI, 2011.
  • [16] Esmaeil Hadavandi, Jamal Shahrabi, and Shahaboddin Shamshirband. A novel boosted-neural network ensemble for modeling multi-target regression problems. Engineering Applications of Artificial Intelligence, 45:204 – 219, 2015.
  • [17] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA, 2001.
  • [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
  • [19] J. E. Johnson, V. Laparra, and G. Camps-Valls. A derivative-based variance estimate for Gaussian Process regression. Submitted, pages 1–20, 2018.
  • [20] Xie Jun, Yu Lu, Zhu Lei, and Duan Guolun. Conditional entropy based classifier chains for multi-label classification. Neurocomputing, 335:185 – 194, 2019.
  • [21] Stéphane Lathuilière, Pablo Mesejo, Xavier Alameda-Pineda, and Radu Horaud. A comprehensive analysis of deep regression. CoRR, abs/1803.08450, 2018.
  • [22] Miguel Lázaro-Gredilla. Bayesian Warped Gaussian Processes. In Advances in Neural Information Processing Systems 25, pages 1619–1627. 2012.
  • [23] L. Martino, V. Elvira, and F. Louzada. Effective sample size for importance sampling based on discrepancy measures. Signal Processing, 131:386 – 401, 2017.
  • [24] Luca Martino, Victor Elvira, and Gustau Camps-Valls. Group importance sampling for particle filtering and MCMC. Digital Signal Processing, 82:133 – 151, 2018.
  • [25] Luca Martino, Jesse Read, Victor Elvira, and Francisco Louzada. Cooperative parallel particle filters for online model selection and applications to urban mobility. Digital Signal Processing, 60(January):172–185, 2017.
  • [26] Deiner Mena, Elena Montañés, José Ramón Quevedo, and Juan José del Coz. Using A* for inference in probabilistic classifier chains. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 3707–3713, 2015.
  • [27] Jacob Montiel, Jesse Read, Albert Bifet, and Talel Abdessalem. Scikit-MultiFlow: A multi-output streaming framework. Journal of Machine Learning Research, 18, 2018.
  • [28] Jinseok Nam, Eneldo Loza Mencía, Hyunwoo J Kim, and Johannes Fürnkranz. Maximizing subset accuracy with recurrent neural networks in multi-label classification. In Advances in Neural Information Processing Systems 30, pages 5413–5423, 2017.
  • [29] J. Quiñonero-Candela, A. Girard, and C. Rasmussen. Prediction at an uncertain input for Gaussian Processes and Relevance Vector Machines application to multiple-step ahead time-series forecasting. Technical Report, no. 1, pages 1–14, 2003.
  • [30] Jesse Read, Concha Bielza, and Pedro Larrañaga. Multi-dimensional classification with super-classes. Transactions on Knowledge and Data Engineering, 26(7):1720–1733, 2014.
  • [31] Jesse Read and Jaakko Hollmén. Multi-label classification using labels as hidden nodes. Technical Report 1503.09022v3, ArXiv.org, 2017. ArXiv.
  • [32] Jesse Read, Luca Martino, and Jaakko Hollmén. Multi-label methods for prediction with sequential data. Pattern Recognition, 63(Supplement C):45 – 55, 2017.
  • [33] Jesse Read, Luca Martino, and Jaakko Hollmén. Multi-label methods for prediction with sequential data. Pattern Recognition, 63(March):45–55, 2017.
  • [34] Jesse Read, Luca Martino, and David Luengo. Efficient Monte Carlo methods for multi-dimensional learning with classifier chains. Pattern Recognition, 47(3):1535–1546, 2014.
  • [35] Jesse Read, Luca Martino, Pablo M. Olmos, and David Luengo. Scalable multi-output label prediction: From classifier chains to classifier trellises. Pattern Recognition, 48(6):2096 – 2109, 2015.
  • [36] Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. Machine Learning, 85(3):333–359, 2011.
  • [37] E. Snelson, Z. Ghahramani, and C. Rasmussen. Warped Gaussian Processes. In Advances in Neural Information Processing Systems 16, pages 1–8. 2003.
  • [38] Eleftherios Spyromitros-Xioufis, Grigorios Tsoumakas, William Groves, and Ioannis Vlahavas. Multi-target regression via input space expansion: treating targets as inputs. Machine Learning, pages 1–44, 2016.
  • [39] Paweł Teisseyre. CCnet: Joint multi-label classification and feature selection using classifier chains and elastic net regularization. Neurocomputing, 235:98–111, 2017.
  • [40] Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. Random k-labelsets for multi-label classification. IEEE Transactions on Knowledge and Data Engineering, 23(7):1079–1089, 2011.
  • [41] Willem Waegeman, Krzysztof Dembczynski, and Eyke Huellermeier. Multi-target prediction: A unifying view on problems and methods. page ArXiV, 09 2018.
  • [42] Timothy Yee, Viliam Lisy, and Michael Bowling. Monte carlo tree search in continuous action spaces with execution uncertainty. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI’16, pages 690–696. AAAI Press, 2016.

Appendix A Parallel Metropolis-Hastings (MH) chains

For completeness, we elaborate the algorithm based on [4, 14], starting from the equally weighted set of samples obtained at step 2a of code:MSMC. After $T$ iterations of the following procedure, illustrated for the $n$-th particle, we obtain the final set of samples:

  • For $\tau = 1, \ldots, T$:

    1. Draw $y' \sim q(y \mid y^{(n)}_{\tau-1})$.

    2. Setting for simplicity $y = y^{(n)}_{\tau-1}$, accept the movement (i.e., set $y^{(n)}_{\tau} = y'$) with probability

      (26) $\alpha = \min\left(1, \frac{\pi(y')\, q(y \mid y')}{\pi(y)\, q(y' \mid y)}\right)$

      Otherwise, with probability $1 - \alpha$, set $y^{(n)}_{\tau} = y$.

Therefore the final set of samples for an iteration is $\{y^{(n)}_{T}\}_{n=1}^{N}$.
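The MH refinement above can be sketched on a single particle as follows. The bimodal target pi is our illustrative stand-in for the chain posterior, and the random-walk proposal is symmetric, so the q ratio in the acceptance probability cancels.

```python
import numpy as np

rng = np.random.default_rng(8)

def pi(y):
    # Unnormalized bimodal target, standing in for the chain posterior.
    return np.exp(-0.5 * (y - 2.0) ** 2) + np.exp(-0.5 * (y + 2.0) ** 2)

def mh_refine(y, T=2000, step=1.0):
    samples = []
    for _ in range(T):
        y_prop = y + step * rng.normal()        # draw from the (symmetric) proposal q
        alpha = min(1.0, pi(y_prop) / pi(y))    # acceptance probability, Eq. (26)
        if rng.uniform() < alpha:               # accept with probability alpha...
            y = y_prop
        samples.append(y)                       # ...otherwise keep the old sample
    return np.array(samples)

chain = mh_refine(0.0)
```

Running one such chain per particle in parallel refines the whole particle set towards the target while preserving its diversity across modes.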