1 Introduction
Problems of predicting structured output span a wide range of fields, including natural language understanding, speech processing, bioinformatics, image processing, and computer vision, amongst others. Structured learning or prediction has been approached with many different models [1, 5, 8, 9, 12], such as graphical models [7], large margin-based approaches [17], and conditional restricted Boltzmann machines [11]. Compared with structured label classification, structured output regression is a less explored topic in both the machine learning and the data mining communities. Aiming at regression tasks, methods such as continuous conditional random fields [13] have also been successfully developed. Nevertheless, most of these previous methods explicitly exploit particular, known structures in the output space, which limits their applicability.
The past decade has seen great advances of deep neural networks in modeling high-order, nonlinear interactions. Our work aims to extend this success to constructing nonlinear functional mappings from high-order structured input to high-order structured output. To this end, we propose a deep High-order Neural Network with Structured Output (HNNSO). The upper layer of the network implicitly focuses on modeling interactions among the outputs, with a high-order autoencoder that aims to recover correlations in the predicted outputs; the lower layer captures high-order input structures using bilinear tensor products; and the middle layer constructs a mapping from input to output. In particular, we introduce a discriminative pretraining approach to guide the focus of each of these layers.
To the best of our knowledge, our model is the first attempt to construct deep learning schemes for structured output regression with high-order interactions. We evaluate and analyze the proposed model on multiple datasets: one from natural language understanding and two from image processing. We show state-of-the-art predictive performance of our proposed strategy in comparison to other competitive methods.
2 High-Order Neural Models with Structured Output
We regard a nonlinear mapping from structured input to structured output as consisting of three integral and complementary components in a high-order neural network. We name the model the High-order Neural Network with Structured Output (HNNSO). Specifically, given an input matrix X and an output matrix Y, we aim to model the underlying mapping between the inputs X and the outputs Y. Figure 1 presents a specific implementation of HNNSO. Note that other variants are allowed; for example, the dotted rectangle may implement multiple layers.
The top layer network is a high-order denoising autoencoder (the green portion of Figure 1). In general, an autoencoder is used to denoise input data. In our model, we use it to denoise the predicted output from the lower layers, so as to capture the interplay among the outputs. Similar to the strategy employed by Memisevic in [10], during training we randomly corrupt a portion of the gold labels, and the perturbed data are then fed to the autoencoder. The hidden unit activations of the autoencoder are first calculated by combining two versions of such corrupted gold labels, using a tensor to capture their multiplicative interactions. Subsequently, the hidden layer is used to gate the top tensor to recover the true labels from the perturbed gold labels. As a result, the corrupted data force the encoder to reconstruct the true labels, so that the tensors and the hidden layer encode the covariance patterns among the outputs during reconstruction.
The bottom layer (red portion of Figure 1) describes a bilinear tensor-based network that multiplicatively relates input vectors, in which a third-order tensor accumulates evidence from a set of quadratic functions of the input vectors. In our implementation, as in [16], each input vector is a concatenation of two vectors. Unlike [16], we here concatenate two dependent vectors: the input vector x and its nonlinear, first-order projection h. Hence, the model explores the high-order multiplicative interplays not just among the elements of x but also between x and its nonlinear projection h.
We also leverage discriminative pretraining to help construct the functional mapping from structured input to structured output, guiding HNNSO to model the interdependency among the outputs, among the inputs, and between inputs and outputs, with different layers of the network focusing on different types of structure. Specifically, we pretrain the networks layer by layer in a bottom-up fashion, using the gold output labels. The input to each layer above the first is the output of the layer right below it, except for the top layer, where the corrupted gold output labels are used as input. In this way, the bottom layer can focus on capturing the input structures, and the top layer can concentrate on encoding complex interaction patterns among the outputs. Importantly, the pretraining also ensures that when fine-tuning the whole network (discussed below), the inputs to the autoencoder have distributions and structured patterns close to those of the true labels (as shown in the experimental section). Consequently, pretraining lets the autoencoder see inputs with similar structures during both learning and prediction. Finally, we perform fine-tuning to simultaneously optimize all the parameters of the three layers. Unlike in pretraining, we use the uncorrupted outputs of the second layer as the input to the autoencoder.
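The bottom-up pretraining schedule described above can be sketched as follows. This is a minimal, illustrative Python sketch: the least-squares `LinearLayer` and the function names are stand-ins invented here for illustration, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearLayer:
    """A minimal least-squares layer, used only to make the sketch runnable."""
    def fit(self, X, Y):
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    def predict(self, X):
        return X @ self.W

def pretrain(layers, X, Y, corrupt_prob=0.2):
    """Bottom-up discriminative pretraining: every layer is trained against
    the gold labels Y; the top layer (the autoencoder stand-in) instead
    receives randomly corrupted gold labels as its input."""
    inputs = X
    for i, layer in enumerate(layers):
        if i == len(layers) - 1:  # top layer: corrupted gold labels as input
            mask = (rng.random(Y.shape) >= corrupt_prob).astype(float)
            inputs = Y * mask
        layer.fit(inputs, Y)
        inputs = layer.predict(inputs)  # feed outputs to the layer above
    return inputs

X = rng.standard_normal((50, 15))
Y = X @ rng.standard_normal((15, 10))  # toy linearly related targets
preds = pretrain([LinearLayer(), LinearLayer(), LinearLayer()], X, Y)
```

After this schedule, fine-tuning would optimize all layers jointly, feeding the top layer the uncorrupted lower-layer outputs as described above.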
Model Formulation and Learning. As illustrated in the red portion of Figure 1, HNNSO first calculates quadratic interactions among the input and its nonlinear transformation. In detail, it first computes the hidden vector h from the provided input x. For simplicity, we apply a standard linear neural network layer (with weight matrix W and bias term b) followed by an element-wise nonlinearity f (e.g., tanh): h = f(Wx + b). Next, denoting by v = [x; h] the concatenation of the input with its projection, the first-layer outputs are calculated as:

y^{(1)} = f( v^T T^{(1)} v + W^{(1)} v + b^{(1)} )    (1)
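The first layer's bilinear tensor product can be sketched as below, assuming (as an illustration, not the paper's exact shapes) a tensor with one slice per output unit, so that each output accumulates a quadratic form in the concatenated vector plus a standard linear term.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_layer(x, W0, b0, T, W1, b1):
    """Bilinear tensor layer: output k is v^T T[k] v plus a linear term,
    where v concatenates the input with its nonlinear projection."""
    h = np.tanh(W0 @ x + b0)                      # first-order nonlinear projection
    v = np.concatenate([x, h])                    # [x; h]
    bilinear = np.einsum('i,kij,j->k', v, T, v)   # one quadratic form per output
    return np.tanh(bilinear + W1 @ v + b1)

d, dh, dout = 15, 8, 10                           # illustrative sizes
x = rng.standard_normal(d)
W0 = rng.standard_normal((dh, d)); b0 = np.zeros(dh)
T = rng.standard_normal((dout, d + dh, d + dh)) * 0.01
W1 = rng.standard_normal((dout, d + dh)) * 0.1; b1 = np.zeros(dout)
y1 = first_layer(x, W0, b0, T, W1, b1)
```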
The term W^{(1)} v + b^{(1)} here is the standard linear neural network layer; the additional term is a bilinear tensor product with a third-order tensor T^{(1)}, which relates two copies of the vector v, each concatenating the input x with the learned hidden vector h. The computation of the second hidden layer is similar to that of the first. When learning the denoising autoencoder layer (green portion of Figure 1), the encoder takes two independently corrupted copies of its input, denoted \tilde{y}, and feeds their pairwise products into the hidden tensor, i.e., the encoding tensor T^{enc}:

h_e = f( \tilde{y}^T T^{enc} \tilde{y} )    (2)
Next, a decoding tensor T^{dec} is used to multiplicatively combine the hidden vector h_e with the input vector, so as to reconstruct the final output \hat{y}. Through minimizing the reconstruction error, the tensors are forced to learn the covariance patterns within the final output:

\hat{y}_j = f( \sum_{i,k} T^{dec}_{kij} \tilde{y}_i (h_e)_k )    (3)
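The tied gated encoder/decoder pair can be sketched as follows; the shapes (K hidden units, D output variables) and the sigmoid nonlinearity are illustrative assumptions, following the gated-autoencoder style of [10].

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(y_a, y_b, T):
    """Pairwise products of two (corrupted) label copies, pooled by the
    encoding tensor T of shape (K, D, D)."""
    return 1.0 / (1.0 + np.exp(-np.einsum('i,kij,j->k', y_a, T, y_b)))

def decode(y_a, h, T):
    """The same (tied) tensor gates the hidden units with one input copy
    to reconstruct the D output variables."""
    return np.einsum('i,kij,k->j', y_a, T, h)

D, K = 10, 6
T = rng.standard_normal((K, D, D)) * 0.1
y = rng.standard_normal(D)                       # gold label vector
corrupt = lambda v: v * (rng.random(D) >= 0.2)   # independent corruption
h = encode(corrupt(y), corrupt(y), T)
y_hat = decode(corrupt(y), h, T)                 # reconstruction of y
```

Minimizing the reconstruction error between y_hat and y then drives the tensor to capture covariance patterns among the outputs.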
In our study, we use an autoencoder with tied parameters for convenience; that is, the same tensor is used for encoding and decoding (T^{enc} = T^{dec}). Also, denoising is applied to prevent an overcomplete hidden layer from learning the trivial identity mapping between input and output, and in the denoising process the two copies of the input are corrupted independently. In our implementation, all model parameters can be learned by gradient-based optimization. We minimize, over all N training instances, the sum-squared error (note: cross-entropy would be used for classification tasks) between the output vector \hat{y}_i on the top layer and the true label vector t_i:

J = \sum_{i=1}^{N} || \hat{y}_i - t_i ||^2    (4)
Also, we employ standard L2 regularization for all the parameters, weighted by a hyperparameter λ. For our non-convex objective function, we deploy AdaGrad [3] to search for the optimal model parameters.
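A minimal AdaGrad loop on a regularized sum-squared loss sketches the optimization just described; the toy linear model and the hyperparameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def adagrad_step(theta, grad, cache, lr=0.1, eps=1e-8):
    """One AdaGrad update: per-parameter steps scaled by the root of the
    accumulated squared gradients."""
    cache += grad ** 2
    theta -= lr * grad / (np.sqrt(cache) + eps)
    return theta, cache

# Toy problem: minimize ||Xw - t||^2 / N + lam * ||w||^2 with AdaGrad.
X = rng.standard_normal((40, 5))
t = X @ rng.standard_normal(5)
w, cache, lam = np.zeros(5), np.zeros(5), 1e-4
for _ in range(500):
    grad = 2 * X.T @ (X @ w - t) / len(t) + 2 * lam * w
    w, cache = adagrad_step(w, grad, cache)
```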
3 Experiments
Baselines
We compared HNNSO’s predictive performance, in terms of Root Mean Square Error (RMSE), with six regression models: (1) Multi-Objective Decision Trees (MODTs) [2, 6]; (2) a collection of Support Vector Regression models (denoted SVMReg) [15] with RBF kernels, one per target attribute; (3) a traditional neural network, i.e., a Multi-Layer Perceptron (MLP) with one hidden layer and multiple output nodes; (4) the so-called multivariate multiple regression (denoted MultivariateReg), which takes into account the correlations among the multiple targets using a matrix computation; (5) an approach that stacks the MultivariateReg on top of the MLP (denoted MLPMultivariateReg); and (6) Gaussian Conditional Random Fields (GaussianCRF) [4, 13, 14], in which the outputs of an MLP were used as the CRF’s node features, and the square of the distance between two target variables was modeled by an edge feature. In our experiments, all the parameters of these baselines were carefully tuned.

Table 1: RMSE and relative error reduction (of HNNSO over each method) on the SSTB, MNIST, and USPS datasets.

| Methods            | SSTB RMSE | rel. err. red. | MNIST RMSE | rel. err. red. | USPS RMSE | rel. err. red. |
|--------------------|-----------|----------------|------------|----------------|-----------|----------------|
| MODTs              | 0.0567    | 34.2%          | 0.0739     | 33.1%          | 0.6487    | 13.8%          |
| SVMReg             | 0.0452    | 17.4%          | 0.0602     | 17.9%          | 0.5977    | 6.4%           |
| MLP                | 0.0721    | 48.2%          | 0.0800     | 38.2%          | 0.6683    | 16.3%          |
| MultivariateReg    | 0.0614    | 39.2%          | 0.1097     | 54.9%          | 0.6169    | 9.3%           |
| MLPMultivariateReg | 0.0705    | 47.0%          | 0.0791     | 37.5%          | 0.6059    | 7.7%           |
| GaussianCRF        | 0.0706    | 47.1%          | 0.0800     | 38.2%          | 0.6047    | 7.5%           |
| HNNSO              | 0.0373    | --             | 0.0494     | --             | 0.5591    | --             |
Datasets
There has recently been a surge of interest in using real-valued, low-dimensional vectors to represent words or sentences in natural language processing (NLP). Our first experiment was set up in this context. Specifically, we used the Stanford Sentiment Treebank (SSTB) dataset [16], which contains 11,855 movie review sentences. In the best embeddings reported in [16], each sentence is represented by a 25-dimensional vector. We obtained these vectors from http://nlp.stanford.edu/sentiment/ and used the first 15 elements to predict the last 10. Our second experiment used 10,000 examples from the test set of the MNIST digit database (http://yann.lecun.com/exdb/mnist/). On purpose, we employed PCA to reduce the dimensionality of the data to 30, resulting in 30 PCA components that are pairwise linearly independent. In this experiment, we used the first 15 dimensions to predict the last 15. Our last experiment used the USPS handwritten digit database (http://www.cs.nyu.edu/~roweis/data/usps_all.mat). We randomly sampled 1,100 images from the original dataset and used the first half of each image (128 pixels) to predict the second half (128 pixels).

General Performance
Table 1 presents the performance of the different regression models on the SSTB, MNIST, and USPS datasets. The results show that HNNSO achieves significantly lower RMSE scores than the other models. On all three datasets, the relative error reduction achieved by HNNSO over the other methods was at least 6.4% (ranging from 6.4% to 54.9%).
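The relative error reduction in Table 1 corresponds to (RMSE_baseline − RMSE_HNNSO) / RMSE_baseline; for example, the reported SSTB RMSEs for SVMReg and HNNSO give roughly the tabulated 17.4% (small discrepancies can arise because the table's RMSEs are rounded).

```python
def rel_err_reduction(rmse_base, rmse_model):
    """Fractional RMSE reduction of the model relative to a baseline."""
    return (rmse_base - rmse_model) / rmse_base

# SSTB column of Table 1: SVMReg (0.0452) vs. HNNSO (0.0373)
r = rel_err_reduction(0.0452, 0.0373)   # roughly 0.175, i.e. about 17.4-17.5%
```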
Detailed Analysis
We use the SSTB dataset to gain some insights into HNNSO’s modeling behavior. Performance-wise, we have shown above that the HNNSO model achieved an RMSE score of 0.0373 on the SSTB data. Without pretraining, the RMSE increases by a relative 9.4%.
Figure 2 further depicts the distribution of the first output variable of the data. The figure indicates that the distribution of the input with pretraining (middle), compared to that without pretraining (left), is closer to the distribution of the true labels (right).
Such structured patterns are important for the encoder as discussed earlier.
In Figure 3, we also show the input (gray boxes) and output (light blue) of the autoencoder in HNNSO as well as the true labels (dark blue) on the SSTB data. Each box in each color group represents one of the ten output variables in the same order. Figure 3 shows that the patterns of the light blue boxes are similar to those of the dark blue boxes. This suggests that the encoder is able to guide the output predictions to follow structured patterns similar to those of the true labels.
In Figure 4, we further depict the errors made by HNNSO and SVMReg (the second-best approach). Each box in each color group represents the error, calculated as the predicted value minus the true value, on each of the ten output variables in the same order. Figure 4 suggests that the errors made by HNNSO have narrow and consistent variances across the ten output targets. In contrast, the variances of the errors obtained by SVMReg are clearly larger across the ten targets, suggesting that SVMReg makes good predictions on some output targets without considering the interactions with the other targets.
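The per-target error-variance comparison can be reproduced in outline as follows; the error matrices here are synthetic stand-ins (the real SSTB errors from Figure 4 are not reproduced), so only the shape of the analysis matters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for per-example errors on the 10 output targets:
# one model with an even error spread, one with uneven per-target spreads.
err_hnnso = rng.normal(0.0, 0.02, size=(200, 10))
err_svmreg = rng.normal(0.0, 0.02 + 0.03 * rng.random(10), size=(200, 10))

# Variance of the error on each target; an even profile across targets
# suggests the model treats the outputs jointly rather than favoring a few.
var_hnnso = err_hnnso.var(axis=0)
var_svmreg = err_svmreg.var(axis=0)
spread = lambda v: v.max() - v.min()   # how uneven the variances are
```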
Visualization
Figure 5 plots three digits from the USPS data, including the true images (right) and their predictions made by HNNSO (left) and MLP (middle). The figure shows that HNNSO was able to recover the images well. In contrast, MLP yielded some missing pixels on the right halves of the images.
4 Conclusion
We propose a deep high-order neural network to construct nonlinear functional mappings from structured input to structured output for regression. We aim to jointly achieve this goal with complementary components that focus on capturing different types of interdependency. Experimental results on three benchmark datasets show the advantage of our model over several competing approaches. In the future, we plan to explore our strategy with a hinge loss for structured label classification, with applications in image labeling and scene understanding.
References
 [1] G. H. Bakir, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan. Predicting Structured Data (Neural Information Processing). The MIT Press, 2007.
 [2] H. Blockeel, L. D. Raedt, and J. Ramon. Top-down induction of clustering trees. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML ’98, pages 55–63, 1998.
 [3] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
 [4] H. Guo. Modeling short-term energy load with continuous conditional random fields. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML-PKDD 2013, Prague, Czech Republic, September 23-27, 2013, pages 433–448, 2013.
 [5] H. Guo and S. Létourneau. Iterative classification for multiple target attributes. J. Intell. Inf. Syst., 40(2):283–305, 2013.
 [6] D. Kocev, C. Vens, J. Struyf, and S. Džeroski. Ensembles of multi-objective decision trees. In ECML’07, pages 624–631, 2007.
 [7] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques  Adaptive Computation and Machine Learning. The MIT Press, 2009.
 [8] X. Li, F. Zhao, and Y. Guo. Multi-label image classification with a probabilistic label enhancement model. In UAI, 2014.
 [9] Y. Li and R. Zemel. High order regularization for semi-supervised learning of structured output problems. In Proceedings of the Thirty-First International Conference on Machine Learning, ICML ’14, 2014.
 [10] R. Memisevic. Gradient-based learning of higher-order image features. In Proceedings of the International Conference on Computer Vision, 2011.
 [11] V. Mnih, H. Larochelle, and G. E. Hinton. Conditional restricted Boltzmann machines for structured output prediction. CoRR, abs/1202.3748, 2012.
 [12] S. Nowozin and C. H. Lampert. Structured learning and prediction in computer vision. Found. Trends. Comput. Graph. Vis., 6(4):185–365, Mar. 2011.
 [13] T. Qin, T. Liu, X. Zhang, D. Wang, and H. Li. Global ranking using continuous conditional random fields. In Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 1281–1288, 2008.

 [14] V. Radosavljevic, S. Vucetic, and Z. Obradovic. Continuous conditional random fields for regression in remote sensing. In Proceedings of the 2010 Conference on ECAI 2010: 19th European Conference on Artificial Intelligence, pages 809–814, 2010.
 [15] A. J. Smola and B. Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199–222, Aug. 2004.
 [16] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. P. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
 [17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453–1484, Dec. 2005.