1 Introduction
Multitask learning is concerned with simultaneously learning multiple prediction tasks that are related to one another. It has been frequently observed in the recent literature that, when there are relations between the tasks, it can be advantageous to learn them simultaneously instead of learning each task separately [7, 11]. A major challenge in multitask learning is how to selectively screen the sharing of information so that unrelated tasks do not end up influencing each other. Sharing information between two unrelated tasks can worsen the performance of both tasks.
Multitask learning thus plays an important role in a variety of practical situations, including: the prediction of user ratings for unseen items based on rating information from related users [32], the simultaneous forecasting of many related financial indicators [19], and the categorization of genes associated with a genetic disorder by exploiting genes associated with related diseases [15].
There is a vast literature on multitask learning. The most important lines of work include: regularizers biasing the solution towards functions that lie geometrically close to each other in an RKHS [12, 11], or that lie in a low-dimensional subspace [3, 26]; structural risk minimization methods, where multitask relations are established by enforcing the predictive functions for the different tasks to belong to the same hypothesis set [2]; spectral [10, 4] and cluster-based [23, 38] assumptions on the task relatedness; Bayesian approaches where task parameters share a common prior [39, 9, 42]; methods allowing a small number of outlier tasks that are not related to any other task [40, 8]; and approaches attempting to learn the full task covariance matrix [41, 20]. To our knowledge, no multitask approaches have been proposed for Hopfield networks (HNs) [21], whereas several studies have investigated HNs as single-task classifiers
[24, 27, 17, 22]. Indeed, HNs are efficient local optimizers, using the local minima of the energy function determined by the network dynamics as a proxy for node classification.
In this paper we develop HoMTask, Hopfield multitask Network, an approach to multitask learning based on a family of parametric HNs. Our approach builds on COSNet [6]
, a single-task HN proposed to classify instances in a transductive semi-supervised scenario with unbalanced data. A main feature of HoMTask is that the energy function is extended to all tasks to be learned and to all instances (labeled and unlabeled), so that the model parameters are learned and the node labels inferred simultaneously for all tasks. The resulting network can be seen as a collection of single-task HNs, appropriately interconnected by exploiting the task relatedness. In particular, each task is associated with a pair of parameters determining the neuron activation values and thresholds, and we prove that in the optimal case the learning procedure moves the multitask state of the labeled subnetwork to a minimum of the energy. This is an important result, which allows the model to better fit the input data, since the classification of unlabeled nodes is based upon a minimum of the unlabeled subnetwork. Another interesting feature of HoMTask is that the complexity of the learning procedure increases linearly with the number of tasks, thus allowing the model to scale nicely to settings with numerous tasks. Finally, a proof of convergence of the multitask dynamics to a minimum of the energy is also supplied.
Experiments on a real-world classification problem have shown that HoMTask remarkably outperforms single-task HNs, and has competitive performance with state-of-the-art graph-based methods proposed in the same context.
2 Methods
2.1 Problem definition
The problem input is composed of an undirected weighted graph , where is the set of instances and the non-negative symmetric matrix denotes the degree of functional similarity between each pair of nodes and . A set of binary learning tasks over is given, where for every task , is labeled with . The labeling is known only for the subset , whereas it is unknown for . Moreover, the subsets of vertices labeled with (positive) and (negative) are denoted by and , respectively, for each task . Without loss of generality, we assume and . As a further assumption, the task labelings are highly unbalanced, that is , for each . In the multitask scenario, a symmetric matrix is also given, where is an index of relatedness/similarity between the tasks and , and for each .
The aim is to determine a bipartition of the vertices in for each task by jointly learning the tasks in , on the basis of the prior information encoded in and .
In the following, the bold font is adopted to denote vectors and matrices, and the calligraphic font to denote multitask Hopfield networks. Moreover, we denote by
and the submatrices of relative to the nodes in and , respectively.
2.2 Previous single-task modelling
In this section we recall the basic model proposed in [6, 13] for single-task modeling, named COSNet, which has inspired the multitask setting presented here. Essentially, it relies on a parametric family of the Hopfield model [21], where the network parameters are learned to cope with the label imbalance and the network equilibrium point is interpreted to classify the unlabeled nodes. A COSNet network over is the triple , where denotes the neuron activation threshold (the same for all neurons), and is a parameter which determines the two neuron activation (state) values . The model parameters are appropriately learned in order to allow the algorithm to counterbalance the large imbalance towards negatives (see [13]). The initial state of a neuron is set to if is positive, to if is negative, and to when is unlabeled. The network evolves according to the following asynchronous dynamics:
(1) 
where is the state of neuron at time . At each time , the vector represents the state of the whole network. The network admits a state function named energy function:
(2) 
The convergence properties of the dynamics (1) depend on the structure of the weight matrix and on the rule by which the nodes are updated. In particular, if the matrix is symmetric and the dynamics is asynchronous, it has been proved that the network converges to a stable state in polynomial time. As a major result, it has been shown that (2) is a Lyapunov function for Hopfield dynamical systems with asynchronous dynamics, i.e., for each , and there exists a time such that , for all . Moreover, the reached fixed point is a local minimum of (2). A neuron in is then classified as positive if , and as negative otherwise.
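The single-task dynamics just recalled can be sketched in a few lines of Python. This is only a hedged illustration, not the authors' implementation: the function and variable names (`hopfield_dynamics`, `W`, `alpha`, `gamma`) are our own, and we assume, as in COSNet [6, 13], that the two neuron activation values are sin(alpha) and -cos(alpha) with a single threshold gamma shared by all neurons.

```python
import numpy as np

# Sketch of a COSNet-like single-task Hopfield network (illustrative names):
# neuron states take values {sin(alpha), -cos(alpha)} and all neurons share
# a single activation threshold gamma.
def hopfield_dynamics(W, state, alpha, gamma, max_sweeps=100, seed=0):
    """Asynchronous dynamics: update neurons one at a time in random
    order until a full sweep changes nothing (an equilibrium state)."""
    rng = np.random.default_rng(seed)
    pos, neg = np.sin(alpha), -np.cos(alpha)
    n = len(state)
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):
            new = pos if W[i] @ state - gamma > 0 else neg
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            return state  # equilibrium reached
    return state

def energy(W, state, gamma):
    # E(x) = -1/2 sum_{i != j} W_ij x_i x_j + gamma * sum_i x_i
    # (W is assumed symmetric with zero diagonal)
    return -0.5 * state @ W @ state + gamma * state.sum()
```

Because each asynchronous update can only lower (or leave unchanged) the energy, the reached equilibrium is a local minimum, as recalled above.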
2.3 Multitask Hopfield networks
A Hopfield multitask network, named HoMTask, with neurons is a quadruple , where is the task similarity matrix, , . The pair of parameters is associated with task , for each : by leveraging the approach adopted in COSNet, for a task , the neuron activation values are , whereas is the neuron activation threshold (the same for every neuron). This formalization keeps the absolute activation value in the range , and calibrates it by suitably learning . For instance, in the presence of a large majority of negative neurons, a value of close to would prevent positive neurons from being overwhelmed during the network dynamics.
The state of the network is the matrix , where is the state vector corresponding to task . When simultaneously learning related tasks and , a usual approach is to require that the higher the relatedness , the closer the corresponding states. In our setting, this can be achieved by minimizing
for any pair of tasks , with . To this end, we incorporate a term proportional to into the energy of , thus obtaining:
(3) 
where , is the -dimensional vector of all ones, and is a real hyperparameter regulating the multitask contribution. Without the second additive term in brackets, energy (3) would be the summation of the energy functions of independent single-task Hopfield networks, as recalled in the previous section.
By using the equalities
where denotes the inner product, and given that
with , the energy (3) can be rewritten as:
(4) 
Informally, can be thought of as interconnected single-task parametric Hopfield networks on , all having the same topology given by . In addition, the multitask energy term introduces self-loops for all neurons, and a novel connection for each neuron with itself in the network , , whose weight is (see Fig. 1).
It is worth noting that Hopfield networks usually have no self-loops; nevertheless, we show that their presence does not affect the convergence properties of the overall network.
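The multitask energy described above can be illustrated concretely. The sketch below assumes the form suggested by the text: the sum of the single-task energies plus a coupling term proportional to the task similarity times the squared distance between task states. All names (`multitask_energy`, `gammas`, `S`, `lam`) are our own illustrative choices, not the paper's notation.

```python
import numpy as np

# Hedged sketch of the multitask energy (assumed form): m single-task
# Hopfield energies plus a coupling lam * S_kl * ||x_k - x_l||^2 for each
# pair of tasks, as described in the text.
def multitask_energy(W, X, gammas, S, lam):
    """X has one column (state vector) per task; S is the task similarity
    matrix; gammas are the per-task activation thresholds."""
    m = X.shape[1]
    e = 0.0
    for k in range(m):
        x = X[:, k]
        e += -0.5 * x @ W @ x + gammas[k] * x.sum()   # single-task energy
    for k in range(m):
        for l in range(k + 1, m):                     # multitask coupling
            d = X[:, k] - X[:, l]
            e += lam * S[k, l] * (d @ d)
    return e
```

With the coupling weight set to zero the energy reduces to the sum of independent single-task energies; for non-negative similarities the coupling only adds a non-negative penalty, which vanishes when related tasks have identical states.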
2.3.1 Update rule and dynamics convergence.
Starting from an initial state and adopting the asynchronous dynamics, in steps all neurons are updated in a random order according to the following update rule:
(5) 
where is the state of neuron in the th task at time , and
(6) 
is the input of the th neuron at time , whose terms are , , and . represents the single-task input (eq. 1), is the multitask contribution, and is the activation threshold for neuron , which also includes the ‘single-task’ threshold. The form of derives from the following theorem, stating that a HoMTask network preserves the convergence properties of a Hopfield network.
Theorem 2.1
Starting from any initial state, the dynamics (5) converges to an equilibrium state of the network, which is a local minimum of the energy (4).
Proof
Let be the energy contribution to (4) of the th neuron at time , with
Let be the energy variation after updating the state at time according to (5). Due to the symmetry of and , it follows
(7) 
Since the energy (4) is lower bounded, to complete the proof we need to show that after updating at time according to (5), it holds that . From (7), when , that is, when the neuron does not change state, it follows that . Accordingly, we need to investigate the remaining two cases: (a) and ; (b) and . In both cases it holds (by definition of ) that .

(a) , and, according to (5), . It follows .
(b) , and . Thus .
Thus, no neuron update increases the energy of the network, and, since the energy is lower bounded, there exists a time from which the update of any given neuron no longer changes the current state; this is the definition of an equilibrium state of the network, which makes a local minimum of (4). ∎
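The proof argument can be exercised numerically. The sketch below (our own illustration, with assumed names and the same assumed energy form as before) performs one asynchronous sweep in which each neuron is set to whichever of its two admissible values gives the lower local energy contribution; under this rule the global energy is non-increasing, mirroring the theorem.

```python
import numpy as np

# Illustration of the convergence argument: each asynchronous update sets
# neuron (i, k) to the admissible value with the lower local energy
# contribution, so the global energy never increases. All names assumed.
def total_energy(W, X, gammas, S, lam):
    m = X.shape[1]
    e = sum(-0.5 * X[:, k] @ W @ X[:, k] + gammas[k] * X[:, k].sum()
            for k in range(m))
    for k in range(m):
        for l in range(k + 1, m):
            d = X[:, k] - X[:, l]
            e += lam * S[k, l] * (d @ d)
    return e

def _local(W, X, gammas, S, lam, i, k, v):
    # energy terms that depend on x_{i,k} = v (W assumed zero-diagonal)
    e = -v * (W[i] @ X[:, k]) + gammas[k] * v
    for l in range(X.shape[1]):
        if l != k:
            e += lam * S[k, l] * (v - X[i, l]) ** 2
    return e

def async_sweep(W, X, vals, gammas, S, lam, seed=0):
    """One random-order sweep over all (neuron, task) pairs; vals[k] are
    the two admissible state values for task k."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    X = X.copy()
    for idx in rng.permutation(n * m):
        i, k = divmod(idx, m)
        best = min(vals[k], key=lambda v: _local(W, X, gammas, S, lam, i, k, v))
        X[i, k] = best
    return X
```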
2.3.2 Learning the model parameters.
Consider the subnetwork restricted to the labeled nodes ; its energy is:
(8) 
where with components belonging to the set , and .
The given bipartition for each task naturally induces the labeling , defined as follows:
which constitutes the known ‘multitask’ state .
Given as the known components of a final state of the multitask network , the purpose of the learning step is to compute the pair (, ) which makes a global minimizer of (3), the energy function associated with . Since our aim is also to keep the model scalable to large-sized data, and finding the global minimum of the energy requires time/memory-intensive procedures, we employ a learning procedure leading towards a fixed point of , which is in general a local minimum of (8). We provide the details of the learning procedure in the following, showing that this approach also helps to handle the label imbalance of each task.
Maximizing a costsensitive criterion.
When the parameters are fixed, each neuron has input
where, for each and , if , and otherwise. corresponds to of equation (6) restricted to ; to simplify the notation, in the following is thereby denoted by . Since the subnetwork is labeled, it is possible to define the sets of true positives , false negatives , and false positives , for every task . Following the approach proposed in [16], a set of membership functions can be defined, extending the crisp memberships introduced above:
(9) 
where is a suitable monotonically increasing membership function, for instance or , and is a real parameter. If is the Heaviside step function, we obtain the crisp memberships. For example, when or , if and , it follows that ; if and , it follows that and . The intermediate cases lead to .
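A hedged sketch of such fuzzy memberships, assuming a sigmoid membership function (the names `psi`, `memberships` and the parameter `h` are our own): a labeled neuron contributes fractionally to the true-positive, false-negative, or false-positive count according to its input, and a steep sigmoid recovers the crisp counts.

```python
import numpy as np

def psi(t, h=1.0):
    """Monotonically increasing sigmoid membership; as h grows it
    approaches the Heaviside step, recovering the crisp memberships."""
    return 1.0 / (1.0 + np.exp(-h * t))

def memberships(inputs, labels, h=1.0):
    """Fuzzy |TP|, |FN|, |FP| for one task: a positive with input z > 0 is
    (mostly) a true positive, a positive with z < 0 a false negative, and
    a negative with z > 0 a false positive."""
    pos, neg = labels > 0, labels <= 0
    tp = psi(inputs[pos], h).sum()
    fn = psi(-inputs[pos], h).sum()
    fp = psi(inputs[neg], h).sum()
    return tp, fn, fp
```

Note that psi(z) + psi(-z) = 1, so the fuzzy true-positive and false-negative counts of a task always sum to its number of positives.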
Such a generalization, in a different setting (single-task, multi-category), increased both the learning capability of the model and its classification performance [16]. By means of the membership functions (9), we can define the objective :
(10) 
where
and
is an appropriately chosen function, e.g. the mean, the minimum, or the harmonic mean. The property that must satisfy is that
By definition, (a generalization of the F-measure) is penalized more by the misclassification of a positive instance than by the misclassification of a negative one. By maximizing we can thereby cope with the label imbalance. To this end, the learning criterion adopted here for the model parameters is , which also leads to the following important result.
Theorem 2.2
If , then is an equilibrium state of the subnetwork .
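A criterion of this type can be sketched as follows. This is our own hedged illustration, assuming a per-task F-measure computed from the fuzzy TP/FN/FP counts and the harmonic mean as the aggregating function across tasks; the names (`f_measure`, `multitask_objective`) are not the paper's notation.

```python
# Hedged sketch of an F-measure-based multitask criterion: a per-task
# F-measure from (possibly fuzzy) TP/FN/FP counts, aggregated across
# tasks by the harmonic mean. All names are illustrative assumptions.
def f_measure(tp, fn, fp, eps=1e-12):
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)

def harmonic_mean(values, eps=1e-12):
    return len(values) / sum(1.0 / (v + eps) for v in values)

def multitask_objective(per_task_counts):
    """per_task_counts: list of (tp, fn, fp) triples, one per task."""
    return harmonic_mean([f_measure(*c) for c in per_task_counts])
```

Because the harmonic mean is dominated by its smallest argument, a single poorly classified task drags the objective down much more than it would drag down an arithmetic mean, which is consistent with the discussion of the choice of the aggregating function later in the paper.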
Learning procedure.
Denoting by the vector of model parameters, this procedure learns the values that maximize eq. (10), that is
Following the approach proposed in [16], we adopt the simplest search method [28], an iterative and incremental procedure estimating one parameter at a time while keeping the others fixed, until a suitable criterion is met (e.g. convergence, number of iterations, etc.). The complexity of the learning procedure thereby increases linearly with the number of tasks. In particular, for a fixed assignment of the parameters , we estimate with the value , . The learning procedure is sketched below:

1. Randomly permute the vector , and randomly initialize ;
2. Determine an estimate of with a standard line search procedure for optimizing continuous functions of one variable, and fix ;
3. Iterate Step 2 for each ;
4. Repeat Step 3 till a stopping criterion is satisfied.
As stopping criterion we used a combination of a maximum number of iterations and of the maximum norm of the difference between two subsequent estimates (falling below a given threshold). As an initial test, at Step 2 we simply adopted a grid search optimization algorithm, where a set of trial values is formed for each parameter, and all possible parameter combinations are assembled and tested.
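The coordinate-wise procedure with a grid search inner step and a max-norm stopping criterion can be sketched generically. This is a hedged illustration of the scheme described above, not the authors' code; `objective` stands for any black-box criterion such as eq. (10).

```python
import numpy as np

# Sketch of the coordinate-wise learning procedure: parameters are
# optimized one at a time over a grid while the others stay fixed,
# repeating until the max-norm change falls below a threshold.
def coordinate_grid_search(objective, init, grids, max_iter=20, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(init, dtype=float)
    for _ in range(max_iter):
        prev = theta.copy()
        for j in rng.permutation(len(theta)):   # random parameter order
            best_v, best_f = theta[j], objective(theta)
            for v in grids[j]:                  # grid search on one coordinate
                theta[j] = v
                f = objective(theta)
                if f > best_f:
                    best_v, best_f = v, f
            theta[j] = best_v
        if np.max(np.abs(theta - prev)) < tol:  # max-norm stopping criterion
            break
    return theta
```

Since every parameter is visited once per outer iteration, the cost of a sweep grows linearly with the number of parameters, and hence with the number of tasks, matching the scalability claim made earlier.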
2.3.3 Label inference.
Once the parameters have been estimated, we consider the subnetwork restricted to the unlabeled nodes , whose energy is
(11) 
with state of , , , and is the vector of activation thresholds for task , including the contribution of labeled nodes (which are clamped).
If the learned parameters make part of a global minimum of , then by determining the global minimum of we can successfully determine the global minimum of (as stated by the following theorem), and consequently the solution of the problem.
Theorem 2.3
Given a multitask Hopfield network on neurons , bipartitioned into the sets and , if is a part of a global minimum of the energy of , and is a global minimum of the energy of , then is a global minimum of the energy of .
On the other hand, computing the global minimum of the energy of would require time-intensive algorithms; accordingly, to preserve the efficiency and scalability of the model, we run the dynamics of till an equilibrium state is reached, which, in general, is a local minimum of the energy.
Given an initial state , at each time one neuron is updated, and in consecutive steps all neurons are updated asynchronously and in a randomly chosen order according to the following update rule:
(12) 
where is the state of neuron at time , and is the restriction of to . According to Theorem 2.1, the dynamics (12) converges to an equilibrium state of , and the predicted bipartition for task is: and .
2.3.4 Dynamics regularization.
As shown in [13], the network dynamics might get stuck in trivial equilibrium states when the input labelings are highly unbalanced, e.g. states made up of almost all negative neurons. To prevent this behaviour, they applied a dynamics regularization aiming to control the number of positive neurons in the current state. Extending that approach, and denoting by the proportion of positives in the training set for task , we add to the energy function the regularization term
(13) 
where , , and is a real regularization parameter. Since and are such that when , and otherwise, is the number of positive neurons in . The term (13) is thereby minimized when the number of positive neurons in is . This choice is motivated by the fact that
when and are randomly drawn from (see [13]). By simplifying eq. (13), up to a constant term, we obtain the quadratic term:
which can be thereby included into :
where and . By adding a regularization term for each task , we obtain the following overall energy:
Informally, this regularization leads to a different network topology for each task, in addition to a modification of the neuron activation thresholds. Nevertheless, since the connection weights are modified by a constant value, from an implementation standpoint this regularization only needs to store different constant values, thus not increasing the space complexity of the model. As a preliminary approach, and to allow a fair comparison, the parameters have been set as in the single-task case [13], that is , where is a non-negative real constant. Another advantage of this choice is that we have to learn just one parameter , instead of dedicated parameters.
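The regularizer can be illustrated with a short sketch. We assume, consistently with the description above and with the COSNet activation values, that the affine map (x_i + cos a)/(sin a + cos a) turns each state into a 0/1 positivity indicator, so that the penalty is the squared deviation of the positive count from its expected value; names and the exact form are our own assumptions.

```python
import numpy as np

# Hedged sketch of the dynamics regularization: map each state value to a
# 0/1 positivity indicator and penalize the squared deviation of the
# number of positive neurons from the expected one (assumed form).
def positive_count(x, alpha):
    # (x_i + cos a)/(sin a + cos a) is 1 when x_i = sin a, 0 when -cos a
    return float(np.sum((x + np.cos(alpha)) / (np.sin(alpha) + np.cos(alpha))))

def regularizer(x, alpha, p_expected, beta):
    return beta * (positive_count(x, alpha) - p_expected) ** 2
```

Adding such a term to the energy steers the dynamics away from trivial all-negative equilibria: states whose positive count departs from the training-set proportion pay a quadratic penalty.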
3 Preliminary results and discussion
In this section we evaluate our algorithm on the prediction of the biomolecular functions of proteins, a binary classification problem aiming to associate sequenced proteins with their biological functions. In the following, we describe the experimental setting, analyze the impact of parameter configurations on performance, and finally compare our algorithm against other state-of-the-art graph-based methods.
3.1 Benchmark data
In our experiments we considered the Gene Ontology (GO) [5] terms, i.e. the reference functional classes in this context, and their annotations to the Saccharomyces cerevisiae (yeast) proteins, one of the most studied model organisms. The connection matrix has been retrieved from the STRING database, version 10.5 [35], and contains yeast proteins. As common in this context, the GO terms with less than and more than yeast protein annotations (positives) have been discarded, in order to have a minimum of information and to avoid too generic terms (GO is a DAG, where the annotations for a term are transferred to all its ancestors). We considered the UniProt GOA (release , 12 March 2018) experimentally validated annotations from all GO branches, Cellular Component (CC), Molecular Function (MF) and Biological Process (BP), for a total of , , and CC, MF, BP GO terms, respectively.
3.2 Evaluation setting
To evaluate the generalization capabilities of our algorithm, we used a -fold cross-validation (CV), and measured the performance in terms of the Area Under the ROC curve (AUC) and the Area Under the Precision-Recall curve (AUPR). The AUPR has been adopted in the recent CAFA2 international challenge for the critical assessment of protein functions [25], since in this imbalanced setting the AUPR is more informative than the AUC [33].
3.3 Model configuration
HoMTask has three hyperparameters, , and , and two functions to be chosen: in eq. (9), and in eq. (10). , and were learned through internal -fold CV, also considering the cases in which and were, in turn or together, clamped to , to evaluate their individual impact on the performance. A different discussion applies to the parameter , since in our experiments the best performance corresponds to large values of (e.g. ), thus making the model less sensitive to this choice (the function becomes a Heaviside function). This behaviour apparently conflicts with the results reported in [16], where typically performed best. However, that work focused on a substantially different learning task, i.e. a single-task Hopfield model, where nodes were divided into categories, and the model parameters were related not to different tasks but to different node categories. We still include in the formalization proposed in Section 2.3 because it also permits future analytic studies of the derivatives of , to determine closed-form expressions for the optimal parameters. Further, we set , since this choice led to excellent results in a multi-category context [16], even if different choices are possible (Section 2.3).
On the other hand, we tested two choices for : the harmonic mean () and the mean function (). Another central factor of our model is the computation of the task similarity matrix , which can be computed using several metrics (see for instance [15]), together with the choice of how to group the tasks that should be learned together. In this work we employed the Jaccard similarity measure, since it performed well in hierarchical contexts [15, 37, 14], defined as follows:
Thus, is the ratio between the number of instances that are positive for both tasks and the number of instances that are positive for at least one task. The higher the number of shared instances, the higher the similarity (up to ); conversely, if two tasks do not share any items, their similarity is zero.
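The Jaccard similarity between two tasks, as described above, amounts to a few lines of code (a minimal sketch with our own names, operating on the positive sets of the two tasks):

```python
# Sketch of the Jaccard task similarity used to fill S: the ratio between
# the instances positive for both tasks and those positive for at least one.
def jaccard(pos_k, pos_l):
    pos_k, pos_l = set(pos_k), set(pos_l)
    union = pos_k | pos_l
    return len(pos_k & pos_l) / len(union) if union else 0.0
```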
Finally, we grouped tasks by GO branch, and by GO branch and number of positives: in the first case (Branch), all tasks within a GO branch are learned simultaneously; in the latter (Card), tasks in the same branch having , , or positives have been grouped together. Both approaches are quite common when predicting GO terms [31, 14].
Table 1 reports the results obtained on the terms. First, the two strategies for grouping tasks led to similar results in this setting, with the Branch grouping being experimentally slower, because the learning procedure needs more iterations to converge when the number of parameters increases (due to the max norm adopted here as stopping criterion). Nevertheless, we remark that in both cases no thresholding was applied to the matrix ; thus, even tasks with small similarities can be included in the same model, which in principle might introduce noise in the learning and inference processes. Consequently, the advantage of jointly learning a larger number of similar tasks may be offset by this potential noise; investigating other task grouping and similarity thresholding strategies could thereby give rise to further insights about the model, which for lack of room we leave to future study.
Configuration  AUC  AUPR 

Branch,  0.961  0.439 
Card,  0.959  0.439 
Card, ,  0.959  0.431 
Card, ,  0.810  0.204 
Card, ,  0.811  0.204 
Card,  0.937  0.312 
Regarding the impact of the parameter , regulating the effect of the dynamics regularization, a strong decay in performance is obtained when no regularization is applied (): this confirms the tendency of the network trajectory to be attracted in some limit cases by trivial fixed points, already observed in the single-task Hopfield model [13]. In this experiment, the contribution of the regularization is even more prominent, since it doubles the AUPR performance.
The parameter , regulating the multitask energy term (Eq. (3)), apparently has less impact on the performance. Indeed, the performance decreases just by when ; however, this behaviour should be studied further, because it may be strictly related to the noise introduced by grouping tasks without filtering out connections between less similar tasks. Thus, further experiments with different organisms would help this analysis and potentially reveal novel and clearer trends. It is also important to note that setting does not cancel the overall multitask contribution: the learning procedure, by maximizing criterion (10), still learns the tasks jointly, even when the multitask contribution in formula (9) is removed. For instance, choosing equal to the minimum function would mean learning the individual task parameters so as to maximize the minimum performance () across tasks, even when .
Finally, the function itself seems to have a marked impact on the model. When using the mean function (), the AUPR decreases by around with respect to the AUPR obtained using the harmonic mean (). To some extent this result is expected, since the harmonic mean penalizes more heavily the outliers towards , thus fostering the learning procedure to estimate the parameters so as not to penalize some tasks in favor of the remaining ones, which instead can happen when using the mean function.
This preliminary analysis suggested adopting the configuration “Card, ” in the comparison with the state-of-the-art methodologies, described in the next section.
3.4 Model performance
We compared our method with several state-of-the-art graph-based algorithms, ranging from single-task Hopfield networks and other multitask methodologies to methods specifically designed to predict protein functions: RW, random walk [30], the classical -step random walk algorithm, predicting a score corresponding to the probability that a -step random walk in , starting from positive nodes, ends in the node to be predicted; RWR, random walk with restart: since in RW after many steps the walker may forget the prior information coded in the initial probability vector ( for negative nodes, for positive nodes), RWR allows the walker to take another random walk step with probability , or to restart from its initial condition with probability ; GBA, guilt-by-association [34], a method based on the assumption that interacting proteins are more likely to share similar functions; LP, label propagation [43], a popular semi-supervised learning algorithm which propagates labels to unlabeled nodes through an iterative process based on Gaussian random fields over a continuous state space; MTLP and MTLPinv [14], two recent multitask extensions of LP, exploiting task dissimilarities (MTLP) and similarities (MTLPinv); MSkNN, Multi-Source k-Nearest Neighbors [29], a method based on the k-Nearest Neighbours (kNN) algorithm [1], among the top-ranked methods in the recent CAFA2 international challenge for AFP [25]; RANKS [36], a recent graph-based method proposed to rank proteins, adopting a suitable kernel matrix to extend the notion of node similarity also to non-neighboring nodes; and COSNet. We used the same approach as in [18] to compute a node ranking, necessary for calculating both AUC and AUPR.
In Table 2 we show the obtained results. Our method achieves the highest AUPR in all the experiments, with a statistically significant difference over the second best method (RWR) in out of experiments (Wilcoxon signed rank test, ). The performance improvement compared with COSNet is noticeable, showing the remarkable contribution supplied by our multitask extension. Interestingly, MTLP and MTLPinv do not increase the AUPR results of LP as remarkably as HoMTask does: this means that the additional information regarding task similarities must be appropriately exploited in order to achieve relevant gains. RANKS is the third method in all experiments, followed by MTLP(inv), while MSkNN is surprisingly the last method. Our method also achieves good results in terms of AUC (which, however, is less informative in this context), being close to the top performing methods (RWR on CC and MF terms, and MTLPinv on BP terms).
RW  RWR  GBA  LP  MTLP  MTLPinv  MSkNN  RANKS  COSNet  HoMTask  

AUC  
CC  0.954  0.966  0.944  0.964  0.957  0.964  0.790  0.958  0.904  0.959 
MF  0.934  0.955  0.931  0.951  0.939  0.953  0.742  0.945  0.859  0.945 
BP  0.943  0.959  0.935  0.955  0.947  0.961  0.764  0.949  0.855  0.954 
AUPR  
CC  0.367  0.437  0.207  0.308  0.343  0.342  0.218  0.398  0.361  0.439 
MF  0.199  0.272  0.125  0.201  0.229  0.234  0.090  0.236  0.214  0.291 
BP  0.244  0.313  0.145  0.224  0.246  0.250  0.116  0.271  0.241  0.330 
Conclusions
We have proposed HoMTask, the first multitask Hopfield network for classification purposes, capable of simultaneously learning multiple tasks and of coping with the label imbalance. In our validation experiments, it significantly outperformed single-task HNs, and compared favorably with state-of-the-art single-task and multitask graph-based methodologies. Future investigations might reveal novel insights about the model, in particular regarding the choice of the task relatedness matrix, the task grouping strategy, the multitask criterion to be optimized during the learning phase, and the optimization procedure itself.
References
 [1] Altman, N.S.: An Introduction to Kernel and NearestNeighbor Nonparametric Regression. The American Statistician 46(3), 175–185 (1992)
 [2] Ando, R.K., Zhang, T.: A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res. 6, 1817–1853 (2005)

 [3] Argyriou, A., Evgeniou, T., Pontil, M.: Convex multitask feature learning. Machine Learning 73(3), 243–272 (2008)
 [4] Argyriou, A., et al.: A spectral regularization framework for multitask structure learning. In: Advances in Neural Inf. Proc. Sys. pp. 25–32 (2007)
 [5] Ashburner, M., et al.: Gene ontology: tool for the unification of biology. the gene ontology consortium. Nature genetics 25(1), 25–29 (2000)

 [6] Bertoni, A., Frasca, M., Valentini, G.: COSNet: a cost sensitive neural network for semi-supervised learning in graphs. In: ECML PKDD 2011. Lecture Notes in Artificial Intelligence, vol. 6911, pp. 219–234. Springer (2011). https://doi.org/10.1007/978-3-642-23780-5_24
 [7] Caruana, R.: Multitask learning. Mach. Learn. 28(1), 41–75 (1997)
 [8] Chen, J., Zhou, J., Ye, J.: Integrating lowrank and groupsparse structures for robust multitask learning. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 42–50. ACM (2011)
 [9] Daumé III, H.: Bayesian multitask learning with latent hierarchies. In: Proceedings of the TwentyFifth Conference on Uncertainty in Artificial Intelligence. pp. 135–142. AUAI Press (2009)
 [10] Evgeniou, A., Pontil, M.: Multitask feature learning. Advances in Neural Inf. Proc. Sys 19, 41 (2007)
 [11] Evgeniou, T., Micchelli, C.A., Pontil, M.: Learning multiple tasks with kernel methods. J. Mach. Learn. Res. 6, 615–637 (2005)
 [12] Evgeniou, T., Pontil, M.: Regularized multi–task learning. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 109–117. KDD ’04, ACM (2004)
 [13] Frasca, M., Bertoni, A., et al.: A neural network algorithm for semisupervised node label learning from unbalanced data. Neural Networks 43(0), 84 – 98 (2013)
 [14] Frasca, M., CesaBianchi, N.: Multitask protein function prediction through task dissimilarity. IEEE/ACM Trans. on Comp. Biology and Bioinf. pp. 1–1 (2018). https://doi.org/10.1109/TCBB.2017.2684127
 [15] Frasca, M.: Gene2disco: Gene to disease using disease commonalities. Artificial Intelligence in Medicine 82, 34 – 46 (2017). https://doi.org/10.1016/j.artmed.2017.08.001
 [16] Frasca, M., Bassis, S., Valentini, G.: Learning node labels with multicategory hopfield networks. Neural Computing and Applications 27(6), 1677–1692 (2016). https://doi.org/10.1007/s0052101519651
 [17] Frasca, M., Bertoni, A., Sion, A.: A Neural Procedure for Gene Function Prediction, pp. 179–188. Neural Nets and Surroundings,Springer Berlin Heidelberg (2013). https://doi.org/10.1007/9783642354670_19
 [18] Frasca, M., Pavesi, G.: A neural network based algorithm for gene expression prediction from chromatin structure. In: IJCNN. pp. 1–8. IEEE (2013). https://doi.org/10.1109/IJCNN.2013.6706954
 [19] Greene, W.H.: Econometric Analysis. Prentice Hall, 5. edn. (2003)
 [20] Guo, S., Zoeter, O., Archambeau, C.: Sparse bayesian multitask learning. In: Advances in Neural Inf. Proc. Sys. pp. 1755–1763 (2011)
 [21] Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982)
 [22] Hu, X., Wang, T.: Training the hopfield neural network for classification using a stdplike rule. In: Neural Information Processing. pp. 737–744. Springer (2017)
 [23] Jacob, L., Vert, J.p., Bach, F.R.: Clustered multitask learning: A convex formulation. In: Advances in Neural Inf. Proc. Sys. pp. 745–752 (2009)
 [24] Jacyna, G.M., Malaret, E.R.: Classification performance of a hopfield neural network based on a hebbianlike learning rule. IEEE Transactions on Information Theory 35(2), 263–280 (March 1989). https://doi.org/10.1109/18.32122
 [25] Jiang, Y., Oron, T.R., et al.: An expanded evaluation of protein function prediction methods shows an improvement in accuracy. Genome Biology 17(1), 184 (2016)
 [26] Kang, Z., Grauman, K., Sha, F.: Learning with whom to share in multitask feature learning. In: Proceedings of the 28th International Conference on Machine Learning (ICML11). pp. 521–528 (2011)
 [27] Karaoz, U., et al.: Wholegenome annotation by using evidence integration in functionallinkage networks. Proc. Natl Acad. Sci. USA 101, 2888–2893 (2004)
 [28] Kordos, M., Duch, W.: Variable step search algorithm for feedforward networks. Neurocomput. 71(1315), 2470–2480 (2008)
 [29] Lan, L., Djuric, N., Guo, Y., S., V.: MSkNN: protein function prediction by integrating multiple data sources. BMC Bioinformatics 14(Suppl 3:S8) (2013)
 [30] Lovász, L.: Random walks on graphs: A survey. In: Miklós, D., Sós, V.T., Szőnyi, T. (eds.) Combinatorics, Paul Erdős is Eighty, vol. 2, pp. 353–398. Budapest (1996)
 [31] Mostafavi, S., Morris, Q.: Fast integration of heterogeneous data sources for predicting gene function with limited annotation. Bioinfo. 26(14), 1759–1765 (2010)
 [32] Ning, X., Karypis, G.: Multitask Learning for Recommender System. In: Proc. 2nd Asian Conf. on Mac. Lear. (ACML2010). vol. 13, pp. 269–284 (2010)
 [33] Saito, T., Rehmsmeier, M.: The precisionrecall plot is more informative than the roc plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 10, e0118432 (2015)
 [34] Schwikowski, B., Uetz, P., Fields, S.: A network of proteinprotein interactions in yeast. Nature biotechnology 18(12), 1257–1261 (Dec 2000)
 [35] Szklarczyk, D., et al.: String v10: protein–protein interaction networks, integrated over the tree of life. Nucleic Acids Research 43(D1), D447–D452 (2015)
 [36] Valentini, G., et al.: RANKS: a flexible tool for node label ranking and classification in biological networks. Bioinformatics (2016)

 [37] Vascon, S., Frasca, M., Tripodi, R., Valentini, G., Pelillo, M.: Protein function prediction as a graph-transduction game. Pattern Recognition Letters (2018)
 [38] Xue, Y., Liao, X., Carin, L., Krishnapuram, B.: Multitask learning for classification with Dirichlet process priors. J. of Mach. Learn. Res. 8, 35–63 (2007)
 [39] Yu, K., Tresp, V., Schwaighofer, A.: Learning Gaussian process from multiple tasks. In: Proc. of the 22nd Int. Conf. on Mac. lear. pp. 1012–1019. ACM (2005)
 [40] Yu, S., Tresp, V., Yu, K.: Robust multitask learning with tprocesses. In: Proc. of the 24th Int. Conf. on Machine learning. pp. 1103–1110. ACM (2007)
 [41] Zhang, Y., Schneider, J.G.: Learning multiple tasks with a sparse matrixnormal penalty. In: Advances in Neural Inf. Proc. Sys. pp. 2550–2558 (2010)
 [42] Zhou, J., Chen, J., Ye, J.: Clustered multitask learning via alternating structure optimization. In: Advances in Neural Inf. Proc. Sys. pp. 702–710 (2011)
 [43] Zhu, X., et al.: Semisupervised learning with gaussian fields and harmonic functions. In: Proc. of the 20th Int. Conf. on Machine Learning (2003)