I. Introduction
In the presence of large within-class variations, pattern classification techniques typically require a sufficiently large and representative set of training data to provide reasonable generalisation performance. With the growing complexity of learning problems and the associated decision-making systems, the need for larger sets of training data is accentuated even further. While there exist applications where data is abundantly available, there are other situations where the number of training observations relative to the complexity of the learning machine may not be readily increased. Such situations arise when the classification system is quite complex while the cost of collecting training samples is relatively high, or when samples are rare by nature. In other cases where sufficient training observations are accessible, effective training may require multiple passes through the available samples, increasing the complexity of the learning stage. The problem is also manifest in settings where a large number of training observations exist but fail to capture the real distribution of the underlying phenomenon. Such degeneration of the training data, combined with the limitations of learning systems, may lead to sub-optimal performance in certain problems. Although other alternatives exist, in these situations sharing knowledge among multiple tasks via the multi-task learning (MTL) paradigm has been found to be an effective strategy to improve performance when the individual problems are in some sense related [1]. Sharing knowledge among multiple problems may enhance the generalisation performance of individual learners, and reduce the number of training samples or learning cycles needed to achieve a particular performance level, by exploiting commonalities and differences among different problems.
As such, MTL is known to be an effective mechanism for inductive transfer, which enhances generalisation by exploiting the domain information available in the training signals of individual problems as an inductive bias [2]. This objective is typically achieved by learning multiple tasks in parallel while using a shared representation.
Although other strategies exist, the MTL approach may be cast within the framework of reproducing kernel Hilbert spaces for vector-valued functions (RKHSvv) [3]. In this context, the problem may be viewed as learning vector-valued functions where each vector component is a real-valued function corresponding to a particular task. In the RKHSvv, the relationship between multiple inputs and outputs is modelled by means of a positive definite multi-task kernel [4]. A plausible and computationally attractive simplification of this methodology is offered by the separable kernel learning strategy, which assumes a multiplicative decomposition of the multi-task kernel into a kernel on the inputs and another on the task indices [4, 5, 6]. In this formalism, the inputs and outputs are decoupled in the sense that the input feature space does not vary by task and the structure of the different tasks is represented solely through the corresponding output kernel. Since a decomposition of the multi-task kernel as a product of a kernel on the inputs and another on the outputs facilitates optimising the kernel on the task indices simultaneously with learning the predictive vector-valued function, it is widely applied as a kernel-based approach to modelling learning problems with multiple outputs.
The MTL strategy has been successfully applied to a variety of different problems. A relatively challenging classification problem, among others, is that of one-class classification (OCC) [7]. OCC is defined as the problem of identifying patterns which conform to a specific behaviour (known as normal/target patterns) and distinguishing them from all other observations (referred to as novelties, anomalies, etc.). The interest in one-class learning is fuelled, in part, by the observation that very often a closed-form definition of normality does exist whereas typically no such definition of an anomalous condition is available. While one-class classification forms the backbone of a wide variety of applications [8, 9, 10, 11, 12, 13, 14], it usually suffers from a lack of representative training samples. The complexity of the problem is mainly due to the difficulty of obtaining non-target samples for training, or their propensity to appear in unpredictable novel forms during the operational phase of the system. These adversities associated with the OCC problem make it a suitable candidate to benefit from a multi-task learning strategy. While there exist some previous efforts on utilising the tasks' structures in designing one-class classification methods [15, 16, 17, 18], they typically rely on different flavours of the support vector machine paradigm. A plausible alternative to the SVM formulation is that of regularised regression
[19]. The major challenges in the context of multi-target regression are known to be due to jointly modelling inter-target correlations and nonlinear input-output relationships [20]. By exploiting the knowledge shared across relevant targets to capture the inter-target correlations in a non-OCC setting, the performance of multi-target regression has been shown to improve [21, 22, 20, 23, 24]. In practice, however, multiple outputs represent higher-level concepts which generate highly complex relationships that call for powerful nonlinear regression models, commonly formulated in a reproducing kernel Hilbert space. Yet, even in a general context beyond the OCC paradigm and despite relying on a Hilbert space formulation, the relationship among multiple tasks in existing multi-task regression methods is very often captured in a linear fashion and represented in terms of an output mixing matrix, which limits the representational capacity of such methods.

In the current study, the kernel null-space technique for one-class classification [25, 26, 27], and in particular its regression-based formulation known as one-class kernel spectral regression (OCKSR) [28, 29], is extended to a multi-task learning framework. Compared with other alternatives, the OCKSR method has been found to provide better performance and computational efficiency while being more resilient against data corruption. In the context of the OCKSR method, we illustrate that the relationship among multiple related OCC problems may be captured effectively by learning related tasks concurrently based on the notion of separability of the multi-task kernel. To this end, multiple one-class learning problems are modelled as the components of a vector-valued function while learning their structure corresponds to the choice of suitable functional spaces.
I-A. Overview of the proposed approach
As noted earlier, in this work the kernel regression-based formulation of the Fisher null-space technique for one-class classification is extended to benefit from a multi-task learning strategy. For this purpose, owing to the regression-based formulation of the OCKSR method, it is first shown that the kernel decomposition approach for learning vector-valued functions in the Hilbert space is directly applicable to the OCKSR methodology, which in turn facilitates concurrently learning a predictive one-class vector-valued function and a linear structure among multiple tasks. Next, as a second contribution of the present work and in contrast to the common methods which model inter-target relations linearly in terms of an output matrix, a new nonlinear multi-task structure learning method is proposed where the relationship among multiple OCC problems is encoded via a nonlinear kernel function. Task-specific coefficients, as well as output mixing parameters, are then learned concurrently via a new alternating-direction block minimisation method. Finally, it is shown that the proposed nonlinear approach for one-class vector-valued function learning may be naturally extended to a sparse representation framework where different tasks compete in a sparse nonlinear multi-task structure. To summarise, the main contributions of the current study may be outlined as:

A separable kernel learning approach for multi-task Fisher null-space one-class classification where the structure among multiple problems is captured linearly in terms of an output composition matrix;

A nonlinear multi-task Fisher null-space one-class learning approach where the structure among multiple problems is modelled nonlinearly through a kernel function;

An extension of the nonlinear multi-task structure learning mechanism to a sparse setting where the structure among multiple problems is encoded in a sparse fashion;

An experimental evaluation and comparison of different variants of the proposed multi-task one-class learning paradigm, as well as other existing approaches, on different datasets.
I-B. Outline of the paper
The rest of the paper is organised as follows: a summary of the existing work on the multi-task one-class learning problem, as well as a brief overview of the non-OCC multi-target regression approaches most relevant to the current study, is provided in §II. In §III, once an overview of the one-class kernel spectral regression method for one-class learning [28, 29] is presented, the vector-valued function learning methodology, with an emphasis on separable kernel learning in the RKHSvv, is briefly reviewed. The proposed multi-task one-class kernel null-space approach is introduced in §IV, where the linear and nonlinear structure learning mechanisms subject to Tikhonov as well as sparse regularisation are presented. An experimental evaluation of the proposed structure learning methods on different datasets is carried out in §V, along with a comparison against the baseline as well as other existing approaches in the literature. Finally, §VI offers brief conclusions.
II. Related Work
In this section, a brief overview of the existing multi-task one-class learning approaches is presented. A number of non-OCC multi-target regression methods related to the present work shall be briefly reviewed too. For a detailed review of the general concept of multi-task learning, the reader is referred to [1].
As an instance of the multi-task learning approaches for OCC, in [15], based on the assumption of closeness of the related tasks and proximity of their corresponding models, two multi-task learning formulations based on one-class support vector machines are presented. Both multi-task learning methods are then solved by optimising the objective function of a single one-class SVM. Other work in [16] presents a multi-task approach to include additional new features in a one-class classification task. Based on the theory of support vector machines, a new multi-task learning approach is proposed to deal with the training of the updated system. In [17], based on the one-class SVM, an MTL framework for one-class classification is presented which constrains different problems to have similar solutions. The formulation is cast as a second-order cone programme to derive a global solution. In [18], the authors propose a method for anomaly detection when collectively monitoring many complex systems. The proposed multi-task learning approach is based on a sparse mixture of Gaussian graphical models (GGMs), where each task is represented by a mixture of GGMs providing the functionality to handle multi-modalities. A new regularised formulation is then proposed with guaranteed sparsity in the mixture weights. By introducing a vector-valued function subject to regularisation in the vector-valued reproducing kernel Hilbert space, an unsupervised classifier to detect outliers and inliers simultaneously is proposed in
[19], where preserving the local similarity of the data in the input space is encouraged by manifold regularisation.

In the general context of multi-target regression, and apart from the one-class classification paradigm, there exist a variety of different methods. These methods are not directly related to the present work as they do not solve a one-class classification problem. Nevertheless, similar to the present work, in these methods the multi-task learning problem is formulated as one of kernel regression. As an instance, in [21], an output kernel learning method based on the solution of a suitable regularisation problem over a reproducing kernel Hilbert space of vector-valued functions is proposed. A block-wise coordinate descent method is then derived that efficiently exploits the structure of the objective functional. Other work in [22] addresses the MTL problem by illustrating that multiple tasks and their structure can be efficiently learned by formulating the problem as a convex optimisation problem, which is solved by means of a block coordinate method. More recently, in [20], a multi-target regression approach via robust low-rank learning is proposed which can encode inter-target correlations in a structure matrix by matrix elastic nets. The method in [23] models intrinsic inter-target correlations and complex nonlinear input-output relationships via multi-target sparse latent regression, where inter-target correlations are captured via norm-based sparse learning. Other work [30] presents a two-layer approach to jointly learn the latent shared features among tasks and a multi-task model based on Gaussian processes. In [24], in order to take into account the structure in the input data while benefiting from kernels in the input space, the reproducing kernel Hilbert space theory for vector-valued functions is applied. In [31], the objective for multi-task learning is formulated as a linear combination of two sets of eigenfunctions, such that the eigenfunctions for one task can provide additional information on another and help to improve its performance. For a detailed review of multi-target regression, the reader may consult the literature.
III. Background
III-A. One-Class Kernel Spectral Regression
The Fisher criterion is a widely applied design objective in statistical pattern classification, where a projection function from the input space into a feature space is inferred such that the between-class scatter of the data is maximised while the within-class scatter is minimised:
$$J(\mathbf{w}) = \frac{\mathbf{w}^{\top}\mathbf{S}_b\,\mathbf{w}}{\mathbf{w}^{\top}\mathbf{S}_w\,\mathbf{w}} \qquad (1)$$
where $\mathbf{S}_b$ denotes the between-class scatter matrix, $\mathbf{S}_w$ stands for the within-class scatter matrix and $\mathbf{w}$ is a basis vector defining one axis of the subspace. A theoretically optimal projection which provides the best separability with respect to the Fisher criterion is the null projection [28, 25, 26], yielding a positive between-class scatter while providing a zero within-class scatter:
$$\mathbf{w}^{\top}\mathbf{S}_b\,\mathbf{w} > 0, \qquad \mathbf{w}^{\top}\mathbf{S}_w\,\mathbf{w} = 0 \qquad (2)$$
In a one-class classification problem, the single optimiser of Eq. 1 is found as the eigenvector corresponding to the largest eigenvalue of the generalised eigenproblem
$$\mathbf{S}_b\,\mathbf{w} = \lambda\,\mathbf{S}_w\,\mathbf{w} \qquad (3)$$
Having determined the null projection direction $\mathbf{w}$, a sample $\mathbf{x}$ is projected onto the null-space as
$$y = \mathbf{w}^{\top}\mathbf{x} \qquad (4)$$
In order to handle data with an inherently nonlinear structure, kernel extensions of this methodology have been proposed [28, 25, 26]. While solving for the discriminant in a kernel space requires eigen-analysis of dense matrices, a computationally efficient method (one-class kernel spectral regression, a.k.a. OCKSR) based on spectral regression is proposed in [28] which poses the problem as one of solving a regularised regression problem in the Hilbert space:
$$\min_{\boldsymbol{\alpha}} \; \|\mathbf{y} - \mathbf{K}\boldsymbol{\alpha}\|_2^2 + \delta\,\boldsymbol{\alpha}^{\top}\mathbf{K}\boldsymbol{\alpha} \qquad (5)$$
where $\delta$ is a regularisation parameter, $\mathbf{y}$ denotes the desired responses and $\mathbf{K}$ stands for the kernel matrix. The optimal solution to the problem above is given as
$$\boldsymbol{\alpha} = (\mathbf{K} + \delta\,\mathbf{I}_n)^{-1}\,\mathbf{y} \qquad (6)$$
where $\mathbf{I}_n$ denotes an identity matrix of size $n$ ($n$ being the number of training samples). Once $\boldsymbol{\alpha}$ is determined, the projections of the samples onto the null feature space are found as $\mathbf{K}\boldsymbol{\alpha}$. For classification, the distance between the projection of a test sample and that of the mean of the target class is employed as a dissimilarity criterion.

In a single-task OCKSR approach, the procedure starts by building a separate kernel matrix for each one-class classification problem, followed by assigning optimal responses to each individual observation in each task. The optimal response vector in the OCKSR algorithm, when only positive instances are available for training, is shown to be a vector of ones (up to a scale factor). When negative training observations are also available, they are mapped onto the origin.
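As a concrete illustration of Eqs. 5 and 6, the following minimal numpy sketch trains an OCKSR-style one-class model on target data only (the response vector set to all ones) and scores test points by their distance to the projected target-class mean. The RBF kernel choice, parameter values and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Pairwise squared distances via ||x - z||^2 = ||x||^2 + ||z||^2 - 2 x.z
    d2 = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def ocksr_fit(X_train, delta=1e-2, sigma=1.0):
    # Regularised kernel regression: alpha = (K + delta*I)^{-1} y,
    # with y a vector of ones when only target samples are available.
    n = X_train.shape[0]
    K = rbf_kernel(X_train, X_train, sigma)
    y = np.ones(n)
    alpha = np.linalg.solve(K + delta * np.eye(n), y)
    mu = (K @ alpha).mean()        # projection of the target-class mean
    return alpha, mu

def ocksr_score(X_test, X_train, alpha, mu, sigma=1.0):
    # Dissimilarity: distance of a test projection from the target mean.
    proj = rbf_kernel(X_test, X_train, sigma) @ alpha
    return np.abs(proj - mu)
```

Target samples project close to the common point (score near zero) while outliers, whose kernel similarities to the training set vanish, receive larger scores.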
III-B. Vector-valued functions in the Hilbert space
Let us assume there are $T$ scalar learning problems (tasks), each associated with a training set of input-output observations with input space $\mathcal{X}$ and output space $\mathcal{Y}$, and $t \in \{1,\dots,T\}$ indexing a task. Given a loss function $\mathcal{L}$ that measures the per-task prediction errors, in the problem of learning vector-valued functions in the Hilbert space one is interested in a vector-valued function $f = (f_1,\dots,f_T)$ which jointly minimises the regularised errors corresponding to the multiple learning problems, i.e. $f^{\ast} = \arg\min_f E(f)$, where $E(f)$ is defined as
$$E(f) = \sum_{t=1}^{T}\sum_{i=1}^{n} \mathcal{L}\big(y_{it}, f_t(\mathbf{x}_i)\big) + \mathcal{R}(f) \qquad (7)$$
$\mathcal{R}(f)$ denotes a regularisation on the function with scalar components in the Hilbert space. A popular subclass of vector-valued function learning methods in the Hilbert space is that of the multi-target kernel regression problem, where the loss function encodes a least-squares loss in the Hilbert space. A commonly applied simplifying assumption in this direction is the separability of input-output relations, which leads to an expression of the function in terms of a separable kernel. Separable kernels are functions of the form $\Gamma(\mathbf{x}, \mathbf{x}') = k(\mathbf{x}, \mathbf{x}')\,\mathbf{L}$, where $k$ is a scalar reproducing kernel that captures similarities between the inputs and $\mathbf{L}$ is a symmetric positive semi-definite $T \times T$ matrix encoding dependencies among the outputs. In this case, $f$ is represented as
$$f(\mathbf{x}) = \sum_{i=1}^{n} k(\mathbf{x}, \mathbf{x}_i)\,\mathbf{L}\,\mathbf{a}_i \qquad (8)$$
where $\mathbf{a}_i \in \mathbb{R}^{T}$ stands for the task-specific coefficients. The output on the training data shall then be derived as $\mathbf{K}\mathbf{A}\mathbf{L}$ and the regularised loss given in Eq. 7 may be expressed in matrix form as
$$E = \|\mathbf{Y} - \mathbf{K}\mathbf{A}\mathbf{L}\|_F^2 + \mathcal{R}(\mathbf{A}, \mathbf{L}) \qquad (9)$$
where $\mathbf{K}$ denotes the kernel matrix for the inputs, $\mathbf{A}$ (of size $n \times T$) stands for the matrix of the coefficient vectors, $\mathbf{Y}$ denotes a matrix collection of the expected responses and $\|\cdot\|_F$ denotes the Frobenius norm. For this class of kernels, if $\mathbf{L}$ is the identity matrix, all outputs are treated as unrelated and the solution to the multi-task problem coincides with that of solving each task independently. When the output structure matrix $\mathbf{L}$ is presumed to be other than the identity matrix, the tasks are regarded as related, and finding the optimal function is posed as learning the matrices $\mathbf{A}$ and $\mathbf{L}$ concurrently, subject to suitable regularisation constraints. The generic form of the loss expressed in Eq. 9 may be considered as the common formulation of the multi-target regression problem in the Hilbert space, where the choice of the regularisation function leads to different instantiations of the problem.

With reference to the separable kernel learning formulation for multi-task learning, one may interpret the output as first finding the intermediate responses corresponding to each individual task via $\mathbf{K}\mathbf{A}$ (similar to the OCKSR approach) and then mixing them via a structure encoding mechanism to produce the final responses. From this standpoint, the final responses may be considered as the output of a composition function $g \circ h$, where $h$ produces the intermediate responses while $g$ performs a composition of the intermediate responses to derive the final output. From this compositional function perspective, the relations in Eqs. 8 and 9 correspond to a nonlinear mapping function $h$ expressed in terms of $\mathbf{K}$ and $\mathbf{A}$, while $g$ is a linear mixing function characterised by the matrix $\mathbf{L}$. The majority of the existing work on multi-task structure learning is focused on the case where $g$ is a linear function. In this work, we study the problem of jointly learning multiple one-class classification problems by modelling the individual task predictors as the components of a vector-valued function.
In doing so, the utility of the compositional function view is demonstrated for learning structures among multiple OCC problems. For this purpose, we consider two general cases: 1) when $g$ is a linear function, we refer to the structure among multiple problems as a linear structure; and 2) when $g$ is a nonlinear function, the structure is referred to as a nonlinear structure. Note that in both scenarios the function $h$ is assumed to be a nonlinear function defined in a Hilbert space. For a general representer theorem regarding compositional functions in the Hilbert space, the reader is referred to [32].
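The separable-kernel formulation above can be made concrete with a small numpy sketch in which the intermediate per-task responses (the map $h$) are computed from an input kernel and task-specific coefficients, and then mixed by an output structure matrix (the linear map $g$). All matrix sizes and values here are toy assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 6, 3                                    # toy sizes: n samples, T tasks
G = rng.normal(size=(n, n))
K = G @ G.T                                    # toy positive semi-definite input kernel
A = rng.normal(size=(n, T))                    # task-specific coefficient matrix
L = 0.5 * np.eye(T) + 0.5 * np.ones((T, T))    # toy symmetric PSD output structure

H = K @ A          # intermediate per-task responses (the map h)
Y_hat = H @ L      # final responses after linear mixing (the map g)
```

Setting the structure matrix to the identity decouples the tasks, recovering independent single-task predictions.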
IV. Multi-Task One-Class Kernel Null-Space
In this section, first, the proposed multi-task one-class learning method for linear structure learning is introduced. The discussion is then followed by presenting a nonlinear structure learning approach based on Tikhonov regularisation, which is then modified to learn sparse nonlinear multi-task structures.
IV-A. Linear Structure Learning
In the linear set-up of the proposed multi-task one-class learning method (i.e. when $g$ is a linear function), following the formulation in Eq. 9, once the intermediate responses corresponding to the different tasks are determined, they are mixed via an output matrix $\mathbf{L}$ to produce the final responses. The key to deploying the cost function of Eq. 9 in the context of one-class classification based on the OCKSR approach is that Eq. 9 is quite general, with no restriction imposed on the responses $\mathbf{Y}$. The only requirement for Eq. 9 to characterise a kernel null-space one-class classifier is a suitable choice of the responses $\mathbf{Y}$. For this purpose, and in order to be consistent with the OCKSR setting, a suitable choice for $\mathbf{Y}$ is one which forces all target observations to be mapped onto a single point distinct from the projection of any possible non-target samples. Choosing $\mathbf{Y}$ as such then leads to a zero within-class scatter while providing a positive between-class scatter, i.e. a null projection function. The learning machine induced by Eq. 9 admits a multi-layer structure where the second-layer parameter $\mathbf{L}$ encodes a linear structure among multiple tasks whereas the first-layer coefficients $\mathbf{A}$ represent a collection of task-specific parameters, Fig. 1. The goal is then to concurrently learn the coefficient matrix $\mathbf{A}$ and the structure encoding matrix $\mathbf{L}$, subject to suitable regularisations.
While there exist different methods characterised by different regularisations on the solution, an effective approach has recently been presented in [20] to control the rank and shrinkage of $\mathbf{L}$ while penalising the norm of $f$ in the Hilbert space. The advocated cost function in [20] is defined as
$$\min_{\mathbf{A},\mathbf{L}} \; \|\mathbf{Y} - \mathbf{K}\mathbf{A}\mathbf{L}\|_F^2 + \lambda_1\,\mathrm{tr}\big(\mathbf{A}^{\top}\mathbf{K}\mathbf{A}\mathbf{L}\big) + \lambda_2\big(\|\mathbf{L}\|_{\ast} + \|\mathbf{L}\|_F^2\big) \qquad (10)$$
For the optimisation of the objective function, similar to other relevant approaches, a block coordinate descent method is suggested in [20], alternating between optimisation w.r.t. the parameters of the first layer and those of the second layer.
IV-A.1 Sub-problem w.r.t. $\mathbf{A}$
The first block of variables for the minimisation of the objective function is that of $\mathbf{A}$. In order to optimise with respect to $\mathbf{A}$, the partial derivative of the objective function with respect to $\mathbf{A}$ is set to zero:
(11) 
A sufficient condition for the above equality to hold is
(12) 
The linear matrix equation above is known as the discrete-time Sylvester equation, which commonly arises in control theory [33]. The solution for $\mathbf{A}$ is given as
(13) 
where $\otimes$ stands for the Kronecker product and $\mathrm{vec}(\cdot)$ denotes the concatenation of the columns of a matrix into a vector. For large-scale problems, the solution above may be inefficient; in such cases, more efficient techniques that exploit the structure of the problem have been devised (see www.slicot.org).
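The vec/Kronecker solution referenced above can be sketched for a generic discrete-time Sylvester equation of the form M X N + X = C; the paper's exact coefficient matrices depend on Eq. 12, so this particular form is an assumption. The identity vec(M X N) = (N^T ⊗ M) vec(X), with a column-stacking vec operator, reduces the matrix equation to a standard linear system.

```python
import numpy as np

def solve_dt_sylvester(M, N, C):
    """Solve M X N + X = C via the identity vec(M X N) = (N^T kron M) vec(X),
    using column-stacking (order='F') for the vec operator."""
    p, q = C.shape
    S = np.kron(N.T, M) + np.eye(p * q)
    x = np.linalg.solve(S, C.flatten(order='F'))
    return x.reshape(p, q, order='F')
```

As noted in the text, forming the Kronecker product explicitly costs O((pq)^2) memory, which is why structured solvers (e.g. those in SLICOT) are preferred at scale.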
IV-A.2 Sub-problem w.r.t. $\mathbf{L}$
For the minimisation of the error function with respect to $\mathbf{L}$, the work in [20] proposed a gradient descent approach:
$$\mathbf{L} \leftarrow \mathbf{L} - \eta\,\frac{\partial E}{\partial \mathbf{L}} \qquad (14)$$
where $\eta$ is the step-size parameter and $\partial E/\partial \mathbf{L}$ is derived as
(15) 
where $\mathbf{L} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{\top}$ is an eigen-decomposition of the structure matrix and $|\boldsymbol{\Lambda}|$ is the matrix of element-wise absolute values of $\boldsymbol{\Lambda}$. For a detailed derivation one may consult [20]. An advantage of this approach over other alternatives lies in the convexity of the objective function, which facilitates reaching the global optimum. Optimisation of the objective function with respect to the unknown parameters $\mathbf{A}$ and $\mathbf{L}$ is then realised via an alternating-direction minimisation approach, summarised in Algorithm 1, where during the initialisation step all tasks are deemed independent; that is, the structure matrix is initially set to the identity.

A number of observations regarding the proposed one-class multi-task linear structure learning approach based on the OCKSR method are in order. First, it should be noted that the structure in Fig. 1 depicts the learning stage of the proposed one-class model. In the operational (test) phase, however, the parameter sets $\mathbf{A}$ and $\mathbf{L}$ may be combined to produce a model with a single layer of discriminants in the Hilbert space, $\mathbf{W} = \mathbf{A}\mathbf{L}$. As noted earlier, the structure considered in Fig. 1 is not new and has been previously explored in the context of multi-target regression. The novelty here lies in posing the kernel null-space one-class classification approach in this context to benefit from the same learning structure, thanks to the kernel regression-based formulation of the OCKSR approach.
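The observation that the two parameter sets may be folded into a single layer of discriminants at test time amounts to the associativity of the matrix products involved, as the toy check below illustrates (all shapes and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, T = 8, 4, 3                   # n train samples, m test samples, T tasks
K_test = rng.normal(size=(m, n))    # toy test-vs-train kernel matrix
A = rng.normal(size=(n, T))         # first-layer, task-specific coefficients
L = rng.normal(size=(T, T))         # second-layer linear structure

W = A @ L                           # fold both layers into one discriminant set
two_stage = (K_test @ A) @ L        # intermediate responses, then linear mixing
one_stage = K_test @ W              # single-layer model used at test time
```

The two predictions coincide exactly, so the multi-layer structure is needed only during training, where the two blocks are regularised and optimised separately.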
IV-B. Nonlinear Structure Learning
In the proposed nonlinear structure learning scheme, and in contrast to the linear set-up, the relations among multiple tasks are modelled via a nonlinear (kernel) approach. The structure of the learning machine proposed for this purpose is illustrated in Fig. 2. In this setting, once the intermediate responses corresponding to the different tasks for a given input are produced, they collectively serve as a single input to the second layer. In the second layer, this intermediate response vector is then nonlinearly mapped into a new space induced by a kernel function (an RBF kernel) and ultimately mixed via the coefficients $\mathbf{B}$ to produce the final responses corresponding to the different tasks. The training data for the second layer thus consists of $T$-dimensional intermediate responses. In the proposed nonlinear structure learning method, the unknown matrices $\mathbf{A}$ and $\mathbf{B}$ are found by optimising an objective function defined as a regularised kernel regression based on a kernel matrix which captures the similarities between the outputs of the different tasks. The superiority of the nonlinear model compared with the conventional linear structure of Fig. 1 (as will be verified in the experimental evaluation section) may be justified from the perspective that the structure in Fig. 1 acts as a linear regression on the intermediate responses while that of Fig. 2 corresponds to a nonlinear (kernel) regression. Different regularisations in the proposed nonlinear setting, namely Tikhonov and sparsity, are examined and discussed next.
IV-B.1 Tikhonov regularisation
A Tikhonov regularisation, in general, favours models whose predictions are smooth functions of the intermediate responses, penalising parameters of large magnitude and thereby producing a more parsimonious solution. Following a Tikhonov-regularised regression formulation in the Hilbert space, the objective function for the model in Fig. 2 is defined as
$$E = \|\mathbf{Y} - \mathbf{K}_2\mathbf{B}\|_F^2 + \lambda_1\,\mathrm{tr}\big(\mathbf{A}^{\top}\mathbf{K}_1\mathbf{A}\big) + \lambda_2\,\mathrm{tr}\big(\mathbf{B}^{\top}\mathbf{K}_2\mathbf{B}\big) \qquad (16)$$
where $\mathbf{K}_1$ and $\mathbf{K}_2$ denote the kernel matrices associated with the first and the second layer, respectively. The optimisation of the objective function associated with the nonlinear model is realised via a block coordinate descent scheme, alternating between optimisation w.r.t. the parameters of the first layer and those of the second layer.
Sub-problem w.r.t. $\mathbf{A}$: The first direction of minimisation for the objective function is that of $\mathbf{A}$. The partial derivative of the regularisation term on $\mathbf{A}$ is readily obtained as
(17) 
Denoting the remaining terms of the objective function as $E_r$, we proceed with computing its partial derivative w.r.t. $\mathbf{A}$. The terms involved in $E_r$ are independent of $\mathbf{A}$ except for the kernel matrix $\mathbf{K}_2$. The dependency of this kernel matrix on $\mathbf{A}$ is due to its dependency on the intermediate responses $\mathbf{H}$, which are a function of $\mathbf{A}$ as $\mathbf{H} = \mathbf{K}_1\mathbf{A}$. In order to compute the partial derivative of $E_r$ w.r.t. $\mathbf{A}$, first, the following matrices are defined:
where $\circ$ stands for the Hadamard (component-wise) product and $\mathbf{1}$ denotes a matrix of ones. The kernel matrix associated with the second layer may then be expressed as
(19) 
where the scalar parameter $\sigma$ controls the RBF kernel width associated with the second layer.
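Assuming the second-layer kernel is a standard RBF kernel over the rows of the intermediate response matrix, it may be computed with the usual Gram-matrix distance trick sketched below; the exact width parameterisation in Eq. 19 may differ from this assumed form.

```python
import numpy as np

def second_layer_kernel(H, sigma):
    # Pairwise squared distances between rows of H via the Gram-matrix trick:
    # ||h_i - h_j||^2 = g_ii + g_jj - 2 g_ij, where G = H H^T.
    G = H @ H.T
    d2 = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G
    return np.exp(-d2 / (2 * sigma**2))
```

Because the second layer operates on the T-dimensional intermediate responses rather than the raw inputs, this kernel directly encodes similarities among the outputs of the different tasks.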
The partial derivative of $E_r$ with respect to the kernel matrix $\mathbf{K}_2$ is
(20) 
The intermediate partial derivatives required by the chain rule are derived as [34]
(21) 
For the computation of the remaining term, from the differentiation of a scalar-valued matrix function it is known that
(22) 
Since $\mathbf{H} = \mathbf{K}_1\mathbf{A}$, the dependency of $E_r$ on $\mathbf{A}$ is through $\mathbf{H}$. Replacing accordingly in Eq. 22 yields
(23) 
and hence
(24) 
In summary, in order to compute the gradient of $E_r$ w.r.t. $\mathbf{A}$, one first computes the intermediate responses $\mathbf{H} = \mathbf{K}_1\mathbf{A}$, then the matrices defined above along with the second-layer kernel matrix and its partial derivatives, respectively, followed by the derivative of $E_r$ w.r.t. $\mathbf{K}_2$. Finally, an application of the chain rule yields $\partial E_r/\partial\mathbf{A}$.
Sub-problem w.r.t. $\mathbf{B}$: Minimising the regularised error over multiple tasks w.r.t. $\mathbf{B}$ may be performed by setting the corresponding partial derivative to zero, which yields
$$\mathbf{B} = (\mathbf{K}_2 + \lambda_2\,\mathbf{I}_n)^{-1}\,\mathbf{Y} \qquad (25)$$
Finally, the partial derivative of the objective function with respect to the kernel width $\sigma$ is given as
(26) 
Initialisation: The initialisation step of the proposed nonlinear structure learning model is similar in spirit to that of the linear case. That is, during the initialisation stage, each task is presumed to be independent of all others. Based on this assumption, the kernel matrix encoding the inter-task relationships takes the form of a block-diagonal matrix whose diagonal elements are sub-matrices. Such an initialisation directly yields an initial value for $\mathbf{B}$. The parameter controlling the width of the Gaussian kernel in the second layer ($\sigma$) is initialised to the reciprocal of the average of the pairwise distances among the intermediate responses. For the initialisation of $\mathbf{A}$, the problems are solved independently. Once all the parameters are initialised, the optimisation of the objective function with respect to the parameters of the first and the second layer is performed via an alternating-direction minimisation scheme, where for the optimisation with respect to $\mathbf{A}$ and $\sigma$ a gradient descent method is applied. The algorithm for nonlinear multi-task one-class learning is summarised in Algorithm 2, where $\eta_A$ and $\eta_\sigma$ denote the gradient descent step sizes for $\mathbf{A}$ and $\sigma$, respectively. Note that in step 6 of the algorithm, the kernel matrix associated with the second layer is updated based on the most recent values of $\mathbf{A}$ and $\sigma$.
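The alternating-direction scheme of Algorithm 2 can be abstracted as the skeleton below, in which the second-layer kernel is refreshed between the two gradient blocks. The function names and the closures passed in are placeholders for the paper's actual gradient expressions, not the authors' implementation.

```python
def alternate_minimise(step_A, step_B, update_kernel, A0, B0, n_iter=100):
    """Generic block-coordinate descent skeleton: a gradient step on the
    first-layer parameters, a refresh of the second-layer kernel, then a
    gradient step on the second-layer parameters, repeated n_iter times."""
    A, B = A0, B0
    for _ in range(n_iter):
        A = A - step_A(A, B)       # gradient step w.r.t. the first layer
        K2 = update_kernel(A)      # recompute the second-layer kernel (step 6)
        B = B - step_B(A, B, K2)   # gradient step w.r.t. the second layer
    return A, B
```

Refreshing the kernel between the two blocks is what couples the parameter sets: any change to the first layer alters the similarity structure seen by the second.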
IV-B.2 Sparse regularisation
Apart from the widely used Tikhonov regularisation, other regularisation schemes encouraging sparseness of the solution are widely applied as a guideline for inference. The underlying motivation in this case is to provide the simplest possible explanation of an observation as a combination of as few atoms as possible from a given dictionary. A more compact model is expected to provide better performance than its non-sparse counterpart, especially in the presence of corruption in the data or of missing relations between some problems. Sparsity in the proposed nonlinear structure learning approach may be imposed at two different levels. The first is the task level: a task either contributes to forming the discriminant of another task (the tasks are related) or it does not. The second is within-task sparsity, where the response for a particular problem is derived as a sparse representation of the corresponding training data. The two objectives above may be achieved via a sparse group lasso formulation [35, 36] by enforcing an $\ell_1$-norm penalty on each element of $\mathbf{B}$ in addition to an $\ell_2$-norm task-wise penalty on $\mathbf{B}$. Consequently, the objective function for the sparse nonlinear setting is defined as
(27) 
where one regularisation parameter controls the within-task sparsity while the other governs the task-wise sparsity. As a result, in the proposed sparse multi-task one-class learning approach, each response may be generated using only a few tasks from among the pool of multiple problems while at the same time using a sparse set of training observations.
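A minimal sketch of the sparse group-lasso penalty described above, assuming that the columns of the mixing matrix correspond to tasks: an element-wise l1 term promotes within-task sparsity, while a column-wise l2 term zeroes out entire tasks. The grouping convention and weighting are assumptions, not the paper's exact Eq. 27.

```python
import numpy as np

def sparse_group_lasso_penalty(B, lam_within, lam_task):
    # Element-wise l1 term: promotes within-task sparsity.
    l1 = lam_within * np.abs(B).sum()
    # Column-wise l2 (group) term: promotes task-wise sparsity,
    # assuming each column of B corresponds to one task.
    l21 = lam_task * np.linalg.norm(B, axis=0).sum()
    return l1 + l21
```

Because the group term is not differentiable at zero, solvers such as SLEP handle it with proximal projections rather than plain gradient descent, which is why the paper delegates this sub-problem to SLEP.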
The algorithm for the sparse nonlinear multi-task one-class learning approach is similar to Algorithm 2 except for two differences. First, when optimising w.r.t. $\mathbf{A}$, the partial derivative of the objective function becomes
(28) 
Second, in order to optimise the sparse group-lasso problem in Eq. 27 w.r.t. $\mathbf{B}$, the Sparse Learning with Efficient Projections (SLEP) algorithm [35] is used in this work. Using the SLEP algorithm, and by varying the regularisation parameters, solutions with different within-task or task-wise cardinalities of $\mathbf{B}$ may be obtained. The proposed sparse nonlinear structure learning algorithm is summarised in Algorithm 3.
IV-C Analysis of the algorithms
A few comments regarding the dynamics of the proposed nonlinear (Tikhonov/sparse) multitask learning approaches are in order.
While in the linear structure learning method (Algorithm 1) the impact of changing one block of parameters on the other is explicit, in the nonlinear setting (Algorithms 2 and 3) the two sets of parameters interact indirectly via the kernel matrix associated with the second layer (see step 6 of Algorithms 2 and 3). Recall that this kernel matrix captures the similarities among multiple problems. In this respect, once the first-layer parameters are updated, the intermediate responses are recomputed; the new kernel matrix associated with the second layer is then computed using these updated responses, and the mixing matrix is derived based on the updated kernel matrix. Any modification to the first-layer parameters therefore affects the second-layer parameters by changing this kernel matrix (see Eqs. 20 and 28).
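The indirect coupling described above can be made concrete with a toy NumPy sketch: perturbing the first-layer coefficient block changes the intermediate responses, hence the second-layer kernel, hence the second-layer coefficients. All names, sizes, and the ridge solve are illustrative stand-ins for the paper's actual updates:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 3                              # training samples, tasks (toy sizes)
X = rng.standard_normal((N, 4))
Y = rng.standard_normal((N, T))           # toy target responses, one column per task

def rbf(P, Q, gamma=0.5):
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K1 = rbf(X, X)                            # first-layer kernel matrix

def second_layer(A, lam=1.0):
    H = K1 @ A                            # intermediate responses of the first layer
    K2 = rbf(H, H)                        # second-layer kernel over those responses
    B = np.linalg.solve(K2 + lam * np.eye(N), Y)   # ridge stand-in for the B-update
    return K2, B

A = np.linalg.solve(K1 + np.eye(N), Y)    # some first-layer coefficient block
K2_a, B_a = second_layer(A)
K2_b, B_b = second_layer(A + 0.1)         # perturb the first-layer block ...
# ... and both the second-layer kernel and its coefficients move with it
print(np.allclose(K2_a, K2_b), np.allclose(B_a, B_b))
```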
In the operation phase of the proposed nonlinear structure learning methods, upon the arrival of a new test sample, the corresponding intermediate outputs for the different problems are produced by passing the sample through the first layer. Treating these intermediate responses as the components of a single vector, its similarity to those of the training samples associated with the second layer is measured using a kernel function, and the results are subsequently combined via the corresponding mixing matrix to produce the final responses.
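The test-phase pipeline may be sketched as follows, assuming hypothetical coefficient matrices `A` (first layer) and `B` (mixing) and an RBF kernel for both layers; only the data flow, not the trained values, is meaningful here:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, d = 30, 4, 5                       # training samples, tasks, feature dim (toy)
X = rng.standard_normal((N, d))          # training samples of all tasks
A = rng.standard_normal((N, T))          # hypothetical first-layer coefficients
B = rng.standard_normal((N, T))          # hypothetical second-layer mixing coefficients

def rbf(P, Q, gamma=0.5):
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

H = rbf(X, X) @ A                        # stored second-layer "training samples"

def predict(x):
    h = rbf(x[None, :], X) @ A           # 1) intermediate outputs for the test sample
    k2 = rbf(h, H)                       # 2) similarity to stored intermediate responses
    return (k2 @ B).ravel()              # 3) combine via the mixing matrix

scores = predict(rng.standard_normal(d))
print(scores.shape)                      # one final response per task: (4,)
```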
V Experimental Evaluation
In this section, an experimental evaluation of the proposed approaches for multitask oneclass classification is carried out.
V-A Data sets
The efficacy of the proposed techniques is evaluated on three data sets, discussed next.
V-A1 Face
This data set is created to perform a toy experiment in face recognition. It contains face images of different individuals, and the task is to recognise a subject among the others. For each subject, a one-class classifier is built using the training data associated with the subject under consideration, while all other subjects are treated as outliers with respect to the model. The experiment is repeated in turn for each subject in the dataset. The features used for face image representation are obtained via the frontal-pose PAM deep CNN model [37] applied to the face bounding boxes. The data set is created from the real-access videos of the Replay-Mobile dataset [38], which is accompanied by face bounding boxes. In this work, ten subjects are used to form the data set. Each task is the recognition of a single subject. The number of positive training samples for each subject is set to 4. The numbers of positive and negative test samples for each subject are 40 and 160, respectively, where the negative test observations for each subject are selected randomly from subjects other than the one under consideration.
V-A2 MNIST
MNIST is a collection of images of handwritten digits 0–9 [39]. In our experiments, a single digit is considered as the target class while all others correspond to non-target observations. The experiment is repeated in turn for all digits. Similar to the face data set, each task is the recognition of one digit among the others. The number of positive training samples for each digit is set to 15. The numbers of positive and negative test samples for each class are set to 150 and 1350, respectively.
V-A3 COIL-100
The COIL-100 data set [40] contains 7,200 images of 100 different objects, with 72 images of each object taken at pose intervals of 5 degrees. In the experiments conducted on this data set, 50 classes are selected randomly. A one-class classifier is then trained to recognise an object of interest from the others, and each such classifier is considered a single task. Raw pixel intensities are used as feature representations for this data set. The number of positive training instances for each target class is 7; 65 positive and 585 negative test observations for each class are included in the experiments.
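All three data sets follow the same per-task one-versus-rest protocol: train on a handful of positives of the target class, test on held-out positives plus negatives drawn from the remaining classes. This recipe may be sketched as (an illustrative helper, not the authors' code):

```python
import numpy as np

def occ_split(X, y, target, n_pos_train, n_pos_test, n_neg_test, rng):
    """One-class split for a single task: positives of `target` for training,
    held-out positives plus random negatives for testing."""
    pos = rng.permutation(np.flatnonzero(y == target))
    neg = rng.permutation(np.flatnonzero(y != target))
    X_train = X[pos[:n_pos_train]]
    X_test = np.vstack([X[pos[n_pos_train:n_pos_train + n_pos_test]],
                        X[neg[:n_neg_test]]])
    y_test = np.r_[np.ones(n_pos_test), np.zeros(n_neg_test)]
    return X_train, X_test, y_test

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = np.repeat(np.arange(10), 20)          # 10 toy classes, 20 samples each
X_tr, X_te, y_te = occ_split(X, y, target=3, n_pos_train=4,
                             n_pos_test=10, n_neg_test=40, rng=rng)
print(X_tr.shape, X_te.shape, int(y_te.sum()))   # (4, 8) (50, 8) 10
```

The counts above match the face protocol (4 training positives, 40/160 test samples would simply change the arguments).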
V-B Methods
For the conversion of the OCKSR method from a single-task to a multitask setting, the first step is to combine all training observations and form a joint kernel matrix. In this case, the positive instances of one problem serve as negative observations for all the remaining tasks. The optimal response would then be a matrix in which each row is a vector of zeros except for a single element of one at the position corresponding to the true class of the observation. As previously demonstrated in [28], utilisation of any available negative training samples may boost the performance of the OCKSR approach. In order to distinguish between different variants of the OCKSR methodology, in this section OCKSR refers to the algorithm when negative instances are not used for training, while COCKSR refers to the case when both positive and negative samples are used. This distinction is necessary to accurately gauge the benefits offered by a multitask learning scheme independently of the effects of using non-target samples for training. A thorough evaluation and comparison of different one-class classification algorithms has been conducted in [28] and [29], with the OCKSR approach performing best among the competitors. The methods included in these experiments are:

OCKSR is the original single-task OCKSR method presented in [28]. It is used to learn an OCC classifier independently for each task and serves as a baseline.

COCKSR corresponds to the single-task OCKSR approach where negative observations are utilised for training.

OCKSRL is the proposed multitask OCKSR approach where a linear structure between different tasks is learned.

OCKSRN is the proposed multitask OCKSR approach where a nonlinear structure between different tasks, subject to Tikhonov regularisation, is learned.

OCKSRNS is the proposed multitask OCKSR approach where a nonlinear structure between different tasks, subject to sparse group regularisation, is learned.

SVDD is the Support Vector Data Description approach to the one-class classification problem [41]. As a widely used method, it is employed to learn an OCC classifier independently for each task and serves as a second baseline for comparison.

MORVR is the multi-output relevance vector regression method [42], which uses Bayes' theorem and the kernel trick to perform regression. The algorithm uses the matrix normal distribution to model correlated outputs.
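The joint multitask construction described at the start of this subsection — positives of one task serving as negatives for all the others, encoded in a one-hot response matrix — may be sketched as:

```python
import numpy as np

def joint_targets(task_of_sample, T):
    """Response matrix for the joint multitask kernel machine: one row per
    training sample, a single one in the column of its own task -- so each
    positive automatically acts as a negative for every other task."""
    Y = np.zeros((len(task_of_sample), T))
    Y[np.arange(len(task_of_sample)), task_of_sample] = 1.0
    return Y

task_of_sample = np.array([0, 0, 1, 2, 2, 2])   # task index of each training sample
Y = joint_targets(task_of_sample, T=3)
print(Y.sum(axis=1))   # every row has exactly one non-zero entry: [1. 1. 1. 1. 1. 1.]
```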
V-C Behaviour of the Optimisation Algorithms
In this section, the effectiveness of the proposed alternating direction minimisation scheme for the nonlinear setting (for both Tikhonov and sparse regularisation) is visualised. For an analysis of the convergence behaviour of the linear structure learning method, one may consult [20]. The optimisation curves depicting the cost function versus iterations for the Tikhonov and sparse regularisation schemes are illustrated in Figs. 3 and 4, respectively. From these figures, one may observe that the proposed alternating direction approach converges within a few hundred iterations, irrespective of the nature of the observations. Interestingly, the convergence of the sparse approach appears to be somewhat faster than that of its non-sparse counterpart.
V-D Visualisation of Structure Matrices
The structures learned using the different linear and nonlinear multitask approaches are illustrated in Figs. 5, 6 and 7 for the linear, nonlinear and sparse nonlinear settings, respectively. For the linear learning scheme, the structure matrix is visualised, while for the nonlinear setting, the kernel matrix associated with the second layer is illustrated. Note that for the COIL-100 data set, as a relatively larger number of training samples is used, the kernel matrix is bigger in dimension than those of the other two data sets; in the figure, the kernel matrix for this data set is rescaled to a similar size for visualisation purposes. As noted earlier, for initialisation, the structural matrices are set to (block-)diagonal matrices. As may be observed from the figures, for all data sets, the linear and nonlinear multitask learning approaches are successful in recovering inter-task relations. This may be verified as all structural matrices incorporate non-zero off-(block-)diagonal elements.
Method    OCKSR   COCKSR   OCKSRL   OCKSRN   OCKSRNS   SVDD    MORVR
Face      97.70   99.55    99.71    99.78    99.76     97.69   97.63
MNIST     89.55   96.91    97.23    97.74    97.39     89.51   95.43
COIL-100  92.08   97.32    97.40    98.87    97.95     93.18   78.27
V-E Performance Comparison
In order to gauge the efficacy of the proposed multitask OCC learning methods, multiple experiments are conducted on the face, MNIST and COIL-100 datasets subject to random partitions of the data into train and test sets, so as to minimise any bias associated with the partitioning. For this purpose, each experiment is repeated 10 times and the average AUC measures are reported in the table above. A number of observations are in order. First, the proposed multitask Fisher null-space approaches are effective in improving the performance compared to the single-task OCKSR and COCKSR methods. Second, among the proposed multitask learning schemes, the nonlinear learning methods perform better than their linear counterpart, which demonstrates the effectiveness of the proposed nonlinear multitask structure learning mechanism. Third, the proposed OCKSRN method performs slightly better than its sparse counterpart; nevertheless, the sparse method may provide an edge over the non-sparse variant when the data is corrupted.
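The evaluation protocol — AUC averaged over 10 random partitions — can be sketched with a rank-based AUC; the toy Gaussian scores below merely stand in for classifier outputs:

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Rank-based AUC (Wilcoxon-Mann-Whitney): the probability that a
    target sample outscores a non-target one; ties are ignored here."""
    s = np.concatenate([pos_scores, neg_scores])
    ranks = s.argsort().argsort() + 1
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
aucs = []
for rep in range(10):                     # 10 random train/test partitions
    pos = rng.normal(1.0, 1.0, 150)       # toy scores of target test samples
    neg = rng.normal(0.0, 1.0, 1350)      # toy scores of non-target test samples
    aucs.append(auc(pos, neg))
print(float(np.mean(aucs)))               # average AUC over the repetitions
```

The 150/1350 test counts mirror the MNIST protocol described earlier.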
V-F The Effect of Regularisation
In the experiments conducted thus far, the regularisation parameter corresponding to the first layer was set to 1 for all methods, while the other parameters were optimised on the training set via cross-validation. Typically, a stronger regularisation reduces the flexibility of the model but may provide relatively more robustness against data corruption. In a final set of experiments, the effect of changing the regularisation parameter of the first layer is analysed. For this purpose, the first-layer regularisation parameter is varied over a range of values. The performances of different methods in terms of AUC are reported in Fig. 8. Plots of the sum of squared errors for the different variants of the OCKSR method are provided in Fig. 9. Note that since the error measures of the remaining methods were higher than those of the OCKSR variants, they are excluded from the figure in order to better analyse the effects of the multi-target learning schemes. A number of observations from the figures are in order. First, the proposed nonlinear structure learning methods perform better than the other alternatives irrespective of the degree of regularisation, justifying the efficacy of a nonlinear structure learning mechanism. Second, while the linear structure learning approach OCKSRL provides an edge over the single-task COCKSR method for stronger regularisation, the advantage of learning a linear structure among multiple one-class problems vanishes towards lower regularisation levels, where the performances of the two methods nearly match. This may be observed from both the AUC and the error plots. Third, although the proposed sparse nonlinear structure learning approach performs slightly worse than the non-sparse alternative for stronger levels of regularisation, towards lower regularisation levels it performs better than its non-sparse counterpart.
Similar behaviour is observed both in terms of the AUC as well as the sum of squared error measure.
VI Conclusion
We have studied one-class classification based on the kernel Fisher null-space technique (OCKSR) in a multitask learning framework. For this purpose, it was first shown that the OCKSR approach may be readily cast within a multi-target learning framework where the dependencies among multiple tasks are modelled linearly. Next, a nonlinear structure learning mechanism was proposed in which the correlations among different problems were encoded more effectively. The nonlinear multitask learning approach was then extended to a sparse setting to account for any missing relationships among different problems. The experiments verified the merits of multitask learning for the OCC problem based on OCKSR. Moreover, while in certain cases the common linear structure learning approach failed to provide advantages, the proposed nonlinear multitask learning methods maintained their edge over the alternatives.
Acknowledgment
The authors would like to thank…
References
 [1] Y. Zhang and Q. Yang, “A survey on multitask learning,” CoRR, vol. abs/1707.08114, 2017. [Online]. Available: http://arxiv.org/abs/1707.08114
 [2] R. Caruana, “Multitask learning,” Machine Learning, vol. 28, no. 1, pp. 41–75, Jul 1997. [Online]. Available: https://doi.org/10.1023/A:1007379606734
 [3] C. A. Micchelli and M. Pontil, “Kernels for multitask learning,” in Adv. Neur. Inf. Proc. Sys. (NIPS), 2004.
 [4] T. Evgeniou, C. A. Micchelli, and M. Pontil, “Learning multiple tasks with kernel methods,” Journal of Machine Learning Research, vol. 6, pp. 615–637, 2005.
 [5] A. Caponnetto, C. A. Micchelli, M. Pontil, and Y. Ying, “Universal multitask kernels,” J. Mach. Learn. Res., vol. 9, pp. 1615–1646, Jun. 2008. [Online]. Available: http://dl.acm.org/citation.cfm?id=1390681.1442785
 [6] L. Baldassarre, L. Rosasco, A. Barla, and A. Verri, “Vector field learning via spectral filtering,” in Machine Learning and Knowledge Discovery in Databases, J. L. Balcázar, F. Bonchi, A. Gionis, and M. Sebag, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 56–71.
 [7] V. Chandola, A. Banerjee, and V. Kumar, “Anomaly detection: A survey,” ACM Comput. Surv., vol. 41, no. 3, pp. 15:1–15:58, Jul. 2009.
 [8] P. Nader, P. Honeine, and P. Beauseroy, "lp-norms in one-class classification for intrusion detection in SCADA systems," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2308–2317, Nov 2014.
 [9] A. Beghi, L. Cecchinato, C. Corazzol, M. Rampazzo, F. Simmini, and G. Susto, "A one-class SVM based tool for machine learning novelty detection in HVAC chiller systems," IFAC Proceedings Volumes, vol. 47, no. 3, pp. 1953–1958, 2014, 19th IFAC World Congress. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1474667016418999
 [10] S. Budalakoti, A. N. Srivastava, and M. E. Otey, "Anomaly detection and diagnosis algorithms for discrete symbol sequences with applications to airline safety," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 39, no. 1, pp. 101–113, Jan 2009.
 [11] S. Kamaruddin and V. Ravi, "Credit card fraud detection using big data analytics: Use of PSO-AANN based one-class classification," in Proceedings of the International Conference on Informatics and Analytics, ser. ICIA '16. New York, NY, USA: ACM, 2016, pp. 33:1–33:8. [Online]. Available: http://doi.acm.org/10.1145/2980258.2980319
 [12] G. G. Sundarkumar, V. Ravi, and V. Siddeshwar, "One-class support vector machine based undersampling: Application to churn prediction and insurance fraud detection," in 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Dec 2015, pp. 1–7.
 [13] M. Yu, Y. Yu, A. Rhuma, S. M. R. Naqvi, L. Wang, and J. A. Chambers, "An online one-class support vector machine-based person-specific fall detection system for monitoring an elderly individual in a room environment," IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 6, pp. 1002–1014, Nov 2013.
 [14] A. Rabaoui, M. Davy, S. Rossignol, and N. Ellouze, "Using one-class SVMs and wavelets for audio surveillance," IEEE Transactions on Information Forensics and Security, vol. 3, no. 4, pp. 763–775, Dec 2008.
 [15] X. He, G. Mourot, D. Maquin, J. Ragot, P. Beauseroy, A. Smolarz, and E. Grall-Maës, "Multitask learning with one-class SVM," Neurocomputing, vol. 133, pp. 416–426, 2014. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0925231214000356

 [16] Y. Xue and P. Beauseroy, "Multitask learning for one-class SVM with additional new features," in 2016 23rd International Conference on Pattern Recognition (ICPR), Dec 2016, pp. 1571–1576.
 [17] H. Yang, I. King, and M. R. Lyu, "Multitask learning for one-class classification," in The 2010 International Joint Conference on Neural Networks (IJCNN), July 2010, pp. 1–8.
 [18] T. Idé, D. T. Phan, and J. Kalagnanam, "Multitask multimodal models for collective anomaly detection," in 2017 IEEE International Conference on Data Mining (ICDM), Nov 2017, pp. 177–186.
 [19] S. Dang, X. Cai, Y. Wang, J. Zhang, and F. Chen, "Unsupervised matrix-valued kernel learning for one-class classification," in Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 6–10, 2017, pp. 2031–2034. [Online]. Available: https://doi.org/10.1145/3132847.3133114
 [20] X. Zhen, M. Yu, X. He, and S. Li, “Multitarget regression via robust lowrank learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 497–504, Feb 2018.
 [21] F. Dinuzzo, C. S. Ong, P. Gehler, and G. Pillonetto, “Learning output kernels with block coordinate descent,” in Proceedings of the 28th International Conference on Machine Learning (ICML11), ser. ICML ’11. New York, NY, USA: ACM, Jun. 2011, pp. 49–56.
 [22] C. Ciliberto, Y. Mroueh, T. Poggio, and L. Rosasco, "Convex learning of multiple tasks and their structure," in Proceedings of the 32nd International Conference on Machine Learning, ser. ICML '15. JMLR.org, 2015, pp. 1548–1557. [Online]. Available: http://dl.acm.org/citation.cfm?id=3045118.3045283
 [23] X. Zhen, M. Yu, F. Zheng, I. B. Nachum, M. Bhaduri, D. Laidley, and S. Li, “Multitarget sparse latent regression,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1575–1586, May 2018.
 [24] C. Brouard, M. Szafranski, and F. d’Alché Buc, “Input output kernel regression: Supervised and semisupervised structured output prediction with operatorvalued kernels,” Journal of Machine Learning Research, vol. 17, no. 176, pp. 1–48, 2016. [Online]. Available: http://jmlr.org/papers/v17/15602.html

 [25] P. Bodesheim, A. Freytag, E. Rodner, M. Kemmler, and J. Denzler, "Kernel null space methods for novelty detection," in 2013 IEEE Conference on Computer Vision and Pattern Recognition, June 2013, pp. 3374–3381.
 [26] J. Liu, Z. Lian, Y. Wang, and J. Xiao, "Incremental kernel null space discriminant analysis for novelty detection," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 4123–4131.
 [27] P. Bodesheim, A. Freytag, E. Rodner, and J. Denzler, “Local novelty detection in multiclass recognition problems,” in 2015 IEEE Winter Conference on Applications of Computer Vision, Jan 2015, pp. 813–820.
 [28] S. R. Arashloo and J. Kittler, "One-class kernel spectral regression for outlier detection," CoRR, vol. abs/1807.01085, 2018. [Online]. Available: http://arxiv.org/abs/1807.01085
 [29] ——, “Robust oneclass kernel spectral regression,” CoRR, vol. abs/1902.02208, 2019. [Online]. Available: http://arxiv.org/abs/1902.02208
 [30] P. Li and S. Chen, “Hierarchical gaussian processes model for multitask learning,” Pattern Recognition, vol. 74, pp. 134 – 144, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0031320317303746
 [31] X. Tian, Y. Li, T. Liu, X. Wang, and D. Tao, "Eigenfunction-based multitask learning in a reproducing kernel Hilbert space," IEEE Transactions on Neural Networks and Learning Systems, pp. 1–13, 2018.
 [32] B. Bohn, M. Griebel, and C. Rieger, “A representer theorem for deep kernel learning,” 2017, accepted by Journal of Machine Learning Research. Also available as INS Preprint No. 1714.
 [33] V. Sima, Algorithms for Linear-Quadratic Optimization. New York: M. Dekker, 1996.
 [34] M. Engin, L. Wang, L. Zhou, and X. Liu, "DeepKSPD: Learning kernel-matrix-based SPD representation for fine-grained image recognition," in Computer Vision – ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Cham: Springer International Publishing, 2018, pp. 629–645.
 [35] J. Liu and J. Ye, "Moreau-Yosida regularization for grouped tree structure learning," in Advances in Neural Information Processing Systems 23, J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, Eds. Curran Associates, Inc., 2010, pp. 1459–1467. [Online]. Available: http://papers.nips.cc/paper/3931moreauyosidaregularizationforgroupedtreestructurelearning.pdf

 [36] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," Journal of the Royal Statistical Society, Series B, vol. 68, pp. 49–67, 2006.
 [37] I. Masi, F. Chang, J. Choi, S. Harel, J. Kim, K. Kim, J. Leksut, S. Rawls, Y. Wu, T. Hassner, W. Abd-Almageed, G. Medioni, L. Morency, P. Natarajan, and R. Nevatia, "Learning pose-aware models for pose-invariant face recognition in the wild," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 2, pp. 379–393, Feb 2019.
 [38] A. Costa-Pazo, S. Bhattacharjee, E. Vazquez-Fernandez, and S. Marcel, "The Replay-Mobile face presentation-attack database," in Proceedings of the International Conference on Biometrics Special Interests Group (BioSIG), Sep. 2016.
 [39] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov 1998.
 [40] S. A. Nene, S. K. Nayar, and H. Murase, "Columbia object image library (COIL-100)," 1996. [Online]. Available: http://www1.cs.columbia.edu/CAVE/software/softlib/coil100.php
 [41] D. M. Tax and R. P. Duin, “Support vector data description,” Machine Learning, vol. 54, no. 1, pp. 45–66, Jan 2004. [Online]. Available: https://doi.org/10.1023/B:MACH.0000008084.60811.49
 [42] Y. Ha, "Fast multi-output relevance vector regression," CoRR, vol. abs/1704.05041, 2017. [Online]. Available: http://arxiv.org/abs/1704.05041