# On Sparsity Inducing Regularization Methods for Machine Learning

During the past years there has been an explosion of interest in learning methods based on sparsity regularization. In this paper, we discuss a general class of such methods, in which the regularizer can be expressed as the composition of a convex function ω with a linear function. This setting includes several methods such as the Group Lasso, the Fused Lasso, multi-task learning and many more. We present a general approach for solving regularization problems of this kind, under the assumption that the proximity operator of the function ω is available. Furthermore, we comment on the application of this approach to support vector machines, a technique pioneered by the groundbreaking work of Vladimir Vapnik.


## 1 Introduction

In this paper, we address supervised learning methods which are based on the optimization problem

$$\min_{x\in\mathbb{R}^d}\big\{f(x)+g(x)\big\}, \qquad (1)$$

where the function f measures the fit of a vector x (linear predictor) to available training data and g is a penalty term or regularizer which encourages certain types of solutions. Specifically, we let f(x) = E(Ax, y), where E is an error function, y ∈ ℝ^m is a vector of measurements and A an m × d matrix, whose rows are the input vectors. This class of regularization methods arises in machine learning, signal processing and statistics and has a wide range of applications.

Different choices of the error function f and the penalty function g correspond to specific techniques. In this paper, we are interested in solving problem (1) when f is a strongly smooth convex function (such as the square error) and the penalty function g is obtained as the composition of a “simple” function ω with a linear transformation B, that is,

$$g(x)=\omega(Bx), \qquad (2)$$

where B is a prescribed m × d matrix and ω is a nondifferentiable convex function on ℝ^m. The class of regularizers (2) includes a variety of methods, depending on the choice of the function ω and of the matrix B. Our motivation for studying this class of penalty functions arises from sparsity-inducing regularization methods which consider ω to be either the ℓ1 norm or a mixed ℓ1-ℓ2 norm. When B is the identity matrix and ω is a mixed ℓ1-ℓ2 norm, the latter case corresponds to the well-known Group Lasso method yuan , for which well studied optimization techniques are available. Other choices of the matrix B give rise to different kinds of Group Lasso with overlapping groups Jenatton ; binyu , which have proved to be effective in modeling structured sparse regression problems. Further examples can be obtained by considering composition with the ℓ1 norm; for example, this includes the Fused Lasso penalty function tib05 and the graph prediction problem of mark09 .

A common approach to solve many optimization problems of the general form (1) is via proximal-gradient methods. These are first-order iterative methods, whose computational cost per iteration is comparable to gradient descent. In some problems in which g has a simple expression, proximal-gradient methods can be combined with acceleration techniques Nesterov83 ; Nesterov07 ; tseng10 , to yield significant gains in the number of iterations required to reach a certain approximation accuracy of the minimal value. The essential step of proximal-gradient methods requires the computation of the proximity operator of the function g, see Definition 1 below. In certain cases of practical importance, this operator admits a closed form, which makes proximal-gradient methods appealing to use. However, in the general case (2) the proximity operator may not be easily computable.

We describe a general technique to compute the proximity operator of the composite regularizer (2) from the solution of a fixed point problem, which depends on the proximity operator of the function ω and on the matrix B. This problem can be solved by a simple and efficient iterative scheme when the proximity operator of ω has a closed form or can be computed in a finite number of steps. When f is a strongly smooth function, the above result can be used together with Nesterov’s accelerated method Nesterov83 ; Nesterov07 to provide an efficient first-order method for solving the optimization problem (1).

The paper is organized as follows. In Section 2, we review the notion of proximity operator, recall useful facts from fixed point theory, present a convergent algorithm for the solution of problem (1) when f is a quadratic function, and then describe an algorithm to solve the general optimization problem (1). In Section 3, we discuss some examples of composite functions of the form (2) which are valuable in applications. In Section 4 we apply our observations to support vector machines and obtain new algorithms for the solution of this problem. Finally, Section 5 contains concluding remarks.

## 2 Fixed Point Algorithms Based on Proximity Operators

In this section, we present an optimization approach which uses fixed point algorithms for nonsmooth problems of the form (1) under the assumption (2). We first recall some notation and then move on to present an approach to compute the proximity operator of composite regularizers.

### 2.1 Notation and Problem Formulation

We denote by ⟨·,·⟩ the Euclidean inner product on ℝ^d and let ‖·‖₂ be the induced norm. If J ⊆ {1,…,d}, for every x ∈ ℝ^d we denote by x|J the vector (x_j : j ∈ J). For every p ≥ 1, we define the ℓp norm of x as ‖x‖_p = (∑_{i=1}^d |x_i|^p)^{1/p}.

As the basic building block of our method, we consider the optimization problem (1) in the special case when f is a quadratic function and the regularization term is obtained by the composition of a convex function ω with a linear function. That is, we consider the problem

$$\min\Big\{\tfrac12\, y^\top Q y - x^\top y + \omega(By) : y\in\mathbb{R}^d\Big\}, \qquad (3)$$

where x is a given vector in ℝ^d and Q a d × d positive definite matrix. The development of a convergent method for the solution of this problem requires the well-known concepts of proximity operator and subdifferential of a convex function. Let us now review some of the salient features of these important notions which are needed for the analysis of problem (3).

The proximity operator on a Hilbert space was introduced by Moreau in moreau62 .

###### Definition 1

Let ω be a real valued convex function on ℝ^d. The proximity operator of ω is defined, for every x ∈ ℝ^d, by

$$\mathrm{prox}_{\omega}(x) := \operatorname{argmin}\Big\{\tfrac12\|y-x\|_2^2 + \omega(y) : y\in\mathbb{R}^d\Big\}. \qquad (4)$$

The proximity operator is well defined, because the above minimum exists and is unique.

Recall that the subdifferential of ω at x is defined as ∂ω(x) := {u ∈ ℝ^d : ω(y) ≥ ω(x) + ⟨u, y − x⟩, for all y ∈ ℝ^d}. The subdifferential is a nonempty compact and convex set. Moreover, if ω is differentiable at x then its subdifferential at x consists only of the gradient of ω at x.

The relationship between the proximity operator and the subdifferential of ω is essential for algorithmic developments for the solution of (3), andy-tech ; combettes ; MSX ; mosci10 . Generally the proximity operator is difficult to compute, since it is expressed as the minimum of a convex optimisation problem. However, there are some circumstances in which it can be obtained explicitly: for example, when ω is a multiple of the ℓ1 norm the proximity operator reduces to soft thresholding, and related formulas are available for other norms, see, for example, andy-tech ; combettes ; MSX . Our optimisation problem (3) can be reduced to the identification of the proximity operator for the composite function ω∘B. Although the prox of ω may be readily available, it may still be a computational challenge to obtain the prox of ω∘B. We consider this essential issue in the next section.
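For instance, when ω = λ‖·‖₁ the proximity operator is the componentwise soft-thresholding map. A minimal NumPy sketch (the function name `prox_l1` is ours, not from the paper):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam * ||.||_1 at x: componentwise soft thresholding.

    Solves argmin_y 0.5 * ||y - x||_2^2 + lam * ||y||_1.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Each component is shrunk towards zero by lam, and components smaller than lam in magnitude are set exactly to zero, which is the source of sparsity in these methods.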

### 2.2 Computation of a Generalized Proximity Operator with a Fixed Point Method

In this section we consider circumstances in which the proximity operator of ω can be computed explicitly in a finite number of steps, and we seek an algorithm for the solution of the optimisation problem (3).

As we shall see, the method proposed here applies for any positive definite matrix Q. This will allow us in a future publication to provide a second order method for solving (1). For the moment, we are content with focusing on (3) by providing a technique for the evaluation of its minimizer.

First, we observe that the minimizer ŷ of (3) exists and is unique. Indeed, this vector is characterised by the set inclusion

$$Q\hat{y} \in x - B^\top \partial\omega(B\hat{y}). \qquad (5)$$

To make use of this observation, we introduce the affine transformation A : ℝ^m → ℝ^m defined, for fixed x ∈ ℝ^d and λ > 0, at z ∈ ℝ^m by

$$Az := (I-\lambda B Q^{-1} B^\top)z + BQ^{-1}x \qquad (6)$$

and the nonlinear operator

$$H := (I - \mathrm{prox}_{\omega/\lambda}) \circ A. \qquad (7)$$

The next theorem from andy-tech is a natural extension of an observation in MSX , which only applies to the case Q = I.

###### Theorem 2.1

If ω is a convex function on ℝ^m, x ∈ ℝ^d, λ is a positive number, the operator H is defined as in (7), and ŷ is the minimizer of (3), then

$$\hat{y} = Q^{-1}(x - \lambda B^\top v) \qquad (8)$$

if and only if v is a fixed point of H.

This theorem provides us with a practical tool to solve problem (3) numerically by using Picard iteration relative to the nonlinear mapping H. Under an additional hypothesis on the parameter λ, the mapping H is nonexpansive, see andy-tech . Therefore, Opial’s Theorem zalinescu allows us to conclude that the Picard iterates converge to the solution of (3), see andy-tech ; MSX for a discussion of this issue. Furthermore, under additional hypotheses the mapping H is a contraction. In that case, the Picard iterates converge linearly.
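To illustrate, here is a minimal sketch of the Picard iteration of Theorem 2.1 in the case Q = I. The names `prox_composite` and `prox_w_over_lam` are ours; as a sanity check one can take ω = ‖·‖₁ with B = I, for which the fixed point must reproduce soft thresholding:

```python
import numpy as np

def prox_composite(x, B, prox_w_over_lam, lam, n_iter=500):
    """Approximate prox of y -> omega(B y) at x (case Q = I) by Picard iteration:
        v <- (I - prox_{omega/lam})((I - lam * B B^T) v + B x),
    then recover y = x - lam * B^T v (Theorem 2.1 with Q = I).
    Nonexpansiveness requires 0 < lam < 2 / ||B B^T||_2."""
    M = np.eye(B.shape[0]) - lam * (B @ B.T)
    v = np.zeros(B.shape[0])
    for _ in range(n_iter):
        u = M @ v + B @ x                   # u = A v
        v = u - prox_w_over_lam(u)          # v = (I - prox_{omega/lam})(A v)
    return x - lam * (B.T @ v)
```

With B = I and ω = ‖·‖₁, prox_{ω/λ} is soft thresholding at level 1/λ, and the returned vector agrees with the closed-form prox of the ℓ1 norm.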

We may extend the range of applicability of our observations and provide a fixed point proximal-gradient method for solving problem (1) when the regularizer has the form (2) and the error f is a strongly smooth convex function, that is, the gradient of f, denoted by ∇f, is Lipschitz continuous with constant L. So far, the convergence of this extension has yet to be analyzed. The idea behind proximal-gradient methods, see combettes ; Nesterov07 ; tseng10 and references therein, is to update the current estimate of the solution using the proximity operator of g and the gradient of f. This is equivalent to replacing f with its linear approximation around a point which is a function of the previous iterates of the algorithm. The simplest instance of this iterative algorithm is given in Algorithm 1. Extensions to acceleration schemes are described in andy-tech .

### 2.3 Connection to the forward-backward algorithm

In this section, we consider the special case Q = I and interpret the Picard iteration of H in terms of a forward-backward algorithm in the dual; for a discussion of the forward-backward algorithm see, for example, combettes .

The Picard iteration is defined as

$$v_{t+1} \leftarrow (I - \mathrm{prox}_{\omega/\lambda})\big((I - \lambda BB^\top)v_t + Bx\big). \qquad (9)$$

We first recall the Moreau decomposition, see, for example, combettes and references therein, which relates the proximity operators of a lower semicontinuous convex function φ and its conjugate φ*,

$$I = \mathrm{prox}_{\varphi} + \mathrm{prox}_{\varphi^*}. \qquad (10)$$

Using equation (10), the iterative step (9) becomes

$$v_{t+1} \leftarrow \mathrm{prox}_{(\omega/\lambda)^*}\big(v_t - (\lambda BB^\top v_t - Bx)\big), \qquad (11)$$

which is a forward-backward method. We can further simplify this iteration by introducing the vector z_t := λv_t, obtaining the iterative algorithm

$$z_{t+1} \leftarrow \lambda\,\mathrm{prox}_{(\omega/\lambda)^*}\Big(\tfrac{1}{\lambda}z_t - (BB^\top z_t - Bx)\Big). \qquad (12)$$

Using the identities

$$\tfrac{1}{\lambda}\,\mathrm{prox}_{\lambda g}\circ\lambda I = \mathrm{prox}_{\frac{1}{\lambda} g\,\circ\,\lambda I} \qquad (13)$$

and

$$(\omega/\lambda)^* = \tfrac{1}{\lambda}\,\omega^*\circ\lambda I, \qquad (14)$$

see, for example, borwein , we obtain the equivalent forward-backward iteration

$$z_{t+1} \leftarrow \mathrm{prox}_{\lambda\omega^*}\big(z_t - (\lambda BB^\top z_t - \lambda Bx)\big). \qquad (15)$$

This method is a forward-backward method of the type considered in (pesquet, Alg. 10.3) and solves the minimization problem

$$\min\Big\{\tfrac12\|B^\top z - x\|^2 + \omega^*(z) : z\in\mathbb{R}^m\Big\}. \qquad (16)$$

This minimization problem in turn can be viewed as the dual of the primal problem

$$\min\Big\{\tfrac12\|u\|^2 - \langle x, u\rangle + \omega(Bu) : u\in\mathbb{R}^d\Big\} \qquad (17)$$

by using Fenchel’s duality theorem, see, for example, borwein . Moreover, the primal and dual solutions are related through the conditions û = x − B^⊤ẑ and ẑ ∈ ∂ω(Bû), the first of which implies that û equals the solution of the proximity problem (17), that is, û = prox_{ω∘B}(x).
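As an illustration, consider iteration (15) with ω = ‖·‖₁, whose conjugate ω* is the indicator of the ℓ∞ unit ball, so that prox_{λω*} is simply the clipping z ↦ clip(z, −1, 1). The sketch below (our naming) computes prox of ω∘B at x via the dual iteration and recovers the primal solution as û = x − B^⊤ẑ; with B = I it must reduce to soft thresholding:

```python
import numpy as np

def prox_composite_dual(x, B, lam, n_iter=2000):
    """Compute prox of y -> ||B y||_1 at x via the dual forward-backward
    iteration (15):
        z <- clip(z - (lam * B B^T z - lam * B x), -1, 1),
    where clip is prox of lam * omega* (projection onto [-1, 1]^m),
    then recover the primal solution u = x - B^T z."""
    z = np.zeros(B.shape[0])
    for _ in range(n_iter):
        z = np.clip(z - (lam * (B @ (B.T @ z)) - lam * (B @ x)), -1.0, 1.0)
    return x - B.T @ z
```

The step size must satisfy λ < 2/‖BB^⊤‖₂ for the forward-backward iteration to converge.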

## 3 Examples of Composite Functions

In this section, we provide some examples of penalty functions which have appeared in the literature that fall within the class of linear composite functions (2).

We define, for every J ⊆ {1,…,d} and x ∈ ℝ^d, the restriction of the vector x to the index set J as x|J := (x_j : j ∈ J). Our first example considers the Group Lasso penalty function, which is defined as

$$\omega_{\mathrm{GL}}(x) = \sum_{\ell=1}^{k} \big\| x|_{J_\ell} \big\|_2, \qquad (18)$$

where J_1,…,J_k are prescribed subsets of {1,…,d} (also called the “groups”) such that ∪_{ℓ=1}^k J_ℓ = {1,…,d}. The standard Group Lasso penalty, see, for example, yuan , corresponds to the case that the collection of groups forms a partition of the index set {1,…,d}, that is, the groups do not overlap. In this case, the optimization problem (4) for ω = ω_GL decomposes as the sum of k separate problems and the proximity operator is readily obtained by applying the proximity operator of the ℓ2-norm to each group separately. In many cases of interest, however, the groups overlap and the proximity operator cannot be easily computed.

Note that the function (18) is of the form (2). We let m = ∑_{ℓ=1}^k |J_ℓ| and define, for every z = (z_ℓ : ℓ ∈ {1,…,k}) ∈ ℝ^m with z_ℓ ∈ ℝ^{|J_ℓ|}, ω(z) = ∑_{ℓ=1}^k ‖z_ℓ‖₂. Moreover, we choose B = [B_1^⊤,…,B_k^⊤]^⊤, where B_ℓ is a |J_ℓ| × d matrix defined as

$$(B_\ell)_{ij} = \begin{cases} 1 & \text{if } j = J_\ell[i] \\ 0 & \text{otherwise,} \end{cases}$$

where for every ℓ ∈ {1,…,k} and i ∈ {1,…,|J_ℓ|}, we denote by J_ℓ[i] the i-th largest integer in J_ℓ.
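For non-overlapping groups, the proximity operator of λ·ω_GL is block-wise soft thresholding of each group; a short sketch with naming of our choosing:

```python
import numpy as np

def prox_group_lasso(x, groups, lam):
    """Prox of lam * sum_l ||x|_{J_l}||_2 for a partition `groups` of the indices:
    each block is scaled by max(0, 1 - lam / ||block||_2) (block soft thresholding)."""
    y = x.copy()
    for J in groups:
        nrm = np.linalg.norm(x[J])
        y[J] = 0.0 if nrm <= lam else (1.0 - lam / nrm) * x[J]
    return y
```

Groups whose Euclidean norm falls below lam are set to zero as a whole, which is exactly the group-level sparsity the penalty is designed to induce.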

The second example concerns the Fused Lasso tib05 , which considers the penalty function g(x) = ∑_{i=1}^{d-1} |x_i − x_{i+1}|. This function falls into the class (2). Indeed, if we choose ω to be the ℓ1 norm and B the first order divided difference matrix

$$B = \begin{bmatrix} 1 & -1 & 0 & \cdots & & 0 \\ 0 & 1 & -1 & 0 & \cdots & \\ \vdots & & \ddots & \ddots & \ddots & \end{bmatrix} \qquad (19)$$

we get back g. The intuition behind the Fused Lasso is that it favors vectors which do not vary much across contiguous components. Further extensions of this case may be obtained by choosing B to be the incidence matrix of a graph, leading to the penalty ∑_{(i,j)∈E} |x_i − x_j|, where E is the edge set of the graph. This setting is relevant, for example, in online learning over graphs mark09 ; HP07 .
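In code, the divided difference matrix (19) and the resulting penalty ‖Bx‖₁ can be formed directly (a small illustrative sketch):

```python
import numpy as np

def difference_matrix(d):
    """First order divided difference matrix of shape (d-1, d), as in (19)."""
    B = np.zeros((d - 1, d))
    for i in range(d - 1):
        B[i, i], B[i, i + 1] = 1.0, -1.0
    return B

def fused_penalty(x):
    """The Fused Lasso penalty sum_i |x_i - x_{i+1}|, written as ||B x||_1."""
    return np.abs(difference_matrix(len(x)) @ x).sum()
```

For a graph penalty one would instead build one row of B per edge (i, j), with a 1 in column i and a −1 in column j.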

The next example considers composition with orthogonally invariant (OI) norms. Specifically, we choose a symmetric gauge function h, that is, a norm which is both absolute and invariant under permutations von-neumann , and define the function ω at a matrix X by the formula ω(X) = h(σ(X)), where σ(X) is the vector formed by the singular values of the matrix X, in non-increasing order. Examples of OI-norms are the Schatten p-norms, which correspond to the case that h is the ℓp-norm. The next proposition provides a formula for the proximity operator of an OI-norm. A proof can be found in andy-tech .

###### Proposition 1

With the above notation, it holds that

$$\mathrm{prox}_{h\circ\sigma}(X) = U\,\mathrm{diag}\big(\mathrm{prox}_h(\sigma(X))\big)\,V^\top,$$

where X = U diag(σ(X)) V^⊤ and U and V are the matrices formed by the left and right singular vectors of X, respectively.
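Proposition 1 translates directly into code: apply prox_h to the singular values and re-assemble. With h = λ‖·‖₁ this gives the proximity operator of the nuclear (trace) norm; a sketch, with names of our choosing:

```python
import numpy as np

def prox_oi_norm(X, prox_h):
    """Prox of h(sigma(.)) per Proposition 1: U diag(prox_h(sigma(X))) V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_h(s)) @ Vt

def prox_nuclear(X, lam):
    """Nuclear-norm prox: soft thresholding of the singular values."""
    return prox_oi_norm(X, lambda s: np.maximum(s - lam, 0.0))
```

Small singular values are set to zero, so the result is a low-rank matrix, the matrix analogue of the sparsity induced by the ℓ1 norm.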

We can compose an OI-norm with a linear transformation, this time between two spaces of matrices, obtaining yet another subclass of penalty functions of the form (2). This setting is relevant in the context of multi-task learning. For example, in AEP the function h is chosen to be the trace or nuclear norm and a specific linear transformation which models task relatedness is considered. Specifically, the regulariser is obtained by composing the trace norm with a linear map built from the vector 1, all of whose components are equal to one.

## 4 Application to Support Vector Machines

In this section, we turn our attention to the important topic of support vector machines (SVMs), which are widely used in data analysis. SVMs were pioneered by the fundamental work of Vapnik Boser ; CV ; vapnik and inspired one of us to begin research in machine learning EPP ; PV ; PPP . For that we are all very grateful to Vladimir Vapnik for his fundamental contributions to machine learning.

First, we recall the SVM primal and dual optimization problems, vapnik . To simplify the presentation we only consider the linear version of SVMs. A similar treatment using feature map representations is straightforward and so will not be discussed here, although this is an important extension of practical value. Moreover, we only consider SVMs for classification, but our approach can be applied to SVM regression and other variants of SVMs which have appeared in the literature.

The optimisation problem of concern here is given by

$$\min\Big\{ C\sum_{i=1}^{m} V(y_i\, w^\top x_i) + \tfrac12\|w\|^2 : w\in\mathbb{R}^d \Big\}, \qquad (20)$$

where V(z) := max(0, 1 − z), z ∈ ℝ, is the hinge loss and C is a positive parameter balancing empirical error against margin maximization. We let x_i ∈ ℝ^d be the input data and y_i ∈ {−1, 1} be the class labels, i ∈ {1,…,m}.

Problem (20) can be viewed as a proximity operator computation of the form (3), with Q = I, x = 0, ω(z) = C ∑_{i=1}^m V(z_i), and B the m × d matrix whose rows are the vectors y_i x_i^⊤. The proximity operator of the hinge loss is separable across the coordinates and simple to compute. In fact, for any ζ ∈ ℝ and μ > 0 it is given by the formula

$$\mathrm{prox}_{\mu V}(\zeta) = \min\big(\zeta + \mu,\ \max(\zeta, 1)\big). \qquad (21)$$
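Formula (21) is easy to verify numerically: the sketch below implements it and it can be compared against a brute-force minimization of the prox objective on a fine grid (names are ours):

```python
import numpy as np

def prox_hinge(zeta, mu):
    """Prox of mu * V, with V(z) = max(0, 1 - z), via formula (21):
    min(zeta + mu, max(zeta, 1))."""
    return np.minimum(zeta + mu, np.maximum(zeta, 1.0))
```

The three regimes are visible in the formula: points with ζ ≥ 1 are untouched, points with ζ ≤ 1 − μ are shifted up by μ, and points in between are mapped to 1.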

Hence, we can solve problem (20) by Picard iteration, namely

$$v_{t+1} \leftarrow (I - \mathrm{prox}_{\omega/\lambda})\big((I - \lambda BB^\top)v_t\big), \qquad (22)$$

with λ chosen so that ‖I − λBB^⊤‖ < 1, which ensures that the nonlinear mapping is strictly contractive. Note that (BB^⊤)_{ij} = y_i y_j x_i^⊤ x_j and that this iterative scheme may be interpreted as acting on the SVM dual, see Section 2.3. In fact, there is a simple relation to the support vector coefficients, given by the equation α = −λv. Consequently, this algorithmic approach is well suited when the sample size m is small compared to the dimensionality d. An estimate of the primal solution, if required, can be obtained by using the formula w = B^⊤α. Also, when m > d this last equation, relating w and α, cannot be inverted. Hence, (22) is not useful in this case.
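As a sanity check, the iteration (22) can be coded in a few lines. Everything below (function names, the choice λ = 1/‖BB^⊤‖₂, the toy data in the usage note) is ours, not from the paper; the primal solution is recovered as w = −λB^⊤v:

```python
import numpy as np

def svm_picard(X, y, C=1.0, n_iter=3000):
    """Train a linear SVM (no bias term) via the Picard iteration (22).
    B has rows y_i x_i^T; prox_{omega/lam} applies formula (21) with
    mu = C / lam componentwise."""
    B = y[:, None] * X
    G = B @ B.T
    lam = 1.0 / np.linalg.norm(G, 2)     # keeps I - lam * G nonexpansive
    mu = C / lam
    prox = lambda z: np.minimum(z + mu, np.maximum(z, 1.0))   # formula (21)
    v = np.zeros(len(y))
    for _ in range(n_iter):
        u = v - lam * (G @ v)            # A v with x = 0
        v = u - prox(u)                  # (I - prox_{omega/lam})(A v)
    return -lam * (B.T @ v)              # primal weight vector w
```

On a small linearly separable toy set the resulting w classifies all training points correctly, and the dual coefficients can be read off as α = −λv ∈ [0, C]^m.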

Recall that the dual problem of (20), see vapnik , is given by

$$\min\Big\{\tfrac12\|B^\top\alpha\|^2 - \mathbf{1}^\top\alpha : \alpha\in[0,C]^m\Big\}. \qquad (23)$$

This problem can be seen as the computation of a generalized proximity operator of the type (3). To explain what we have in mind, we denote by ⊙ the elementwise product between matrices of the same size (Schur product) and introduce the kernel matrix K with entries K_{ij} = x_i^⊤ x_j.

Using this terminology, we conclude that problem (23) is of the form (3) with Q = K ⊙ yy^⊤, x = 1 (the vector of all ones), B = I, and ω the indicator function of the set [0, C]^m, that is, ω(α) = 0 if α ∈ [0, C]^m and +∞ otherwise. Furthermore, the proximity operator of ω is given by the projection on the set [0, C]^m, that is, prox_ω(α) = (min(max(α_i, 0), C) : i ∈ {1,…,m}). These observations yield the Picard iteration

$$v_{t+1} \leftarrow (I - \mathrm{prox}_{\omega/\lambda})\big((I - \lambda(K^{-1}\odot yy^\top))v_t + (K^{-1}\odot yy^\top)\mathbf{1}\big), \qquad (24)$$

with a suitable choice of λ > 0. This iterative scheme requires that the kernel matrix K is invertible, which is frequently the case, for example, for Gaussian kernels. Another requirement is that either K^{-1} has to be precomputed or a linear system involving K has to be solved at every iteration, which limits the scalability of this scheme to very large samples. In contrast, the iteration (22) can always be applied, even when K is not invertible. In fact, when K, and equivalently BB^⊤, is invertible then both iterative methods (22), (24) converge linearly at a rate which depends on the condition number of K, see andy-tech ; MSX .

Recall that algorithm (22) is equivalent to a forward-backward method in the dual, see Section 2.3. Thus, an accelerated variant akin to Nesterov’s optimal method and FISTA fista could also be used. However, in the case of an invertible kernel matrix, both versions converge linearly Nesterov07 and hence it is not clear whether there is any practical advantage from the Nesterov update. Furthermore, algorithm (24) could also be modified in a similar way.

On the other hand, if m > d, we would directly attempt to solve the primal problem. In this case, the Nesterov smoothing method can be employed, nesterov2005smooth . An advantage of such a method is that it only stores O(d) variables, even though it needs O(md) computations per iteration. The method described above, based on Picard iteration, requires O(m²) cost per iteration and stores O(m) variables.

Let us finally remark that iterative methods similar to (22) or (24) can be applied to regularization problems other than SVMs, provided that the proximity operator of the corresponding loss function is available. Common choices for the loss function, other than the hinge loss, are the logistic and square loss functions, leading to logistic regression and least squares regression, respectively. In particular, in these two cases, the primal objective (20) is both smooth and strongly convex and hence a linearly convergent gradient descent or accelerated gradient descent method can be used nesterov_book , regardless of the conditioning of the kernel matrix.

## 5 Conclusion

We presented a general approach to solve a class of nonsmooth optimization problems, whose objective function is given by the sum of a smooth term and a nonsmooth term which is obtained by linear function composition. The prototypical example covered by this setting is a linear regression regularization method, in which the smooth term is an error term and the nonsmooth term is a regularizer which favors certain desired parameter vectors. An important feature of our approach is that it can deal with a rich class of regularizers and, as shown numerically in andy-tech , is competitive with state of the art methods. Using these ideas, we also provided a fixed-point scheme to solve support vector machines. Although numerical experiments have yet to be done, we believe this method is simple enough to deserve attention by practitioners.

We believe that the method presented here should be thoroughly investigated, both in terms of convergence analysis, where ideas presented in villa may be valuable, and in terms of numerical comparison with other methods, such as the alternating direction method of multipliers, see, for example, Boyd , block coordinate descent, alternating minimization and others. Finally, there are several other machine learning problems where the ideas presented here apply. For example, in that regard we mention multiple kernel learning, see for example, MP07 ; mkl ; mkl2 ; suzuki and references therein, some structured sparsity regularizers MauPon ; MMP and multi-task learning, see, for example AEP ; CCG ; EPT . We leave these tantalizing issues for future investigation.

#### Acknowledgements

Part of this work was supported by EPSRC Grant EP/H027203/1, Royal Society International Joint Project Grant 2012/R2 and by the European Union Seventh Framework Programme (FP7 2007-2013) under grant agreement No. 246556.

## Bibliography

• (1) Argyriou, A., Evgeniou, T., and Pontil, M. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
• (2) Argyriou, A., Micchelli, C.A., Pontil, M., Shen, L., and Xu, Y. Efficient first order methods for linear composite regularizers. arXiv:1104.1436, 2011.
• (3) Beck, A. and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal of Imaging Sciences, 2(1):183–202, 2009b.
• (4) Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3(1):1–122, 2011.
• (5) Borwein, J. M. and Lewis, A. S. Convex Analysis and Nonlinear Optimization: Theory and Examples. CMS Books in Mathematics. Springer, 2005.
• (6) Boser, B.E., Guyon, I.M., and Vapnik, V.N. A training algorithm for optimal margin classifiers. Proc. 5th Annual ACM Workshop on Computational Learning Theory, pages 144–152, 1992.

• (7) Cavallanti, G., Cesa-Bianchi, N., Gentile, C. Linear algorithms for online multitask classification. J. Machine Learning Research, 11:2901–2934, 2010.
• (8) Combettes, P.L. and Pesquet, J.-C. Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, (Bauschke, H.H. et al. Editors), pp. 185–212. Springer, 2011.
• (9) Combettes, P.L. and Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168–1200, 2006.
• (10) Cortes, C. and Vapnik, V.N. Support-Vector Networks. Machine Learning, 20, 1995.
• (11) Evgeniou, T., Pontil, M., Poggio, T. Regularization networks and support vector machines. Advances in Computational Mathematics, 13(1):1–50, 2000.
• (12) Evgeniou, T., Pontil, M., Toubia, O. A convex optimization approach to modeling heterogeneity in conjoint estimation. Marketing Science, 26:805–818, 2007.
• (13) Herbster, M. and Lever, G. Predicting the labelling of a graph via minimum p-seminorm interpolation. In Proceedings of the 22nd Conference on Learning Theory (COLT), 2009.
• (14) Herbster, M. and Pontil, M. Prediction on a graph with the perceptron. Advances in Neural Information Processing Systems 19, pages 577–584, MIT Press, 2007.

• (15) Jenatton, R., Audibert, J.-Y., and Bach, F. Structured variable selection with sparsity-inducing norms. arXiv:0904.3523v2, 2009.
• (16) Maurer, A, and Pontil, M. Structured sparsity and generalization. J. Machine Learning Research, 13:671-690, 2012.
• (17) Micchelli, C.A., Morales, J.M., Pontil, M. A family of penalty functions for structured sparsity. Advances in Neural Information Processing Systems (NIPS), 2010.
• (18) Micchelli, C.A. and Pontil, M. Feature space perspectives for learning the kernel. Machine Learning, 66:297–319, 2007.
• (19) Micchelli, C.A., Shen, L., and Xu, Y. Proximity algorithms for image models: denoising. Inverse Problems, 27(4), 2011.
• (20) Moreau, J.J. Fonctions convexes duales et points proximaus dans un espace hilbertien. Acad. Sci. Paris Sér. A Math., 255:2897–2899, 1962.
• (21) Mosci, S., Rosasco, L., Santoro, M., Verri, A., and Villa, S. Solving Structured Sparsity Regularization with Proximal Methods. In Proc. European Conf. Machine Learning and Knowledge Discovery in Databases, pp. 418–433, 2010.
• (22) Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
• (23) Nesterov, Y. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
• (24) Nesterov, Y. Gradient methods for minimizing composite objective function. CORE, 2007.
• (25) Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2004.
• (26) Pontil, M., Rifkin, R.M., and Evgeniou, T. From regression to classification in support vector machines. Proc. 7th European Symposium on Artificial Neural Networks, pages 225–230, 1999.
• (27) Pontil, M. and Verri, A. Properties of support vector machines. Neural Computation, 10:955–974, 1998.
• (28) Rakotomamonjy, A. Bach, F., Canu, S, Grandvalet, Y. SimpleMKL. J. Machine Learning Research, 9:2491–2521, 2008.
• (29) Sonnenburg, S., Rätsch, G., Schäfer, C, Schölkopf, B. Large scale multiple kernel learning. J. Machine Learning Research, 7:1531–1565, 2006.
• (30) Suzuki, T. and Tomioka, R. SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels. Machine Learning, 85(1):77–108, 2011.
• (31) Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., and Knight, K. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
• (32) Tseng, P. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, 125(2):263–295, 2010.
• (33) Vapnik, V. The Nature of Statistical Learning Theory. Springer, 1999.
• (34) Villa, S., Salzo, S., Baldassarre, L., Verri, A. Accelerated and inexact forward-backward splitting. Optimization Online, August 2011.
• (35) Von Neumann, J. Some matrix-inequalities and metrization of matric-space. Mitt. Forsch.-Inst. Math. Mech. Univ. Tomsk, 1:286–299, 1937.
• (36) Yuan, M. and Lin, Y. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006.
• (37) Zhao, P., Rocha, G., and Yu, B. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A):3468–3497, 2009.
• (38) Zǎlinescu, C. Convex Analysis in General Vector Spaces. World Scientific, 2002.