I Introduction
In most engineering applications, we need to solve a system of linear equations. One popular approach to solving such a system is based on least squares. An attractive way of solving systems of equations is by iterative algorithms [1], owing to their computational simplicity and the robustness of their solutions (as compared to their batch counterparts).
Iterative algorithms are in fact ubiquitous in machine learning. In this paper we pick certain widely used iterative algorithms for our study: 1) the Backpropagation algorithm of neural networks, 2) the Least Mean Square algorithm, 3) the Kernel Least Mean Square algorithm and 4) the Dantzig selector. One of the many ways of nonlinear parameter estimation is Neural Networks (NN) [2, 3]. An NN structure can consist of just one neuron, in which case it is nothing but the widely known least mean square algorithm (not considering the activation function) [3]. When there are many layers, the weights are adapted using the Backpropagation algorithm [3]. Apart from neural networks, there is another way of "implicit" parameter estimation. These kernel parameter estimation techniques [2, 3, 4] transform data that is not linearly separable in the original input space into a (possibly infinite-dimensional) kernel space where it can be linearly separated. The beauty of kernel techniques is that we are saved from taking inner products in higher dimensional spaces by what we know as the "kernel trick" [4], hence avoiding the curse of dimensionality. Primarily these kernel techniques were applied in nonlinear Support Vector Machines. Recently there has been much interest in "kernelizing" other algorithms in adaptive filter theory. For example, the Recursive Least Squares (RLS)
[5] now has a kernelized namesake called Kernel Recursive Least Squares (KRLS) [6]. Also, the well known Least Mean Square (LMS) [5] algorithm has an analogue named the Kernel Least Mean Square (KLMS) [4] algorithm. All of the above are examples of iterative algorithms that find profound usage in everyday life. For example, one can hardly think of a channel equaliser which does not use one of the above algorithms at its root [4, 5]. Adaptive denoising [7], channel estimation [8] and adaptive beamforming in smart antennas [9] are other areas in which these iterative algorithms are used. Sometimes, when the number of equations is less than the number of unknowns (i.e. the problem is ill-posed), we need to use what is called regularization [10]. Examples include Tikhonov regularization (for the $\ell_2$ norm) and the Least Absolute Shrinkage and Selection Operator (LASSO) (for the $\ell_1$ norm). The difference between the two approaches lies in the assumed regularization prior pdfs: for Tikhonov regularization the pdf is Gaussian, while for LASSO the pdf is Laplacian. One of the problems with these regularization techniques is the increase in computational complexity of the optimization problem due to the regularization term. We tackle this problem by introducing a multiplicative/variable step-size factor for the error term, motivated by a result in functional analysis. This achieves the same effect as LASSO/ridge regression. This factor results in a lower cost function (in both the $\ell_1$ and $\ell_2$ norm senses) at convergence compared to the parent iterative algorithm without the factor.

The rest of the paper is organised as follows. Section II gives a review of the LMS algorithm, the KLMS algorithm and the Backpropagation algorithm in a unified way. It also details a study theme which involves minimizing the LASSO and Dantzig selector problems via subgradients. Section III describes our approach briefly and gives the corresponding contraction principle based counterparts of the algorithms. Section IV gives results which validate our claims. Section V concludes the paper.
II Examples of Some Common Iterative Algorithms in Machine Learning and the LASSO
Iterative parameter estimation involves a parameter, say $\mathbf{w}$ (please observe that we define it in an abstract manner: it can be a vector or a matrix depending on the context), which is estimated by the following rule:
$$\mathbf{w}_{k+1} = \mathbf{w}_k + \mathbf{u}_k \qquad (1)$$
In Eqn. 1, $k$ is the iteration number and $\mathbf{u}_k$ is the adaptation noise. Let us consider a data set $\{\mathbf{x}_i\}_{i=1}^{N}$, where $N$ is the data set size. The corresponding labels are $\{t_i\}_{i=1}^{N}$. Now, we present our three objects of study in this paper.
II-1 The LMS Algorithm [5]
In this case,
$$\mathbf{u}_k = \mu\, e_k\, \mathbf{x}_k, \qquad e_k = t_k - \mathbf{w}_k^{T}\mathbf{x}_k \qquad (2)$$
Here $\mu$ denotes the step size and $e_k$ the instantaneous error. This is what is commonly known as the Widrow-Hoff adaptation rule.
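As a concrete illustration, the Widrow-Hoff update can be sketched in a few lines of NumPy (a minimal sketch; the function name, data layout and parameter values are ours, not from the paper):

```python
import numpy as np

def lms(X, t, mu=0.05, n_epochs=1):
    """Widrow-Hoff LMS: w <- w + mu * e_k * x_k  (Eqns. 1-2)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x_k, t_k in zip(X, t):
            e_k = t_k - w @ x_k      # instantaneous error
            w = w + mu * e_k * x_k   # Widrow-Hoff adaptation
    return w
```

For a stable run, $\mu$ must be small relative to the input power, which is the standard LMS step-size condition [5].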
II-2 The KLMS Algorithm
Invoking Eqn. 1 in a different spirit, we may write it as follows,
$$\mathbf{w}_k = \mathbf{w}_{k-1} + \mathbf{u}_{k-1} \qquad (3)$$
Ignoring initial conditions, further decomposition would yield,
$$\mathbf{w}_k = \sum_{i=0}^{k-1} \mathbf{u}_i \qquad (4)$$
Hence,
$$\mathbf{w}_k = \mu \sum_{i=0}^{k-1} e_i\, \mathbf{x}_i \qquad (5)$$
This algorithm, given in [4], invokes the kernel trick and is meant particularly for linearly non-separable datasets. If we take the inner product of the observation $\mathbf{x}_k$ with the hypothesis $\mathbf{w}_k$, we get the output $y_k$. If this inner product is a kernel inner product [4], we get the following equation from Eqn. 2,
$$y_k = \mu \sum_{i=0}^{k-1} e_i\, \kappa(\mathbf{x}_i, \mathbf{x}_k) \qquad (6)$$
Here, we have the following expression for the error term
$$e_k = t_k - y_k \qquad (7)$$
Eqn. 6 can further be written in a recursive manner as follows,
$$f_k(\cdot) = f_{k-1}(\cdot) + \mu\, e_{k-1}\, \kappa(\mathbf{x}_{k-1}, \cdot), \qquad y_k = f_k(\mathbf{x}_k) \qquad (8)$$
Here the kernel inner product operator is defined as follows: given two matrices $A$ and $B$, the element belonging to the $i^{\text{th}}$ row and $j^{\text{th}}$ column of the resultant matrix is,
$$[\kappa(A,B)]_{ij} = \exp\left(-\frac{\|\mathbf{a}_i - \mathbf{b}_j\|^2}{2\sigma^2}\right) \qquad (9)$$
where $\mathbf{a}_i$ and $\mathbf{b}_j$ denote the $i^{\text{th}}$ row of $A$ and the $j^{\text{th}}$ row of $B$. The kernel width $\sigma$ is a free parameter; its value can be found analytically or by cross validation, and is problem dependent, or more precisely, dependent on the spread of the data.
This makes $\mathbf{u}_k$ for this case to be,
$$\mathbf{u}_k = \mu\, e_k\, \kappa(\mathbf{x}_k, \cdot) \qquad (10)$$
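The growing-sum structure of Eqns. 5-8 can be sketched as follows (a minimal illustration with a Gaussian kernel; names and parameter values are our own choices, not from the paper):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian kernel inner product (Eqn. 9)."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def klms(X, t, mu=0.5, sigma=1.0):
    """KLMS: the hypothesis is kept implicitly as the list of past
    centres x_i and coefficients mu*e_i (Eqns. 5-8)."""
    centres, coeffs, errors = [], [], []
    for x_k, t_k in zip(X, t):
        # y_k = mu * sum_i e_i * kappa(x_i, x_k)   (Eqn. 6)
        y_k = sum(c * gaussian_kernel(xc, x_k, sigma)
                  for xc, c in zip(centres, coeffs))
        e_k = t_k - y_k                            # Eqn. 7
        centres.append(x_k)
        coeffs.append(mu * e_k)
        errors.append(e_k)
    return centres, coeffs, errors
```

Note that the hypothesis is never formed explicitly in the feature space; only kernel evaluations against the stored centres are needed, which is exactly the kernel trick.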
II-3 The Backpropagation Algorithm [2]
In this case (thanks to the abstract terminology, we had kept the meanings in Eqn. 1 general so that everything may fit in), $\mathbf{w}$ is a set of weight matrices $W^{(l)}$ ordered by another index $l$. We consider the weights cascaded in any combination, and $l$ denotes the layer number, which goes from $1$ to $L$. Here $L$ is the final layer, without loss of generality.
Neural Network training consists of two passes. In the first pass, the outputs are calculated. After that, the gradients are estimated by a recursive algorithm called the backpropagation algorithm.
The technique for estimating these gradients is given in [11]. We repeat the derivation given there for the ease of the reader.

Forward Pass: Calculate the outputs of all neurons by sending the data through the network. Let the activations/outputs of layer $l$ be denoted by $\mathbf{a}^{(l)}$, with pre-activations $\mathbf{z}^{(l)}$.

Output Node: Update the output error term $\boldsymbol{\delta}^{(L)}$ by the gradient of the cost function $J$ with respect to the output activations, i.e., $\boldsymbol{\delta}^{(L)} = \nabla_{\mathbf{a}^{(L)}} J \odot f'(\mathbf{z}^{(L)})$.

Backpropagation: $\boldsymbol{\delta}^{(l)} = \left((W^{(l)})^{T} \boldsymbol{\delta}^{(l+1)}\right) \odot f'(\mathbf{z}^{(l)})$. Here $\odot$ is the Hadamard product and $f$ is the activation function.

Concurrently update the weight gradients, $\nabla_{W^{(l)}} J = \boldsymbol{\delta}^{(l+1)} (\mathbf{a}^{(l)})^{T}$.
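The steps above can be sketched for a single hidden layer with sigmoid activations and a squared-error cost (a minimal sketch; the network shape and the cost are illustrative choices, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(W1, W2, x, t):
    """One forward pass followed by the backpropagation recursion;
    returns the gradients of J = 0.5*||a2 - t||^2 wrt W1 and W2."""
    # forward pass: activations layer by layer
    z1 = W1 @ x
    a1 = sigmoid(z1)
    z2 = W2 @ a1
    a2 = sigmoid(z2)
    # output-node delta: dJ/da2, Hadamard f'(z2)
    delta2 = (a2 - t) * a2 * (1.0 - a2)
    # backpropagate one layer: (W2^T delta2) Hadamard f'(z1)
    delta1 = (W2.T @ delta2) * a1 * (1.0 - a1)
    # weight gradients
    return np.outer(delta1, x), np.outer(delta2, a1)
```

A finite-difference check of one gradient entry is a cheap sanity test for any backpropagation implementation.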
II-4 LASSO and Dantzig Selector
LASSO [12] is a popular example of $\ell_1$ regularization, in which the cost function to be optimized is of the form,
$$J(\mathbf{w}) = \|\mathbf{t} - X\mathbf{w}\|_2^2 + \lambda \|\mathbf{w}\|_1 \qquad (11)$$
In the above equation, $\lambda$ is a regularization parameter. Also, $\mathbf{w}$ is the parameter to be estimated, and $\mathbf{t}$ and $X$ are the targets and the data values respectively. One of the elegant ways to implement this algorithm is by using interior point methods [13] and subgradients [1].
Similarly, the cost function for the Dantzig selector is given by,
$$J(\mathbf{w}) = \|X^{T}(\mathbf{t} - X\mathbf{w})\|_\infty + \lambda \|\mathbf{w}\|_1 \qquad (12)$$
The descent directions for these nonsmooth cost functions are given by their subgradients [1] as follows,
$$\mathbf{g} = -2X^{T}(\mathbf{t} - X\mathbf{w}) + \lambda\, \mathrm{sign}(\mathbf{w}) \qquad (13)$$
and,
$$\mathbf{g} = -\,\mathrm{sign}\!\left(\mathbf{x}_{j_{\max}}^{T}(\mathbf{t} - X\mathbf{w})\right) X^{T}\mathbf{x}_{j_{\max}} + \lambda\, \mathrm{sign}(\mathbf{w}), \qquad j_{\max} = \arg\max_j \left|\mathbf{x}_j^{T}(\mathbf{t} - X\mathbf{w})\right| \qquad (14)$$
A noteworthy comment is as follows: as the subgradient of the $\ell_\infty$ norm of the error is the convex hull of the individual coordinate gradients, we may choose any one of them as a valid subgradient. But instead of choosing randomly, we may choose according to some criterion, such as the direction with maximum cost (and which hence needs to be penalized), as indicated by the subscript in Eqn. 14.
Finally, we would iterate by,
$$\mathbf{w}_{k+1} = \mathbf{w}_k - \mu\, \mathbf{g}_k^{\text{LASSO}} \qquad (15)$$
or,
$$\mathbf{w}_{k+1} = \mathbf{w}_k - \mu\, \mathbf{g}_k^{\text{Dantzig}} \qquad (16)$$
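The subgradient iteration for the LASSO cost can be sketched as batch subgradient descent (a minimal sketch; the step size and iteration count are illustrative choices, not from the paper):

```python
import numpy as np

def lasso_subgradient(X, t, lam=0.1, mu=1e-3, n_iter=1000):
    """Subgradient descent on J(w) = ||t - X w||_2^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # one valid subgradient of the nonsmooth cost (Eqn. 13);
        # np.sign(0) = 0 is an admissible choice at w_i = 0
        g = -2.0 * X.T @ (t - X @ w) + lam * np.sign(w)
        w = w - mu * g            # descent step (Eqn. 15)
    return w

def lasso_cost(X, t, w, lam=0.1):
    return np.sum((t - X @ w) ** 2) + lam * np.sum(np.abs(w))
```

The Dantzig variant differs only in the subgradient used (Eqns. 14 and 16).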
II-5 Recursive Least Squares (RLS)
The steps of the RLS recursion given in [5], for a given forgetting factor $\lambda$, are enumerated as follows:
$$\mathbf{g}_k = \frac{P_{k-1}\mathbf{x}_k}{\lambda + \mathbf{x}_k^{T} P_{k-1} \mathbf{x}_k}$$
$$e_k = t_k - \mathbf{w}_{k-1}^{T}\mathbf{x}_k$$
$$\mathbf{w}_k = \mathbf{w}_{k-1} + \mathbf{g}_k\, e_k$$
$$P_k = \lambda^{-1}\left(P_{k-1} - \mathbf{g}_k \mathbf{x}_k^{T} P_{k-1}\right)$$
Here $P_k$ is an estimate of the inverse of the autocorrelation matrix at time $k$.
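The recursion can be sketched as follows (a minimal sketch; the initialization constant `delta` for $P_0$ is a common practical choice, not from the paper):

```python
import numpy as np

def rls(X, t, lam=0.99, delta=100.0):
    """RLS with forgetting factor lam; P tracks the inverse
    autocorrelation matrix estimate."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)          # initial inverse-correlation estimate
    for x_k, t_k in zip(X, t):
        e_k = t_k - w @ x_k                     # a priori error
        g = P @ x_k / (lam + x_k @ P @ x_k)     # gain vector
        w = w + g * e_k                          # weight update
        P = (P - np.outer(g, x_k @ P)) / lam     # inverse-correlation update
    return w
```
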
III Proposed Algorithm
According to the contraction mapping theorem presented in [14], if $f$ is a contraction on a Banach space and we are dealing with a recursion of the form,
$$\mathbf{w}_{k+1} = f(\mathbf{w}_k) \qquad (17)$$
then,
$$\|\mathbf{w}_k - \mathbf{w}^*\| \le \frac{L^{k}}{1-L}\, \|\mathbf{w}_1 - \mathbf{w}_0\| \qquad (18)$$
where $\mathbf{w}^*$ is the fixed point of the iteration and $L < 1$ is the contraction constant.
In our case, $f$ is the adaptation mapping of Eqn. 1, and,
$$\|f(\mathbf{w}_1) - f(\mathbf{w}_2)\| \le L\, \|\mathbf{w}_1 - \mathbf{w}_2\|, \qquad 0 \le L < 1 \qquad (19)$$
This is true since, with every iteration, the weights move closer and closer towards the equilibrium point(s) (which we hope is unique, depending on the algorithm, or else all equilibrium points are almost equally preferable to us).
We assume infinitesimal adaptation noise, which is justified because the step size is generally chosen to be small. This assumption is sometimes invoked to analyze iterative algorithms, as in [5]. With this assumption,
$$f(\mathbf{w}_k) = \mathbf{w}_k + \mathbf{u}_k \approx \mathbf{w}_k \qquad (20)$$
Hence the residual may be attributed to $\mathbf{u}$. Hence, by the tight inequality $\|\mathbf{w}_k + \mathbf{u}_k\| \le \|\mathbf{w}_k\| + \|\mathbf{u}_k\|$, we can assign $L = \|\mathbf{w}_k\|_1$.
Hence, for the LMS and NN approaches, our result becomes,
$$\|\mathbf{w}_k - \mathbf{w}^*\| \le \frac{\|\mathbf{w}\|_1^{\,k}}{1-\|\mathbf{w}\|_1}\, \|\mathbf{u}\| \qquad (21)$$
For the KLMS approach, the result is,
$$\|\mathbf{w}_k - \mathbf{w}^*\| \le \frac{\|\mathbf{w}\|_{\kappa}^{\,k}}{1-\|\mathbf{w}\|_{\kappa}}\, \|\mathbf{u}\| \qquad (22)$$
We can observe the tendency of the deviation to increase as $\|\mathbf{w}\|_1$ goes near $1$. Also, there is a tendency to diverge if $\|\mathbf{w}\|_1 > 1$. Hence our proposal is to use a correcting factor multiplied into the error term.
This correcting factor has the unwanted $\|\mathbf{w}_k\|_1$ in the denominator and $1 - \|\mathbf{w}_k\|_1$ in the numerator. This gives a modified Widrow-Hoff paradigm as follows.
$$\mathbf{w}_{k+1} = \mathbf{w}_k + \mu\, \frac{1-\|\mathbf{w}_k\|_1}{\|\mathbf{w}_k\|_1}\, e_k\, \mathbf{x}_k \qquad (23)$$
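A sketch of the modified update, assuming the correcting factor takes the form $(1-\|\mathbf{w}_k\|_1)/\|\mathbf{w}_k\|_1$ discussed above; the clipping of the factor to $[0,1]$ and the small constant guarding the first iterations (where $\|\mathbf{w}\|_1 = 0$) are our own practical safeguards, not part of the paper:

```python
import numpy as np

def modified_lms(X, t, mu=0.05, eps=1e-3):
    """LMS with the contraction-based correcting factor
    (1 - ||w||_1)/||w||_1 multiplying the error term (Eqn. 23)."""
    w = np.zeros(X.shape[1])
    for x_k, t_k in zip(X, t):
        e_k = t_k - w @ x_k
        l1 = np.linalg.norm(w, 1) + eps          # guard against ||w||_1 = 0
        factor = np.clip((1.0 - l1) / l1, 0.0, 1.0)
        w = w + mu * factor * e_k * x_k
    return w
```

Note the behaviour the factor induces: updates are full-sized while $\|\mathbf{w}\|_1$ is small and shrink as $\|\mathbf{w}\|_1$ approaches unity, discouraging large-$\ell_1$ solutions.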
Similarly, a modified Backpropagation Algorithm would be given by the following equation.
$$W^{(l)}_{k+1} = W^{(l)}_k - \mu\, \frac{1-\|W^{(l)}_k\|_1}{\|W^{(l)}_k\|_1}\, \nabla_{W^{(l)}} J \qquad (24)$$
In the above Eqn. 24, $W^{(l)}$ denotes the weight matrix of the $l^{\text{th}}$ layer, and the gradient $\nabla_{W^{(l)}} J$ is found by the backpropagation algorithm in [11].
Also, a modified KLMS algorithm would be given by,
$$f_k(\cdot) = f_{k-1}(\cdot) + \mu\, \frac{1-\|\mathbf{w}_{k-1}\|_{\kappa}}{\|\mathbf{w}_{k-1}\|_{\kappa}}\, e_{k-1}\, \kappa(\mathbf{x}_{k-1}, \cdot) \qquad (25)$$
Contraction principle based LASSO and Dantzig selector problems may be handled by the following equations.
$$\mathbf{w}_{k+1} = \mathbf{w}_k - \mu\, \frac{1-\|\mathbf{w}_k\|_1}{\|\mathbf{w}_k\|_1}\, \mathbf{g}_k^{\text{LASSO}} \qquad (26)$$
$$\mathbf{w}_{k+1} = \mathbf{w}_k - \mu\, \frac{1-\|\mathbf{w}_k\|_1}{\|\mathbf{w}_k\|_1}\, \mathbf{g}_k^{\text{Dantzig}} \qquad (27)$$
Similarly, the modified RLS is obtained by scaling the error term $e_k$ in the weight update of the RLS recursion with the same correcting factor, leaving the remaining recursions unchanged.
III-A Modeling the Problem in an Operator Theoretic Perspective
In this section, we give step-by-step details of our problem formulation, based on the contraction principle [14], for the ease of the reader. Let us assume a dynamical system which evolves as follows,
$$\mathbf{w}_{k+1} = f(\mathbf{w}_k) \qquad (28)$$
Let us assume that $\mathbf{w}^*$ is an equilibrium point. If that is the case, then $f(\mathbf{w}^*) = \mathbf{w}^*$ (by the definition of an equilibrium point), and we may subtract $\mathbf{w}^*$ from both sides of the equation.
$$\mathbf{w}_{k+1} - \mathbf{w}^* = f(\mathbf{w}_k) - \mathbf{w}^* \qquad (29)$$
or,
$$\mathbf{w}_{k+1} - \mathbf{w}^* = f(\mathbf{w}_k) - f(\mathbf{w}^*) \qquad (30)$$
We define a new variable $\boldsymbol{\epsilon}_k = \mathbf{w}_k - \mathbf{w}^*$. Hence Eqn. 30 becomes,
$$\boldsymbol{\epsilon}_{k+1} = f(\mathbf{w}_k) - f(\mathbf{w}^*) \qquad (31)$$
Thus, using the contraction property of Eqn. 19,
$$\|\boldsymbol{\epsilon}_{k+1}\| \le L\, \|\boldsymbol{\epsilon}_k\| \qquad (32)$$
$$\|\boldsymbol{\epsilon}_k\| \le L\, \|\boldsymbol{\epsilon}_{k-1}\| \qquad (33)$$
$$\|\boldsymbol{\epsilon}_{k-1}\| \le L\, \|\boldsymbol{\epsilon}_{k-2}\| \qquad (34)$$
and so on till,
$$\|\boldsymbol{\epsilon}_k\| \le L^{k}\, \|\boldsymbol{\epsilon}_0\| \qquad (35)$$
also, by the triangle inequality,
$$\|\boldsymbol{\epsilon}_0\| = \|\mathbf{w}_0 - \mathbf{w}^*\| \le \|\mathbf{w}_0 - \mathbf{w}_1\| + \|\mathbf{w}_1 - \mathbf{w}^*\| \qquad (36)$$
Since $f$ is a contraction, $\|\mathbf{w}_1 - \mathbf{w}^*\| = \|f(\mathbf{w}_0) - f(\mathbf{w}^*)\| \le L\,\|\boldsymbol{\epsilon}_0\|$. Hence,
$$\|\boldsymbol{\epsilon}_0\| \le \|\mathbf{w}_1 - \mathbf{w}_0\| + L\, \|\boldsymbol{\epsilon}_0\| \qquad (37)$$
This yields the upper bound,
$$\|\boldsymbol{\epsilon}_0\| \le \frac{\|\mathbf{w}_1 - \mathbf{w}_0\|}{1-L} \qquad (38)$$
Then, by the definition of $\boldsymbol{\epsilon}_k$ and Eqn. 35,
$$\|\mathbf{w}_k - \mathbf{w}^*\| \le \frac{L^{k}}{1-L}\, \|\mathbf{w}_1 - \mathbf{w}_0\| \qquad (39)$$
which is exactly Eqn. 18.
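A tiny numeric check of the a priori bound in Eqn. 18, using the scalar contraction $f(w) = w/2 + 1$ (an illustrative choice with $L = 1/2$ and fixed point $w^* = 2$, not an example from the paper):

```python
# f(w) = 0.5*w + 1 is a contraction with constant L = 0.5 and
# fixed point w* = 2 (since f(2) = 2).
L = 0.5
f = lambda w: L * w + 1.0
w_star = 2.0

w0 = 0.0
w1 = f(w0)
const = abs(w1 - w0) / (1.0 - L)   # ||w_1 - w_0|| / (1 - L)

w = w0
for k in range(1, 25):
    w = f(w)
    # a priori estimate: |w_k - w*| <= L^k * ||w_1 - w_0|| / (1 - L)
    assert abs(w - w_star) <= L ** k * const + 1e-12
```

For this linear map the bound is attained with equality, which makes it a convenient end-to-end check of the derivation.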
III-B Dependence on the Indicator Variable: Modification by Normalization with the Dual Norm of the Data
In a classification problem, we would generally desire,
$$\langle \mathbf{w}, \mathbf{x} \rangle \ge 1 \quad \text{or} \quad \langle \mathbf{w}, \mathbf{x} \rangle \le -1 \qquad (40)$$
depending on which class the data point belongs to. Here the inner product can be linear or a kernel inner product (in which case it is implicit). Hence, by Hölder's inequality,
$$1 \le |\langle \mathbf{w}, \mathbf{x} \rangle| \le \|\mathbf{w}\|_1\, \|\mathbf{x}\|_\infty \qquad (41)$$
This gives,
$$\|\mathbf{w}\|_1 \ge \frac{1}{\|\mathbf{x}\|_\infty} \qquad (42)$$
In the situation that $\|\mathbf{x}\|_\infty > 1$, we must divide the value of $\mathbf{x}$ by $\|\mathbf{x}\|_\infty$ (to follow the definition of a norm given in [14]). In all previous sections, the derivations assumed $\mathbf{x}$ to be in the unit ball.
Hence our norm for the contraction constant is,
$$L = \|\mathbf{w}\|_1\, \|\mathbf{x}\|_\infty \qquad (43)$$
Equivalently, for the kernel case, our normalization factor would be $\sqrt{\kappa(\mathbf{x},\mathbf{x})}$. Please note that we do not need inner products and Hölder's inequality to justify our normalization. Such arguments are valid only when $\mathbf{w}$ is a vector. However, when $\mathbf{w}$ is a matrix, we need the output of the operator to be bounded within the unit ball. Hence, the same normalization factor may be justified by the duality of the $\ell_1$ and $\ell_\infty$ norms.
III-C Relationship Between Step Size and the Upper Bound on the L1 Norm
Let us assume that the $\ell_1$ norm of the weights is a small number at convergence. The step size is inversely proportional to what is called the "time constant" [5] (a measure of the speed of convergence of the algorithm). Hence, if we want convergence within a given number of iterations and down to some desired error floor, we obtain the following (conflicting) design equations,
(44) 
(45) 
Here the constants involved are given in [5].
III-D Equivalence to LASSO
From Eqn. 21,
(46) 
By the triangle inequality,
(47) 
As the remaining term is analytically independent of $\mathbf{w}$, minimizing the upper bound would also minimize the upper bound on the $\ell_1$ norm of the weights. Hence our algorithm plays a significant role in minimizing the upper bound (i.e. the $\ell_1$ ball) within which the weights lie.
III-E Useful Properties I: Concavity in w within the Unit Ball
Let us assume two candidate weights $\mathbf{w}_1$ and $\mathbf{w}_2$, and a mixing coefficient $\alpha \in [0,1]$.
From convexity of any norm,
$$\|\alpha \mathbf{w}_1 + (1-\alpha)\mathbf{w}_2\|_1 \le \alpha\|\mathbf{w}_1\|_1 + (1-\alpha)\|\mathbf{w}_2\|_1 \qquad (48)$$
Hence, as the norm is positive definite here (it is positive semi-definite in general, but the weights of an iterative algorithm are seldom exactly zero),
$$\frac{1}{\|\alpha \mathbf{w}_1 + (1-\alpha)\mathbf{w}_2\|_1} \ge \frac{1}{\alpha\|\mathbf{w}_1\|_1 + (1-\alpha)\|\mathbf{w}_2\|_1} \qquad (49)$$
Subtracting 1 from both sides,
$$\frac{1}{\|\alpha \mathbf{w}_1 + (1-\alpha)\mathbf{w}_2\|_1} - 1 \ge \frac{1}{\alpha\|\mathbf{w}_1\|_1 + (1-\alpha)\|\mathbf{w}_2\|_1} - 1 \qquad (50)$$
Please note that if the parent adaptation is convex and stable, each iteration is a contraction towards the optimal weight. Hence the norm of the (normalized) weights (see Section III-B) should always be less than or equal to unity. Thus the proposed factor is always positive in normal cases.
III-F Conclusion From the Above Analysis
This shows that, in effect, we get a curve similar to the one shown in Figure 1. This does not come as a surprise; such curves have been reported in the literature [15]. However, the beauty of our approach is that the step size is adjusted in such a manner that the LASSO cost function is minimized, and hence it has a nice "regularization" aspect to it. These good properties also help us in selecting the most favourable equilibrium point (in a least-$\ell_1$-norm sense) when there are many.
IV Results
IV-A Experimental Setup
In Figure 2, a BPSK constellation was generated and passed through a channel with coefficients [0.3, 0.8]. After that, Additive White Gaussian Noise (AWGN) was added. This data was input to the LMS and the modified LMS algorithms.
In Figure 3, the same BPSK constellation was passed through the same 2-tap channel, then through a nonlinearity, after which noise was added.
In Figure 5, the 2-tap channel and the kernel width $\sigma$ were the same ones used in [4], and the same nonlinearity was applied.
For Figure 4, the 2-tap channel and the nonlinearity were changed, just for the sake of variety: here we increase the intersymbol interference and reduce the nonlinearity.
IV-B Observations
From Figure 2, we observe that the proposed algorithm converges faster to the same testing MSE than the original LMS algorithm. This is a manifestation of the superior performance of the modified error term.
From Figure 3, we again see a significantly faster convergence rate. Also, the test error reaches a lower floor for our proposed algorithm than the original KLMS algorithm.
From Figure 4, we see that the training and testing cost function values are again much lower for our proposal, and that our proposal is more robust. The original neural network configuration moves from one local minimum of the testing cost function to another during testing, whereas our scaling factor maintains its value in a steady manner. We expect its value to increase after some epochs, but up to 200 epochs we did not observe any increase, which is an example of its robustness.
From Figure 5, we see that the training and testing cost function values are lower for our proposal. We do see an increase in the testing MSE as a function of epochs in this case; however, the increase over 150 epochs is marginal, and the curve stays well below the original NN curve.
From Figure 7, we find that the contraction principle based stochastic subgradient LASSO converges faster than the original stochastic subgradient LASSO. We used synthetic data for this particular simulation, generating random matrices from a uniform distribution and fitting a regression between them.
From Figure 6, we find that our contraction principle based RLS variant outdoes the conventional RLS algorithm. A Binary Phase Shift Keying (BPSK) constellation is randomly generated and passed through an FIR filter, after which white Gaussian noise is added at 20 dB. With this dataset, we compared the conventional RLS and our RLS variant, with the LASSO objective as the benchmark.
It is also worth mentioning that the above curves are not instantaneous curves; they have been obtained by averaging over at least 25 Monte Carlo simulations.
V Conclusion
A number of common iterative algorithms have been evaluated and compared with their newly proposed contraction principle based variants. In all the scenarios, we get a performance boost after applying our modification to the parent algorithms. We also show a relation between our approach and the LASSO. These results, which show a lower testing error on all occasions, attest to the merit of our approach.
References
 [1] R. T. Rockafellar, Convex Analysis. Princeton University Press, 1970.
 [2] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
 [3] S. Haykin, Neural Networks and Learning Machines. PHI, 2010.
 [4] W. Liu, P. P. Pokharel, and J. C. Principe, “The Kernel LeastMeanSquare Algorithm,” IEEE Transactions on Signal Processing, vol. 56, no. 2, pp. 543–554, Feb. 2008. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4410463
 [5] A. H. Sayed, Adaptive Filters. John Wiley, 2008.
 [6] Y. Engel, S. Mannor, and R. Meir, “The Kernel Recursive Least Squares Algorithm, Technical Report ICNC03001,” 2003.
 [7] S. Sriram, S. Nitin, K. Prabhu, and M. Bastiaans, “Signal denoising techniques for partial discharge measurements,” IEEE Transactions on Dielectrics and Electrical Insulation, pp. 1182–1191, Dec. 2005. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1561798
 [8] S. Coleri, M. Ergen, A. Puri, and A. Bahai, "Channel estimation techniques based on pilot arrangement in OFDM systems," IEEE Transactions on Broadcasting, vol. 48, no. 3, pp. 223–229, Sep. 2002. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1033876
 [9] C. C. Gaudes, S. Member, I. Santamaría, S. Member, J. Vía, E. M. Gómez, and T. S. Paules, “Robust Array Beamforming With Sidelobe Control Using Support Vector Machines,” vol. 55, no. 2, pp. 574–584, 2007.
 [10] [Online]. Available: http://en.wikipedia.org/wiki/Regularization_(mathematics)
 [11] [Online]. Available: http://ufldl.stanford.edu/wiki/index.php/Deriving_gradients_using_the_backpropagation_idea
 [12] R. Tibshirani, “Regression shrinkage and selection via the lasso: a retrospective,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 73, no. 3, pp. 273–282, Jun. 2011. [Online]. Available: http://doi.wiley.com/10.1111/j.14679868.2011.00771.x
 [13] S. Boyd and L. Vandenberghe, Convex optimization, 2004. [Online]. Available: http://books.google.com/books?hl=en&lr=&id=mYm0bLd3fcoC&oi=fnd&pg=PR11&dq=Convex+Optimization&ots=tc7RxKKDH1&sig=SP3WjwBMZpLjqQm3iEm9c5HH4
 [14] T. W. Gamelin and R. E. Greene, Introduction to Topology. Dover Publications, 1999.
 [15] Z. Liyi, C. Lei, and S. Yunshan, “Variable Stepsize CMA Blind Equalization based on Nonlinear Function of Error Signal,” 2009 WRI International Conference on Communications and Mobile Computing, vol. 2, no. 4, pp. 396–399, Jan. 2009. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4797026