# Contraction Principle based Robust Iterative Algorithms for Machine Learning

Iterative algorithms are ubiquitous in the field of data mining. Widely known examples of such algorithms are the least mean square algorithm and the backpropagation algorithm of neural networks. Our contribution in this paper is an improvement upon these iterative algorithms in terms of their respective performance metrics and robustness. This improvement is achieved by a new scaling factor which multiplies the error term. Our analysis shows that, in essence, we are minimizing the corresponding LASSO cost function, which is the reason for the increased robustness. We also give closed-form expressions for the number of iterations needed for convergence and for the MSE floor of the original cost function, for a minimum targeted value of the L1 norm. As a concluding theme based on the stochastic subgradient algorithm, we compare the well-known Dantzig selector with our contraction principle based algorithm. Through these simulations, we attempt to show the optimality of our approach for any widely used parent iterative optimization problem.


## I Introduction

In most engineering applications, we need to solve a system of linear equations. One of the popular approaches to solving such a system is based on least squares. An attractive way of solving systems of equations is by iterative algorithms [1], due to their computational simplicity and the robustness of the solution (as compared to their batch counterparts).

Actually, iterative algorithms are ubiquitous in most machine learning algorithms. In this paper we pick certain widely used iterative algorithms for our study: 1) the Backpropagation algorithm of neural networks, 2) the Least Mean Square algorithm, 3) the Kernel Least Mean Square algorithm, and 4) the Dantzig selector. One of the many ways of non-linear parameter estimation is Neural Networks (NN) [2, 3]. An NN structure can consist of just one neuron, in which case it is nothing but the widely known least mean square algorithm (not considering the activation function) [3]. In case there are many layers, the weights are adapted using the Backpropagation algorithm [3].

Apart from neural networks, there is another way of "implicit" parameter estimation. These kernel parameter estimation techniques [2, 3, 4] transform data that is linearly non-separable in its indigenous space to an infinite-dimensional kernel space where it can be linearly separated. The beauty of kernel techniques is that we are saved from taking inner products in higher dimensional spaces by what is known as the "kernel trick" [4], hence avoiding the curse of dimensionality. Primarily these kernel techniques were applied in nonlinear Support Vector Machines. Recently there has been much interest in "kernelizing" other algorithms in adaptive filter theory. For example, Recursive Least Squares (RLS) [5] now has a kernelized namesake called Kernel Recursive Least Squares (KRLS) [6]. Also, the well-known Least Mean Square (LMS) [5] algorithm has an analogue named the Kernel Least Mean Square (KLMS) [4] algorithm. All the above are examples of iterative algorithms that find profound usage in everyday life. For example, one can hardly think of a channel equaliser which does not use one of the above algorithms at its root [4, 5]. Adaptive denoising [7], channel estimation [8] and adaptive beamforming in smart antennas [9] are other areas in which these iterative algorithms are used. Sometimes, when the number of equations is less than the number of unknowns (i.e., the problem is ill-posed), we need to use what is called regularization [10]. Examples include Tikhonov regularization (for the $\ell_2$ norm) and the Least Absolute Shrinkage and Selection Operator (LASSO) (for the $\ell_1$ norm). The difference between the two approaches lies in the assumed regularization prior pdfs: for Tikhonov regularization the pdf is Gaussian, while for LASSO the pdf is Laplacian.

One of the problems with these regularization techniques is the increase in computational complexity of the optimization problem due to the regularization term. We tackle this problem by introducing a multiplicative, variable step-size factor for the error term, motivated by a result in functional analysis. This achieves the same effect as LASSO/ridge regression: the factor results in a lower cost function (in both the $\ell_1$ and $\ell_2$ norm senses) at convergence, as compared to the parent iterative algorithm without the factor.

The rest of the paper is organised as follows. Section II reviews the LMS algorithm, the KLMS algorithm and the Backpropagation algorithm in a unified way; it also details a study theme which involves minimizing the LASSO and Dantzig selector problems via subgradients. Section III describes our approach and gives the corresponding contraction principle based counterparts of the algorithms. Section IV gives results which validate our claims. Section V concludes the paper.

## II Examples of Some Common Iterative Algorithms in Machine Learning and the LASSO

Iterative parameter estimation involves a parameter, say w (note that we define it in an abstract manner: it can be a vector or a matrix depending on the context), which is estimated by the following rule:

$$w(n+1) = w(n) + \delta(n) \tag{1}$$

In Eqn. 1, $n$ is the iteration number and $\delta(n)$ is the adaptation noise. Let us consider a data set $\{X_n\}_{n=1}^{N}$, where $N$ is the data set size. The corresponding labels are $\{t_n\}_{n=1}^{N}$. Now, we present our objects of study in this paper.

#### II-1 The LMS Algorithm [5]

In this case,

$$\delta(n) = \mu\,(t_n - w(n)^T X_n)\,X_n \tag{2}$$

Here $\mu$ denotes the step size. This is what is commonly known as the Widrow-Hoff adaptation rule.
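
The Widrow-Hoff update of Eqn. 2 can be sketched in a few lines of Python. This is a minimal illustration on hypothetical data; the function name, the step size `mu`, the epoch count and the toy model are arbitrary choices of ours, not values from our experiments:

```python
import numpy as np

def lms(X, t, mu=0.05, n_epochs=1):
    """Plain LMS / Widrow-Hoff adaptation (Eqn. 2)."""
    n_samples, dim = X.shape
    w = np.zeros(dim)
    for _ in range(n_epochs):
        for n in range(n_samples):
            e = t[n] - w @ X[n]       # instantaneous error t_n - w(n)^T X_n
            w = w + mu * e * X[n]     # w(n+1) = w(n) + mu * e * X_n
    return w

# Toy usage: recover a 2-tap linear model from noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
w_true = np.array([0.8, -0.3])
t = X @ w_true + 0.01 * rng.standard_normal(500)
w_hat = lms(X, t, mu=0.05, n_epochs=5)
```

With a small step size and several passes over the data, `w_hat` settles near the true taps, up to the usual LMS misadjustment.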

#### II-2 The KLMS Algorithm

Invoking Eqn. 1 in a different spirit, we may write it as follows,

$$w(n+1) = w(n-1) + \delta(n) + \delta(n-1) \tag{3}$$

Ignoring initial conditions, further decomposition yields,

$$w(n+1) = \sum_{i=1}^{n} \delta(i) \tag{4}$$

Hence,

$$\langle w(n+1), X_n\rangle = \sum_{i=1}^{n} \langle \delta(i), X_n\rangle \tag{5}$$

This algorithm in [4] invokes the kernel trick. These algorithms are meant particularly for linearly non-separable datasets. If we take the inner product of the observation with the hypothesis w, we get the output $y(n+1)$. If this inner product is a kernel inner product [4], we get the following equation from Eqn. 2,

$$y(n+1) = \mu \sum_{i=0}^{n-1} e(i)\,\langle X_i, X_n\rangle_K \tag{6}$$

Here, we have the following expression for the error term,

$$e(i) = t_i - y(i), \quad i = 1, 2, \ldots, N. \tag{7}$$

Eqn. 6 can further be written in a recursive manner as follows,

$$y(n+1) = y(n) + \mu\, e(n)\,\langle X_{n-1}, X_n\rangle_K \tag{8}$$

Here the kernel inner product operator is defined as follows: given two matrices $A$ and $B$, the element in the $i$-th row and $j$-th column of the resultant matrix $C$ is,

$$C_{ij} = \exp\!\left(-\frac{\|A_i - B_j\|^2}{\sigma}\right) \tag{9}$$

where $\sigma$ is a free parameter. Its value can be found analytically or by cross validation; it is problem dependent, or more precisely, dependent on the spread of the data.

This makes $\delta(n)$ for this case,

$$\delta(n) = \mu\, e(n)\,\langle X_{n-1}, X_n\rangle_K \tag{10}$$
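
A minimal sketch of this KLMS recursion (Eqns. 6, 7 and 9) follows. The function names, the toy parameters `mu` and `sigma`, and the synthetic nonlinear target are illustrative assumptions, not prescriptions from our experiments:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Kernel inner product of Eqn. 9: exp(-||a - b||^2 / sigma)."""
    return np.exp(-np.linalg.norm(a - b) ** 2 / sigma)

def klms_train(X, t, mu=0.5, sigma=1.0):
    """KLMS (Eqns. 6-7): the predictor mu * sum_i e(i) <X_i, x>_K grows one term per sample."""
    centers, errors = [], []
    for n in range(len(X)):
        y = mu * sum(e * gaussian_kernel(c, X[n], sigma)
                     for c, e in zip(centers, errors))   # current prediction y(n)
        errors.append(t[n] - y)                          # a-priori error e(n), Eqn. 7
        centers.append(X[n])
    return centers, errors

def klms_predict(x, centers, errors, mu=0.5, sigma=1.0):
    return mu * sum(e * gaussian_kernel(c, x, sigma) for c, e in zip(centers, errors))

# Toy usage: learn a nonlinear map t = sin(2x) that a linear filter cannot fit.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
t = np.sin(2 * X[:, 0])
centers, errors = klms_train(X, t)
preds = np.array([klms_predict(x, centers, errors) for x in X])
```

Note the design implication of Eqn. 6: the predictor stores every past input as a kernel center, so its cost grows with the number of samples.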

#### II-3 The Backpropagation Algorithm [2]

In this case (thanks to the terminology, we had kept the meanings of Eqn. 1 abstract so that everything may fit in), w is a set of neuron weight matrices ordered by another index $j$. We consider the w cascaded in any combination; $j$ denotes the layer number, which goes from $1$ till $L$. Here $L$ is the final layer, without loss of generality.

Neural network training consists of two passes. In the first pass, the outputs are calculated. After that, the $\delta_j$ are estimated by a recursive algorithm called the back-propagation algorithm.

The technique of estimation of $\delta_j$ is given in [11]. We repeat the derivation given there for the ease of the reader.

• Forward Pass - Calculate the outputs for all neurons by sending the data through the network. Let the pre-activations be $a_j$ and the activations/outputs be $o_j$.

• Output Node - Update $\delta_L$ by the gradient of the cost function with respect to the output weights.

• Backpropagation - $\delta_j = (w_{j+1}^T \delta_{j+1}) \odot \phi'(a_j)$. Here $\odot$ is the Hadamard product and $\phi$ is the activation function.

• Concurrently update the weights $w_j$ using $\delta_j$.
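
The passes above can be sketched as follows. This is a textbook two-layer example with a tanh activation; the network size, learning rate, bias-free layers and XOR-style data are illustrative assumptions of ours, not the configuration of [11] or of our experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

phi = np.tanh
def dphi(a):
    return 1.0 - np.tanh(a) ** 2

# XOR-style data and a 2-8-1 network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])
W1 = rng.standard_normal((2, 8))
W2 = rng.standard_normal((8, 1))
mu = 0.1

def forward(X):
    a1 = X @ W1; o1 = phi(a1)     # hidden pre-activations and outputs
    a2 = o1 @ W2; o2 = phi(a2)    # output layer
    return a1, o1, a2, o2

mse_before = np.mean((forward(X)[3] - t) ** 2)

for _ in range(2000):
    a1, o1, a2, o2 = forward(X)   # forward pass
    d2 = (o2 - t) * dphi(a2)      # output-node delta
    d1 = (d2 @ W2.T) * dphi(a1)   # backpropagated delta (Hadamard with phi')
    W2 -= mu * o1.T @ d2          # concurrent weight updates
    W1 -= mu * X.T @ d1

mse_after = np.mean((forward(X)[3] - t) ** 2)
```

Each loop iteration performs exactly the four bullet steps: a forward pass, the output-node delta, the backpropagated hidden delta, and the concurrent weight updates.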

#### II-4 LASSO and Dantzig Selector

LASSO [12] is a popular example of regularization, in which the cost function to be optimized is of the form,

$$J_{\text{lasso}} = \|t - Xw\|^2 + \lambda\,\|w\|_1. \tag{11}$$

Here $\lambda$ is a regularization parameter, w is the parameter to be estimated, and t and X are the targets and the data values respectively. One of the elegant ways to implement this algorithm is by using interior point methods [13] and subgradients [1].

Similarly, the cost function for the Dantzig selector is given by,

$$J_{\text{Dantzig}} = \|t - Xw\|_\infty + \lambda\,\|w\|_1. \tag{12}$$

The directions of descent for these non-smooth cost functions are given by their subgradients [1] as follows,

$$\nabla J_{\text{lasso}} = (t_n - X_n^T w)\,X_n + \lambda\,\mathrm{sign}(w) \tag{13}$$

and,

$$\nabla J_{\text{Dantzig}} = (t_n^{\max} - (X_n^T w)^{\max})\,X_n^{\max} + \lambda\,\mathrm{sign}(w) \tag{14}$$

A noteworthy comment is as follows: as the subgradient of the $\ell_\infty$ norm of the error is the convex hull of the individual gradients, we may choose any one of them as a valid subgradient. But instead of choosing randomly, we may choose according to some criterion, such as the direction with maximum cost (which hence needs to be penalized), as indicated by the superscript in Eqn. 14.

Finally, we would iterate by,

$$w(n+1) = w(n) + \nabla J_{\text{lasso}} \tag{15}$$

or,

$$w(n+1) = w(n) + \nabla J_{\text{Dantzig}} \tag{16}$$
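
The stochastic subgradient iteration for the LASSO (Eqns. 13 and 15) may be sketched as below. The signs in the code are arranged so that each step descends the cost; the synthetic sparse regression data and the hyperparameters `lam` and `mu` are illustrative assumptions:

```python
import numpy as np

def stochastic_subgradient_lasso(X, t, lam=0.01, mu=0.01, n_epochs=20):
    """One step of Eqn. 15 per sample, using the subgradient of Eqn. 13."""
    n_samples, dim = X.shape
    w = np.zeros(dim)
    for _ in range(n_epochs):
        for n in range(n_samples):
            # Subgradient of J_lasso at w for sample n.
            g = -(t[n] - X[n] @ w) * X[n] + lam * np.sign(w)
            w = w - mu * g
    return w

# Toy usage: a sparse regression problem.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, 0.0, 0.0, -0.5, 0.0])
t = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = stochastic_subgradient_lasso(X, t)
```

For small `lam`, the estimate lands near the sparse ground truth, with the usual slight LASSO shrinkage on the nonzero coefficients.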

#### II-5 Recursive Least Squares (RLS)

The steps of the RLS recursion, for a given forgetting factor, are enumerated in [5]. There, $P(n)$ is an estimate of the inverse of the autocorrelation matrix at time $n$.

## III Proposed Algorithm

According to the contraction mapping theorem presented in [14], if $T$ is a contraction on a Banach space and we are dealing with a recursion of the form,

$$x_{n+1} = T(x_n) + u \tag{17}$$

then,

$$\|x - x^*\| \le \frac{\|T\|^m\,\|u\|}{1 - \|T\|} \tag{18}$$

where $x^*$ is the fixed point of the iteration and $m$ is the iteration number.

In our case, $x_n = w(n)$, and

$$T(w(n)) = w(n) + \delta \tag{19}$$

This is true as, with every iteration, the weights move closer and closer towards the equilibrium point(s) (which hopefully is unique depending on the algorithm, or all equilibrium points are almost equally preferable to us).

We assume infinitesimal adaptation noise, which is due to the step size, generally chosen to be small. This assumption is sometimes invoked to analyze iterative algorithms, as in [5]. With this assumption,

$$\lim_{\delta \to 0} \|T\| = \lim_{\delta \to 0} \|w(n) + \delta\| = \|w(n)\| \tag{20}$$

Hence the residual may be attributed to $u$. Hence, by the tight inequality $\|T\| \le \|w(n)\|_1$, we can assign $\|T\| = \|w\|_1$.

Hence, for the LMS and NN approaches, our result becomes,

$$\|w - w^*\|_1 \le \frac{\|w\|_1^m\,\|u\|_1}{1 - \|w\|_1} \tag{21}$$

For the KLMS approach, the result is,

$$\|y(n) - y(n)^*\|_1 \le \frac{|y(n)|^m\,\|u\|_1}{1 - |y(n)|} \tag{22}$$

We can observe the tendency of the deviation to increase as $\|w\|_1$ goes near $1$. Also, there is a tendency to diverge if $\|w\|_1 > 1$. Hence our proposal is to use a correcting factor multiplied to the error term.

This correcting factor has the unwanted $1 - \|w\|_1$ in its numerator and $\|w\|_1$ in its denominator. This gives a modified Widrow-Hoff paradigm as follows.

$$w(n+1) = w(n) + \left(1 + \frac{1 - \|w\|_1}{\|w\|_1}\right)\delta(n) \tag{23}$$

In Eqn. 23, $w(n)$ refers to the weights of the hyperplane at time instant $n$.
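
A sketch of the modified LMS of Eqn. 23 follows. The nonzero initialisation and the `eps` guard against division by zero are implementation choices of ours, not part of the derivation, and the toy data is illustrative:

```python
import numpy as np

def contraction_lms(X, t, mu=0.02, n_epochs=5, eps=1e-8):
    """LMS with the scaling of Eqn. 23: the update is multiplied by
    1 + (1 - ||w||_1)/||w||_1."""
    n_samples, dim = X.shape
    w = np.full(dim, 0.1)              # nonzero start keeps ||w||_1 > 0
    for _ in range(n_epochs):
        for n in range(n_samples):
            delta = mu * (t[n] - w @ X[n]) * X[n]     # parent LMS step, Eqn. 2
            l1 = np.linalg.norm(w, 1)
            scale = 1.0 + (1.0 - l1) / max(l1, eps)   # contraction-principle factor
            w = w + scale * delta
    return w

# Toy usage on the same kind of linear model as plain LMS.
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 2))
w_true = np.array([0.8, -0.3])
t = X @ w_true + 0.01 * rng.standard_normal(500)
w_hat = contraction_lms(X, t)
```

The factor is larger than one while $\|w\|_1 < 1$, accelerating early adaptation, and damps the step once $\|w\|_1$ exceeds one; fixed points of the parent recursion are unchanged, since the factor only rescales $\delta(n)$.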

Similarly, a modified Backpropagation algorithm is given by the following equation.

$$w^{(j)} = w^{(j)} + \left(1 + \frac{1 - \|w^{(j)}\|_1}{\|w^{(j)}\|_1}\right)\delta_j(n) \tag{24}$$

In Eqn. 24, $w^{(j)}$ denotes the weight matrix of the $j$-th layer, and the $\delta_j$ is found by the backpropagation algorithm in [11].

Also, a modified KLMS algorithm is given by,

$$y(n+1) = y(n) + \left(1 + \frac{1 - |y(n)|}{|y(n)|}\right)\mu\, e(n)\,\langle X_{n-1}, X_n\rangle_K \tag{25}$$

Contraction principle based LASSO and Dantzig selector problems may be handled by the following equations.

$$w(n+1) = w(n) + \left(1 + \frac{1 - \|w\|_1}{\|w\|_1}\right)\nabla J_{\text{lasso}} \tag{26}$$

$$w(n+1) = w(n) + \left(1 + \frac{1 - \|w\|_1}{\|w\|_1}\right)\nabla J_{\text{Dantzig}} \tag{27}$$

Here, $\nabla J$ in Eqns. 26 and 27 denotes a subgradient of those functionals.

Similarly, a modified RLS is obtained by applying the same correcting factor to the RLS update term.

### III-A Modeling the Problem in an Operator Theoretic Perspective

In this section, we give step by step details of our problem formulation based on the contraction principle [14], for the ease of the reader. Let us assume a dynamical system which evolves as follows,

$$w(n+1) = T(w(n)) + \delta \tag{28}$$

Let us assume that $w^*$ is an equilibrium point, so that $T(w^*) = w^*$. We subtract $w^*$ from both sides of the equation,

$$w(n+1) - w^* = T(w(n)) - w^* + \delta \tag{29}$$

or,

$$w(n+1) - w^* = T(w(n)) - T(w^*) + \delta \tag{30}$$

We define a new variable $\tilde{w}(n) = w(n) - w^*$. Hence Eqn. 30 becomes,

$$\tilde{w}(n+1) = T(\tilde{w}(n)) + \delta \tag{31}$$

Thus,

$$\tilde{w}(1) = T(\tilde{w}(0)) + \delta \tag{32}$$
$$\tilde{w}(2) = T^2(\tilde{w}(0)) + T\delta + \delta \tag{33}$$
$$\tilde{w}(3) = T^3(\tilde{w}(0)) + T^2\delta + T\delta + \delta \tag{34}$$

and so on, till,

$$\tilde{w}(n) = T^n(\tilde{w}(0)) + T^{n-1}\delta + \cdots + T\delta + \delta \tag{35}$$

Also,

$$\tilde{w}(n+k) = T^{n+k}(\tilde{w}(0)) + T^{n+k-1}\delta + \cdots + T\delta + \delta \tag{36}$$

Let us assume that, for large enough $n$, the algorithm reaches a fixed point, such that $T^n(\tilde{w}(0)) = T^{n+k}(\tilde{w}(0))$. Hence,

$$\|\tilde{w}(n) - \tilde{w}(n+k)\| = \|T^{n+k-1}\delta + \cdots + T^n\delta\| \tag{37}$$

This is upper bounded by,

$$\|\tilde{w}(n) - \tilde{w}(n+k)\| \le \frac{\|T\|^n\,\|\delta\|}{1 - \|T\|} \tag{38}$$

Then, by the definition of $\tilde{w}$,

$$\|w(n) - w(n+k)\| \le \frac{\|T\|^n\,\|\delta\|}{1 - \|T\|} \tag{39}$$

Assuming convergence at iteration $m$, we get Eqn. 18.
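
The geometric-series bound of Eqn. 38 can be checked numerically with a linear contraction $T(x) = Ax$; starting the recursion of Eqn. 31 from $\tilde{w}(0) = 0$ mirrors the assumption that the transient $T^n(\tilde{w}(0))$ has died out. The matrix, norm and indices below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear contraction T(x) = A x with operator norm ||T|| = 0.6,
# plus a constant perturbation delta, iterated as in Eqn. 31.
A = 0.6 * np.eye(3)
delta = 0.01 * rng.standard_normal(3)

traj = [np.zeros(3)]                     # w~(0) = 0: no transient term
for _ in range(60):
    traj.append(A @ traj[-1] + delta)    # w~(n+1) = T(w~(n)) + delta

T_norm = 0.6
n, k = 20, 30
lhs = np.linalg.norm(traj[n] - traj[n + k])
rhs = T_norm ** n * np.linalg.norm(delta) / (1 - T_norm)   # bound of Eqn. 38
```

Here `lhs` is the norm of the leftover geometric tail $T^{n+k-1}\delta + \cdots + T^n\delta$ of Eqn. 37, and it indeed stays below the closed-form bound `rhs`.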

### III-B Dependence on the Indicator Variable - Modification by Normalization with the Dual Norm of the Data

In a classification problem, we generally desire,

$$\langle w, x\rangle = \mp 1 \tag{40}$$

depending on the class to which the sample belongs. Here the inner product can be linear or kernel (in which case it is implicit). By Hölder's inequality,

$$\|\langle w, x\rangle\|_1 \le \|w\|_1\,\|x\|_\infty \tag{41}$$

This gives,

$$\sup_{\|x\|_\infty = 1} \frac{\|\langle w, x\rangle\|_1}{\|x\|_\infty} = \|w\|_1 \tag{42}$$

In the situation that $\|x\|_\infty \ne 1$, we must divide the value of $\|w\|_1$ by $\|x\|_\infty$ (to follow the definition of a norm given in [14]). In all previous sections, the derivations assumed x to lie in the unit circle.

Hence our norm for the operator $T$ is,

$$\|T\| = \frac{\|w\|_1}{\|x\|_\infty} \tag{43}$$

Equivalently, for the kernel case, our factor would be $\frac{|y(n)|}{\|x\|_\infty}$. Please note that we do not need inner products and Hölder's inequality to justify our normalization. Such arguments are valid only when w is a vector. However, when w is a matrix, we need the output of the operator to be bounded within the unit circle. Hence, the same normalization factor may be justified by the duality of the $\ell_1$ and $\ell_\infty$ norms.
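
Hölder's inequality (Eqn. 41) and the attainment of the supremum in Eqn. 42 are easy to verify numerically; the vectors below are arbitrary random draws:

```python
import numpy as np

rng = np.random.default_rng(5)
w = rng.standard_normal(5)

# Holder's inequality (Eqn. 41): |<w, x>| <= ||w||_1 * ||x||_inf.
ok = all(abs(w @ x) <= np.linalg.norm(w, 1) * np.linalg.norm(x, np.inf) + 1e-12
         for x in rng.standard_normal((1000, 5)))

# The supremum of Eqn. 42 is attained at x = sign(w), which has ||x||_inf = 1.
x_star = np.sign(w)
attained = np.isclose(w @ x_star, np.linalg.norm(w, 1))
```

The extremal point `x_star` makes the duality concrete: choosing each coordinate of x at the boundary of the $\ell_\infty$ unit ball, aligned with the sign of w, recovers exactly $\|w\|_1$.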

### III-C Relationship Between Step-Size and Upper Bound on the L1 Norm

Let us assume that the $\ell_1$ norm equals a small number $\epsilon$ at convergence. The step-size $\mu$ is inversely proportional to what is called the "time constant" [5] (a measure of the speed of convergence of the algorithm). Hence, if we want convergence in less than or equal to $N$ iterations with some desired MSE floor, we give the following (conflicting) design equations,

$$\frac{k_1\,\epsilon}{\mu\,(1-\epsilon)} \le N \tag{44}$$

$$\frac{k_2\,\mu\,(1-\epsilon)}{\epsilon} = \text{MSE} \tag{45}$$

Here $k_1$ and $k_2$ are constants given in [5].

### III-D Equivalence to LASSO

From Eqn. 21,

$$\|w - w^*\|_1 \le \frac{\|w\|_1^m\,\|u\|_1}{1 - \|w\|_1} \tag{46}$$

By the triangle inequality from linear algebra,

$$\|w - w^*\|_1 \le \|w\|_1 + \|{-w^*}\|_1 \le \frac{\|w\|_1^m\,\|u\|_1}{1 - \|w\|_1} \tag{47}$$

As $w^*$ is analytically independent of w, minimizing this upper bound also minimizes the upper bound on the $\ell_1$ norm of the weights. Hence our algorithm plays a major role in minimizing the upper bound (i.e., the L1 ball) in which the weights lie.

### III-E Useful Properties I - Concavity in w Within the Unit Circle

Let us assume two candidate weights $w_0$ and $w_1$.

From the convexity of any norm,

$$\lambda\|w_0\| + (1-\lambda)\|w_1\| \ge \|\lambda w_0 + (1-\lambda) w_1\| \tag{48}$$

Hence, as the norm is positive definite (it is semi-definite in general, but the weights of an iterative algorithm are seldom 0),

$$\frac{1}{\lambda\|w_0\| + (1-\lambda)\|w_1\|} \le \frac{1}{\|\lambda w_0 + (1-\lambda) w_1\|} \tag{49}$$

Subtracting 1 from both sides,

$$\frac{1}{\lambda\|w_0\| + (1-\lambda)\|w_1\|} - 1 \le \frac{1}{\|\lambda w_0 + (1-\lambda) w_1\|} - 1 \tag{50}$$

Please note that if the parent adaptation is convex and stable, each iteration is a contraction towards the optimal weight. Hence the norm of the (normalized) weights (see Section III-B) should always be less than or equal to unity. Thus the proposed factor is always positive in normal cases.
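
The positivity of the correcting factor inside the unit ball, and the inequality of Eqn. 49, can be checked numerically. As a side observation (ours, not claimed in the derivation), $1 + (1-\|w\|_1)/\|w\|_1$ simplifies algebraically to $1/\|w\|_1$, which makes the positivity immediate:

```python
import numpy as np

rng = np.random.default_rng(6)

def factor(w):
    """Correcting factor of Eqn. 23; algebraically equal to 1/||w||_1."""
    l1 = np.linalg.norm(w, 1)
    return 1.0 + (1.0 - l1) / l1

# Positivity: the factor equals 1/||w||_1 > 0 whenever ||w||_1 > 0,
# and exceeds 1 strictly inside the unit ball (||w||_1 < 1).
ws = rng.uniform(-0.5, 0.5, size=(1000, 3))
positive = all(factor(w) > 0 for w in ws)

# Inequality of Eqn. 49, implied by convexity of the norm.
w0, w1, lam = rng.standard_normal(3), rng.standard_normal(3), 0.3
mix = lam * np.linalg.norm(w0, 1) + (1 - lam) * np.linalg.norm(w1, 1)
holds = 1.0 / mix <= 1.0 / np.linalg.norm(lam * w0 + (1 - lam) * w1, 1)
```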

### III-F Conclusion From the Above Analysis

This shows that, in effect, we get a curve similar to the one shown in Figure 1. This does not come as a surprise; such curves have been reported in the literature [15]. However, the beauty of our approach is that the step size is adjusted in such a manner that the LASSO cost function is minimized, and hence the method has a nice "regularization" aspect to it. These properties also help us select the most optimal equilibrium point (in a least-L1-norm sense) when there are many.

## IV Results

### IV-A Experimental Setup

In Figure 2, a BPSK constellation was generated and passed through a channel with taps [-0.3, 0.8]. After that, Additive White Gaussian Noise (AWGN) was added. This data was input to the LMS and the modified LMS algorithms.

In Figure 3, the same BPSK constellation was passed through the same 2-tap channel, then through a non-linearity, after which noise was added.

In Figure 5, the 2-tap channel, its coefficients and the $\sigma$ used were the same as in [4], and the same non-linearity was applied.

In Figure 4, the 2-tap channel and the non-linearity were changed, just for the sake of variety: here we increase the inter-symbol interference and reduce the non-linearity.

### IV-B Observations

From Figure 2, we observe that the proposed algorithm converges faster, to the same testing MSE, than the original LMS algorithm. This is a manifestation of the superior performance of the modified error term.

From Figure 3, we again see a significantly faster convergence rate. Moreover, the test error reaches a lower floor for our proposed algorithm than for the original KLMS algorithm.

From Figure 4, we see that the training and testing cost function values are again much lower for our proposal, and that our proposal is more robust: the original neural network configuration moves from one local minimum of the testing cost function to another during testing, whereas with our scaling factor the cost maintains its value in a steady manner. We would expect its value to increase after some epochs, but up to 200 epochs we did not observe any increase, which is an example of its robustness.

From Figure 5, we see that the training and testing cost function values are lower for our proposal. We do see an increase in the testing MSE as a function of epochs in this case; however, the increase over 150 epochs is marginal, and the curve stays well below the original NN curve.

From Figure 7, we find that the contraction principle based stochastic subgradient LASSO converges faster than the original stochastic subgradient LASSO. We used synthetic data for this particular simulation, generating random matrices from a uniform distribution and fitting a regression between them.

From Figure 6, we find that our contraction principle based RLS variant outperforms the conventional RLS algorithm. A Binary Phase Shift Keying (BPSK) constellation is randomly generated and passed through an FIR filter; then white Gaussian noise at 20 dB SNR is added. With this dataset, we compared the conventional RLS and our RLS variant, with the LASSO objective as the benchmark.

It is also worth mentioning that the above curves are not instantaneous; they have been obtained by averaging over at least 25 Monte Carlo simulations.

## V Conclusion

A number of common iterative algorithms have been evaluated and compared with their newly proposed contraction principle based variants. In all scenarios, we obtain a performance boost after applying our modification to the parent algorithms. We also show a relation between our approach and the LASSO. These results, which show a lower testing error on all occasions, certify the superiority of our approach.