# Generalised Mathematical Formulations for Non-Linear Optimized Scheduling

In practice, most optimization problems are non-linear, requiring iterative solution approaches to model. In 5G-Advanced and beyond network slicing, mathematically modelling the users, the distribution of service types, and their adaptive SLAs is complex due to several dependencies. To facilitate this, in this paper we present novel non-linear mathematical formulations and results that form the base for achieving optimized scheduling.


## I Result 1

Result 1: Estimation, with graphical interpretations, of the Karush-Kuhn-Tucker (KKT) Lagrangian multipliers.

For a generalized non-linear optimization problem, the Karush-Kuhn-Tucker (KKT) conditions [1][2] are necessary conditions, subject to certain regularity assumptions. The method starts by formulating a Lagrangian as a function of the objectives and constraints, brought together through multipliers. The objective here is a quick formulation to analyze the KKT Lagrangian multipliers and their values. To estimate them, three cases are analyzed:

1. $P$ is a maximization problem with $C_y(z) \ge 0$.

2. $P$ is a minimization problem with $C_y(z) \le W_y$.

3. $P_1$/$P_2$ is a max/min problem with $C_y(z) \le W_y$ / $C_y(z) \ge 0$.

Case 1: Consider the following optimization problem (OP),

$$P \Rightarrow \begin{cases} \max O_x(z) \Rightarrow O_x(z) > 0\ \forall z, & x = 1,\dots,m \\ C_y(z) \ge 0 \Rightarrow C_y(z) > 0\ \forall z, & y = 1,\dots,n \end{cases} \tag{1}$$

where $x$ denotes the index of the objective functions, $y$ the index of the constraints, and $z$ is the optimization variable whose solution is to be learnt. Since $P$ is a maximization problem, the KKT Lagrangian is formulated with respect to $O_x(z)$ and $C_y(z)$ independently. Re-expressing $C_y(z) \ge 0$ as a typical constraint formulation, i.e., $-C_y(z) \le 0$,

$$L(z, \mu_y) = O_x(z) - \mu_y(-C_y(z)) \quad \forall x, y \tag{2}$$

where $\mu_y \ge 0$ is the multiplier that must satisfy the following regularity (stationarity) condition, $\nabla_z L(z, \mu_y) = 0$:

 ▽zOx(z)>0−μy▽z(−Cy(z)>0)=0 (3)
 μy=▽zOx(z)−▽zCy(z)▽zCy(z) (4)

Eqn. (4), i.e., $\mu_y < 0$, contradicts the definition and assumption $\mu_y \ge 0$. Hence, the only solution for which $\mu_y$ satisfies the condition is $\mu_y = 0$. Fig. 1 shows the graphical representation of the constraint space and the estimation of its gradients. In Fig. 1, it is observed that the gradients of $O_x(z)$ and $C_y(z)$ point in opposite directions. This is because, if the complete KKT Lagrangian function is formulated,

$$O_x(z) = -\sum_{y=1}^{n} C_y(z) \tag{5}$$

Eqn. (5) symbolizes that the objective function (or the gradient of the objective) points in the negative of the direction of the constraint functions (or their gradients). This means that $\nabla_z O_x(z)$ is not found within the cone formed by the active constraint function space, for which $\mu_y = 0$.
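The contradiction behind Case 1 can be checked numerically. In the sketch below, the concrete choices $O(z) = \log(2+z)$ and $C(z) = z + 1$ are illustrative assumptions (any positive, increasing pair would do): the stationarity formula (4) produces a negative multiplier everywhere, so the only KKT-admissible value is $\mu_y = 0$.

```python
import numpy as np

# Illustrative choices (assumptions): both O and C are positive with
# positive gradients, as required by the problem statement in Eqn. (1).
O = lambda z: np.log(2.0 + z)        # objective, O(z) > 0
dO = lambda z: 1.0 / (2.0 + z)       # gradient of O, > 0
C = lambda z: z + 1.0                # constraint function, C(z) > 0
dC = lambda z: np.ones_like(z)       # gradient of C, > 0

z = np.linspace(0.0, 10.0, 101)

# Eqn. (4): mu = grad O / (-grad C); with both gradients positive,
# this is negative everywhere, contradicting mu >= 0.
mu = dO(z) / (-dC(z))
print(mu.max() < 0)          # stationarity would require mu < 0 ...
mu_kkt = np.maximum(mu, 0)   # ... so the only admissible value is mu = 0
print(np.all(mu_kkt == 0))
```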

Case 2: Consider the following OP,

$$P \Rightarrow \begin{cases} \min O_x(z) \Rightarrow O_x(z) > 0\ \forall z, & x = 1,\dots,m \\ C_y(z) \le W_y, & W_y > 0 \end{cases} \tag{6}$$

Since $P$ is a minimization problem, formulating the KKT Lagrangian and estimating the regularity condition,

$$L(z, \mu_y) = O_x(z) + \mu_y(C_y(z) - W_y) \quad \forall x, y \tag{7}$$

where $W_y$ is the upper bound on the constraints.

 ▽zOx(z)>0+μy▽z(Cy(z)>0)=0 (8)
 μy=▽zOx(z)−▽zCy(z)▽zCy(z)=0 (9)

Fig. 2(a) shows the graphical interpretation of the constraint space, the objective function, and its gradients for different upper bounds $W_y$. In Fig. 2(a), $\nabla_z O_x(z)$ is likewise not found within the cone formed by the constraint function space, for which $\mu_y = 0$ holds true.
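A multiplier can also be read as a shadow price, $\mu_y = -\partial E^*/\partial W_y$ at the optimum. A minimal sketch of Case 2, assuming an illustrative increasing objective $O(z) = e^z$ on the feasible set $0 \le z \le W$ (these choices are not from the paper): loosening the upper bound leaves the constrained minimum unchanged, consistent with $\mu_y = 0$.

```python
import numpy as np

def constrained_min(W, n=20001):
    # minimize O(z) = exp(z) (positive, with positive gradient) over the
    # feasible set {0 <= z <= W}; a grid search keeps the sketch simple
    z = np.linspace(0.0, W, n)
    return np.exp(z).min()

# The optimum sits at z = 0, so the upper bound C(z) = z <= W is
# inactive: the optimal value does not move when W is loosened.
p2, p3 = constrained_min(2.0), constrained_min(3.0)
mu_estimate = -(p3 - p2) / (3.0 - 2.0)   # shadow-price estimate of mu
print(abs(mu_estimate) < 1e-12)          # consistent with mu_y = 0
```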

Case 3: Consider the following OP,

$$P_1 \Rightarrow \begin{cases} \max O_x(z) \Rightarrow O_x(z) > 0\ \forall z, & x = 1,\dots,m \\ C_y(z) \le W_y \Rightarrow C_y(z) > 0\ \forall z, & y = 1,\dots,n \end{cases} \tag{10}$$

Formulating the KKT Lagrangian and estimating its regularity condition,

 L(z,μy)=Ox(z)−maxOPμy(Cy(z)−Wy)∀x,y (11)
 ▽zOx(z)>0−μy▽z(Cy(z)>0)=0 (12)
 μy=▽zOx(z)▽zCy(z)>0 (13)
 P2⇒{minOx(z)⇒Ox(z)>0∀z,x=1,...,mCy(z)≥0⇒Cy(z)>0∀z,y=1,...,n (14)

Re-expressing the constraint as $-C_y(z) \le 0$. Formulating the KKT Lagrangian and estimating the regularity condition,

 L(z,μy)=Ox(z)+minOPμy(−Cy(z))∀x,y (15)
 ▽zOx(z)>0−μy▽z(Cy(z)>0)=0 (16)
 μy=▽zOx(z)▽zCy(z)>0 (17)

Figs. 2(b) and 3 show the graphical interpretation of the constraint space, the objective function, and its gradients for $P_1$ and $P_2$. If one formulates the complete KKT Lagrangian of $P_1$ or $P_2$,

$$O_x(z) = \sum_{y=1}^{n} C_y(z) \tag{18}$$

Eqn. (18) shows that the objective function is estimated as the summation of non-negative constraint functions. This symbolizes that the gradient of $O_x(z)$ will be found within the cone formed by the active constraints, as shown in Fig. 3, for which $\mu_y > 0$. In all the cases presented above, $\mu_y$ is estimated at the stationarity condition $\nabla_z L(z, \mu_y) = 0$. Table I highlights the KKT multiplier for other cases.
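The sign pattern across the (problem type, constraint direction) pairings established above can be sketched as a lookup (a restatement of the case analysis; the tuple keys are an illustrative encoding, not the paper's notation):

```python
# Sign of the KKT multiplier implied by stationarity when both
# grad O > 0 and grad C > 0, for each (problem, constraint) pairing.
# "0": stationarity would force mu < 0, so only mu = 0 is admissible;
# ">0": the objective gradient lies inside the active-constraint cone.
MULTIPLIER_SIGN = {
    ("max", "C >= 0"): "0",    # Case 1, Eqns. (1)-(5)
    ("min", "C <= W"): "0",    # Case 2, Eqns. (6)-(9)
    ("max", "C <= W"): ">0",   # Case 3, P1, Eqns. (10)-(13)
    ("min", "C >= 0"): ">0",   # Case 3, P2, Eqns. (14)-(17)
}

for (problem, constraint), sign in MULTIPLIER_SIGN.items():
    print(f"{problem} with {constraint}: mu {sign}")
```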

Mixed-constraint case: Consider the following OP,

$$P \Rightarrow \begin{cases} \max O_x(z) \Rightarrow O_x(z) > 0\ \forall z, & x = 1,\dots,m \\ C_y(z) \ge 0 & \text{for } y = 1 \\ C_y(z) \le W_y & \text{for } y = 2,\dots,n \end{cases} \tag{19}$$

Formulating the KKT Lagrangian with regularity condition,

$$y = 1 \Rightarrow O_x(z) + \mu_y C_y(z) = 0 \tag{20}$$
$$y = 2,\dots,n \Rightarrow O_x(z) - \sum_{y=2}^{n} \mu_y (C_y(z) - W_y) = 0 \tag{21}$$

Fig. 4 shows the graphical representation for the estimation of $\mu_y$. It is observed that the directions of the gradients of $O_x(z)$ and $C_y(z)$, $y = 2,\dots,n$, are the same, i.e., the objective function lies within the cones formed by the constraint space. This leads to $\mu_y > 0$ for $y = 2,\dots,n$. On the other hand, the directions of the gradients of $O_x(z)$ and $C_1(z)$ are opposite, for which $\mu_1 = 0$.
Similarly, for a minimization OP with the same mixed constraints:

$$y = 1 \Rightarrow O_x(z) + \mu_y(-C_y(z)) = 0 \tag{22}$$
$$y = 2,\dots,n \Rightarrow O_x(z) + \sum_{y=2}^{n} \mu_y (C_y(z) - W_y) = 0 \tag{23}$$

Fig. 5 shows the graphical estimation of the KKT Lagrangian multipliers. As seen, for $y = 2,\dots,n$, $\mu_y = 0$, i.e., the directions of the gradients of $O_x(z)$ and $C_y(z)$ are opposite. However, for $y = 1$, the gradients align, for which $\mu_1 > 0$.
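The mixed case can be checked on a one-dimensional sketch (the increasing objective $O(z) = e^z$ and the box constraints below are illustrative assumptions): maximization drives the optimum to the upper bound, so only the $\le$ constraint is active ($\mu_2 > 0$, $\mu_1 = 0$), while minimization drives it to the lower bound ($\mu_1 > 0$, $\mu_2 = 0$).

```python
import numpy as np

W = 4.0
z = np.linspace(0.0, W, 40001)   # feasible set: C1(z) = z >= 0, C2(z) = z <= W
O = np.exp(z)                    # increasing objective, O(z) > 0

z_max = z[np.argmax(O)]          # maximization: optimum at the upper bound
z_min = z[np.argmin(O)]          # minimization: optimum at the lower bound
print(z_max == W)    # C2 active -> mu_2 > 0; C1 inactive -> mu_1 = 0
print(z_min == 0.0)  # C1 active -> mu_1 > 0; C2 inactive -> mu_2 = 0
```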

## II Result 2

Result 2: Estimation of the utility convergence multiplier for the objective functions in the MOP.

An important problem in a multi-objective optimization problem (MOP) is defining a cost (or error) function by combining the optimization objectives using scalars. The aim is to minimize the trade-offs across the objective functions, i.e., minimize the cost (or error) function over a defined variable. Let $O_x(r_i)$, $x = 1,\dots,n$, represent non-negative non-linear objectives over a resource (variable) $r_i$. A cost function is defined as a linear combination of the objectives,

$$E(\beta, r_i) = \sum_{x=1}^{n-1} \beta_x O_x(r_i) + \underbrace{\left(1 - \sum_{x=1}^{n-1} \beta_x\right)}_{\beta_n} O_n(r_i), \quad 0 \le \beta_x \le 1 \tag{24}$$

where $\beta_x$ are the scalars across the objective functions. For a given $\beta_x$, $r_i^*$ minimizes the cost function $E$. We define,

$$E^*(\beta_x) = \min_{r_i} E(\beta_x, r_i) = E(\beta_x, r_i^*(\beta_x)) \tag{25}$$

where $r_i^*(\beta_x)$ is the resource variable estimated at the optimum. Eqn. (25) should be convex in $\beta_x$, irrespective of the convexity of the objective functions. Let $0 \le \alpha_x \le 1$. Then, for any $\beta_x$, from the concept of convexity,

$$E^*\!\left(\sum_{x=1}^{n-1} \alpha_x \beta_x + \Big(1 - \sum_{x=1}^{n-1} \alpha_x\Big)\Big(1 - \sum_{x=1}^{n-1} \beta_x\Big)\right) = \min_{r_i} E\!\left(\sum_{x=1}^{n-1} \alpha_x \beta_x + \Big(1 - \sum_{x=1}^{n-1} \alpha_x\Big)\Big(1 - \sum_{x=1}^{n-1} \beta_x\Big),\ r_i\right) \le \sum_{x=1}^{n-1} \alpha_x \underbrace{\min_{r_i} E\!\left(\sum_{x=1}^{n-1} \beta_x,\ r_i\right)}_{E^*(\beta_x)} + \Big(1 - \sum_{x=1}^{n-1} \alpha_x\Big) \underbrace{\min_{r_i} E\!\left(1 - \sum_{x=1}^{n-1} \beta_x,\ r_i\right)}_{E^*(\beta_n)} \tag{26}$$

It may be assumed, without loss of generality, that for a certain $\beta$ all the weight is placed on a single objective, such that $E(\beta, r_i) = \beta\, O_1(r_i)$. Then,

$$E^*(\beta) = \min_{r_i} \{\beta\,[O_1(r_i)]\} \tag{27}$$
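Eqns. (24)-(25) can be sketched for $n = 2$ objectives (the quadratic objectives and resource grid below are illustrative assumptions, not from the paper): for each weight $\beta$, the scalarized cost $\beta O_1(r) + (1-\beta) O_2(r)$ is minimized over the resource grid to obtain $E^*(\beta)$.

```python
import numpy as np

# Two illustrative non-negative objectives over a resource r (assumptions).
r = np.linspace(0.0, 4.0, 2001)
O1 = (r - 1.0) ** 2 + 0.5
O2 = (r - 3.0) ** 2 + 0.5

def E_star(beta):
    # Eqn. (24) with n = 2: E(beta, r) = beta*O1 + (1 - beta)*O2;
    # Eqn. (25): minimize over r for the fixed weight beta.
    return (beta * O1 + (1.0 - beta) * O2).min()

betas = np.linspace(0.0, 1.0, 101)
values = np.array([E_star(b) for b in betas])
# At the endpoints only one objective is weighted, so E* reduces to
# the minimum of that objective alone (here approximately 0.5 each).
print(abs(values[0] - 0.5) < 1e-9, abs(values[-1] - 0.5) < 1e-9)
```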

So far, $E$ has been minimized over $r_i$. If $E^*$ is further minimized over $\beta$, the cost function might become too low. One way to ensure that $E^*$ is not too low, such that $E$ stays within a certain range, is to now find the maximum of the cost function with respect to $\beta$ (since $r_i$ has already been used in the minimization operation). This symbolizes that the operations have been performed considering both extremes (min, max) to ensure that $E$ is within the range.

Let $\beta^*$ be the weight that now maximizes $E^*(\beta)$. Then,

$$\frac{\partial E^*(\beta)}{\partial \beta} = \frac{\partial E(\beta, r_i^*)}{\partial \beta} + \underbrace{\frac{\partial E(\beta, r_i)}{\partial r_i}\bigg|_{r_i = r_i^*}}_{=0} \frac{d r_i^*}{d\beta} \tag{28}$$

since $r_i^*$ minimizes the cost function,

$$\frac{\partial E(\beta, r_i^*(\beta))}{\partial \beta}\bigg|_{\beta = \beta^*} = 0 \;\Rightarrow\; \frac{\partial (\beta\,[O_1(r_i)])}{\partial \beta} = 0 \tag{29}$$

which gives $O_1(r_i^*) = 0$, or $O_1$ independent of $\beta$ for $r_i^*$ estimated at $\beta^*$. Since $O_x(r_i) > 0$ always, such that $O_1(r_i^*) \neq 0$, $O_1$ is independent of $\beta$. Hence, from Eqn. (27),

$$E^*(1) = \min O_1(r_i^*) \neq 0 \tag{30}$$

Though $E^*(\beta)$ might not seem to be a typical minimization (or trade-off) operation on the cost function (due to the single objective function), it has been proved mathematically that $E^*$ and $r_i^*$ are independent of $\beta$, as $\partial E^*(\beta)/\partial \beta = 0$.
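The min-max procedure of Result 2 can be sketched end-to-end (the symmetric quadratic objectives below are illustrative assumptions): the inner minimization over the resource gives $E^*(\beta)$, and the outer maximization over $\beta$ then selects $\beta^*$, so both extremes bracket the achievable cost.

```python
import numpy as np

r = np.linspace(0.0, 4.0, 2001)
O1 = (r - 1.0) ** 2 + 0.5        # illustrative objectives (assumptions)
O2 = (r - 3.0) ** 2 + 0.5

betas = np.linspace(0.0, 1.0, 101)
# inner minimization over the resource r for each weight beta (Eqn. 25)
E_star = np.array([(b * O1 + (1 - b) * O2).min() for b in betas])

# outer maximization over beta: both extremes (min over r, max over beta)
# have now been applied, bracketing the achievable cost.
beta_star = betas[np.argmax(E_star)]
print(abs(beta_star - 0.5) < 1e-9)  # symmetric objectives -> beta* = 0.5
print(E_star.max() > E_star[0])     # max over beta exceeds the endpoints
```

With these symmetric objectives, the inner minimum equals $4\beta(1-\beta)$ plus a constant, so the outer maximizer lands at $\beta^* = 0.5$; asymmetric objectives would shift it.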

## References

• [1] K. Miettinen (1999). Nonlinear Multiobjective Optimization. Springer. ISBN 978-0-7923-8278-2. Retrieved 29 May 2012.
• [2] Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. p. 244.