Generalised Mathematical Formulations for Non-Linear Optimized Scheduling

04/09/2022
by Sharvari Ravindran, et al.
IIIT Bangalore
I-MACX Studios

In practice, most optimization problems are non-linear and require iterative solutions and approaches to model. In 5G-Advanced and Beyond network slicing, mathematically modelling the users, the distribution of service types, and their adaptive SLAs is complex due to several dependencies. To facilitate this, we present in this paper novel non-linear mathematical formulations and results that form the basis for achieving optimized scheduling.



I Result 1

Result 1: Estimation, with graphical interpretations, of the Karush-Kuhn-Tucker (KKT) Lagrangian multipliers.

For a generalized non-linear optimization problem, the Karush-Kuhn-Tucker (KKT) conditions [1][2] are necessary criteria, conditioned on certain regularity assumptions. One starts by formulating a Lagrangian as a function of the objectives and constraints, brought together through multipliers. The objective here is a quick formulation to analyze the KKT Lagrangian multipliers and their values. To estimate this, three cases are analyzed (the standard KKT system is recalled for reference after the list):

  1. Case 1: a maximization problem.

  2. Case 2: a minimization problem.

  3. Case 3: a max/min problem.
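
Since the paper's explicit formulations appear only in the figures, it may help to recall the standard KKT system for a generic maximization OP. The notation below (f for the objective, g_i for the constraints, lambda_i for the multipliers) is a generic reference choice, not the paper's own symbols.

    \begin{aligned}
    &\max_{x}\ f(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1,\dots,m,\\
    &L(x,\lambda) = f(x) - \textstyle\sum_{i=1}^{m} \lambda_i\, g_i(x),\\
    &\nabla_x L(x^*,\lambda^*) = 0 \quad \text{(stationarity)},\\
    &g_i(x^*) \le 0, \qquad \lambda_i^* \ge 0 \quad \text{(primal/dual feasibility)},\\
    &\lambda_i^*\, g_i(x^*) = 0 \quad \text{(complementary slackness)}.
    \end{aligned}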

Case 1: Consider the following optimization problem (OP),

(1)

where the index runs over the objective function and the constraints, and the optimization variable is the quantity whose solution is to be learnt. Since Case 1 is a maximization problem, the KKT Lagrangian is formulated with respect to each variable independently. Re-expressing the constraint in the typical form,

(2)

where the multiplier is the one that satisfies the following regularity condition.

(3)
Fig. 1: Graphical interpretation of KKT Lagrangian multiplier estimation for maximization OPs
Fig. 2: Graphical interpretation of KKT Lagrangian multiplier estimation for minimization OP (objective), panels (a) and (b)
Fig. 3: Graphical interpretation of KKT Lagrangian multiplier estimation for maximization OP (objective)
TABLE I: KKT Lagrangian multipliers for special cases
Fig. 4: Graphical interpretation of KKT Lagrangian multiplier estimation for maximization OP (objective)
Fig. 5: Graphical interpretation of KKT Lagrangian multiplier estimation for minimization OP (objective)
(4)

Eqn. (4) contradicts the definition and assumption of the multiplier; hence, the only admissible solution is the one for which the multiplier satisfies the stated condition. Fig. 1 shows the graphical representation of the constraint space and the estimation of its gradients. In Fig. 1, it is observed that the gradients of the objective and of the constraint point in opposite directions. This is because, if the complete KKT Lagrangian function is formulated,

(5)

Eqn. (5) symbolizes that the objective function (or the gradient of the objective) points in the direction opposite to that of the constraint function (or its gradient). This means that the gradient of the objective is not found within the cone formed by the active constraint function space, which fixes the sign of the multiplier.
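
As a numerical illustration of this gradient geometry, the following sketch solves a small constrained maximization and recovers the multiplier from stationarity. The toy objective and constraint are my own choice, not the paper's.

    import numpy as np
    from scipy.optimize import minimize

    # Toy maximization OP (my own choice, not the paper's):
    #   maximize f(x) = -(x1-2)^2 - (x2-2)^2   s.t.   g(x) = x1 + x2 - 2 <= 0.
    f = lambda x: -(x[0] - 2.0) ** 2 - (x[1] - 2.0) ** 2
    grad_f = lambda x: np.array([-2.0 * (x[0] - 2.0), -2.0 * (x[1] - 2.0)])
    g = lambda x: x[0] + x[1] - 2.0
    grad_g = np.array([1.0, 1.0])

    # SLSQP minimizes, and takes inequality constraints as c(x) >= 0,
    # so we minimize -f subject to -g >= 0.
    res = minimize(lambda x: -f(x), x0=np.zeros(2),
                   constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])
    x_star = res.x                                      # ~ (1, 1)

    # Recover the multiplier from stationarity, grad f = lambda * grad g.
    lam = grad_f(x_star) @ grad_g / (grad_g @ grad_g)
    print("x* =", x_star, "lambda =", lam)              # lambda ~ 2 >= 0
    print("complementary slackness:", lam * g(x_star))  # ~ 0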

Case 2: Consider the following OP,

(6)

Since Case 2 is a minimization problem, formulating the KKT Lagrangian and estimating the regularity condition,

(7)

where the right-hand-side constant is the upper bound on the constraints.

(8)
(9)

Fig. 2(a) shows the graphical interpretation of the constraint space, the objective function, and their gradients for different upper bounds. In Fig. 2(a), the gradient of the objective is again not found within the cone formed by the constraint function space, for which the stated condition on the multiplier holds true.
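
To make the role of the upper bound concrete, the following sketch sweeps the bound of a one-dimensional minimization OP (a toy problem of my own choosing, not the paper's) and reports the multiplier, which is positive only while the bound is active and drops to zero once the unconstrained optimum becomes feasible.

    # Toy one-dimensional minimization OP (my own choice):
    #   minimize f(x) = (x - 3)^2   s.t.   x <= c,   for several bounds c.
    # With L = f + lambda * (x - c), stationarity gives lambda = -f'(x*),
    # and complementary slackness forces lambda = 0 when the bound is slack.
    def solve(c):
        x_star = min(3.0, c)               # unconstrained optimum is x = 3
        lam = 2.0 * (3.0 - x_star) if c < 3.0 else 0.0
        return x_star, lam

    for c in (1.0, 2.0, 3.0, 4.0):
        x_star, lam = solve(c)
        print(f"c = {c}: x* = {x_star}, lambda = {lam}")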

Case 3: Consider the following OP,

(10)

Formulating the KKT Lagrangian and estimating its regularity condition,

(11)
(12)
(13)
(14)

Re-expressing the constraint in an equivalent form, formulating the KKT Lagrangian, and estimating the regularity condition,

(15)
(16)
(17)

Figs. 2(b) and 3 show the graphical interpretation of the constraint space, the objective function, and their gradients for the two formulations. If one formulates the KKT Lagrangian of either formulation,

(18)

Eqn. (18) shows that the objective function is estimated as a summation of non-negative multiples of the constraint functions. This symbolizes that the gradient of the objective will be found within the cone formed by the active constraints, as shown in Fig. 3. In all the cases presented above, the multiplier is estimated together with its admissible sign. Table I highlights the KKT multipliers for other special cases.
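
The effect of re-expressing a constraint, as done for Case 3, can be checked on the toy problem from the earlier sketch: writing the same feasible set as g(x) <= 0 or as h(x) = -g(x) >= 0 leaves the optimum unchanged but flips the sign of the multiplier recovered from stationarity.

    import numpy as np

    # Same toy problem as in the earlier sketch; optimum x* = (1, 1).
    grad_f = np.array([2.0, 2.0])          # gradient of the objective at x*
    grad_g = np.array([1.0, 1.0])          # g(x) = x1 + x2 - 2,  g <= 0
    grad_h = -grad_g                       # h(x) = 2 - x1 - x2,  h >= 0

    lam = grad_f @ grad_g / (grad_g @ grad_g)  # +2: grad f in the cone of grad g
    mu = grad_f @ grad_h / (grad_h @ grad_h)   # -2: sign flips with the rewriting
    print("lambda =", lam, "mu =", mu)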

Case 4: Consider the following OP,

(19)

Formulating the KKT Lagrangian with regularity condition,

(20)
(21)

Fig. 4 shows the graphical representation of the multiplier estimation. It is observed that the gradients of the objective and of one constraint point in the same direction, i.e., the objective function lies within the cone formed by that constraint space, which determines the sign of the corresponding multiplier. On the other hand, the gradients of the objective and of the other constraint point in opposite directions, for which the corresponding multiplier takes the opposite sign.
Case 5: Similarly, consider the following minimization OP,

(22)
(23)

Fig. 5 shows the graphical estimation of the KKT Lagrangian multipliers. As seen, for one constraint the gradients of the objective and of the constraint point in opposite directions, while for the other they coincide, fixing the signs of the corresponding multipliers.
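
Cases 4 and 5 involve several constraints with multipliers of differing values. The following sketch (again a toy problem of my own choosing, not the paper's) shows the mixed situation numerically: the active constraint receives a non-zero multiplier while the inactive one is forced to zero by complementary slackness, so the objective gradient lies in the cone of the active gradient alone.

    import numpy as np

    # Toy two-constraint maximization OP (my own choice):
    #   maximize f(x) = -(x1-2)^2 - (x2-2)^2
    #   s.t. g1(x) = x1 + x2 - 2 <= 0  (active at the optimum),
    #        g2(x) = -x1 <= 0          (inactive at the optimum).
    x_star = np.array([1.0, 1.0])
    grad_f = np.array([2.0, 2.0])
    grad_g1 = np.array([1.0, 1.0])
    grad_g2 = np.array([-1.0, 0.0])

    lam1 = grad_f @ grad_g1 / (grad_g1 @ grad_g1)  # = 2 for the active constraint
    lam2 = 0.0                                     # complementary slackness: x1 > 0
    residual = grad_f - lam1 * grad_g1 - lam2 * grad_g2
    print("lam1 =", lam1, "lam2 =", lam2, "stationarity residual =", residual)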

II Result 2

Result 2: Estimation of the utility convergence multiplier for the objective functions in the multi-objective optimization problem (MOP).

An important problem in an MOP is defining a cost (or error) function by combining the optimization objectives through scalar weights. The aim is to minimize the trade-offs across the objective functions, i.e., to minimize the cost (or error) function over a defined variable. Let the objectives be non-negative non-linear functions of a resource (variable). A cost function is defined as a linear combination of the objectives,

(24)

where the weights are the scalars across the objective functions. For a given weight vector, there is a resource value which minimizes the cost function. We define,

(25)

where the resource variable is estimated at the optimum. Eqn. (25) is concave in the weights, being a pointwise minimum of functions linear in the weights, irrespective of the convexity of the objective functions. Then, for any convex combination of two weight vectors, from the definition of concavity,

(26)

It may be assumed, without loss of generality, that the minimum of the combined problem is attained at a certain resource value. Then,

(27)
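
The concavity claim can be checked numerically. In the sketch below, two toy non-negative non-linear objectives of my own choosing stand in for the paper's service-level objectives, and the concavity inequality from Eqns. (26)-(27) is verified for several convex combinations of two weight vectors.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy non-negative, non-linear objectives (my own choice).
    f1 = lambda x: (x - 1.0) ** 2
    f2 = lambda x: np.exp(x) + x ** 2

    def q(w):
        # q(w) = min_x  w1*f1(x) + w2*f2(x): the minimized weighted cost.
        cost = lambda x: w[0] * f1(x) + w[1] * f2(x)
        return minimize_scalar(cost, bounds=(-5.0, 5.0), method="bounded").fun

    w1, w2 = np.array([0.8, 0.2]), np.array([0.3, 0.7])
    for t in (0.25, 0.5, 0.75):
        lhs = q(t * w1 + (1 - t) * w2)            # q at the convex combination
        rhs = t * q(w1) + (1 - t) * q(w2)         # combination of the q values
        print(t, lhs >= rhs - 1e-9)               # True: q is concave in w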

So far, the cost function has been minimized over the resource variable. If it were further minimized over the weights as well, the cost function might become too low. One way to ensure that the cost is not too low, i.e., that it stays within a certain range, is to now find the maximum of the cost function with respect to the weights (since the resource variable has already been used in the minimization operation). This symbolizes that the operations have been performed considering both extremes (min, max), ensuring that the cost remains within the range.

Previously, the cost function has been minimized over the resource variable. Let the optimal weight be the one that now maximizes the minimized cost function. Then,

(28)

since the optimal resource variable minimizes the cost function,

(29)

which equals zero (or the corresponding objective value) for the optimum estimated at the maximizing weight. Since the weights are non-negative and constrained in their sum, the result is independent of the weights. Hence, from

(30)

Though this might not seem to be a typical minimization (or trade-off) operation on the cost function (as it reduces to a single objective function), it has been proved mathematically that the optimal cost and resource variable are independent of the weights.
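
A minimal sketch of the complete min-then-max operation described in this section, using the same toy objectives as in the previous sketch: the inner problem minimizes the weighted cost over the resource, and the outer problem maximizes the minimized cost over weights on the simplex (for two objectives, the simplex reduces to a single parameter t).

    import numpy as np
    from scipy.optimize import minimize_scalar

    f1 = lambda x: (x - 1.0) ** 2
    f2 = lambda x: np.exp(x) + x ** 2             # toy objectives (my choice)

    def q(w):
        # Inner problem: minimize the weighted cost over the resource x.
        cost = lambda x: w[0] * f1(x) + w[1] * f2(x)
        return minimize_scalar(cost, bounds=(-5.0, 5.0), method="bounded").fun

    # Outer problem: maximize q over weights w = (t, 1 - t) on the simplex;
    # q is concave in w, so a bounded scalar search suffices here.
    res = minimize_scalar(lambda t: -q(np.array([t, 1.0 - t])),
                          bounds=(0.0, 1.0), method="bounded")
    print("w* =", (res.x, 1.0 - res.x), "max-min cost =", -res.fun)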

References

  • [1] K. Miettinen, Nonlinear Multiobjective Optimization, Springer, 1999. ISBN 978-0-7923-8278-2.
  • [2] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004, p. 244.