On Orthogonal Projections on the Space of Consistent Pairwise Comparisons Matrices

02/16/2020 · by W. W. Koczkodaj et al. · AGH, Laurentian University

In this study, the orthogonalization process for different inner products is applied to pairwise comparisons. Properties of consistent approximations of a given inconsistent pairwise comparisons matrix are examined. A method of deriving a priority vector induced by a pairwise comparisons matrix for a given inner product is introduced. The mathematical elegance of orthogonalization and its universal use in most applied sciences have been the motivating factors for this study. However, the finding of this study, that the approximations depend on the assumed inner product, is of considerable importance.




1 Introduction

The growing number of various orthogonalization approaches in [1, 2, 3, 4] supports the importance of orthogonalization in various computer science applications. Pairwise comparisons allow us to combine assessments of many entities (especially of a subjective nature) into one value for use in the decision-making process. Pairwise comparisons have been used since the late 13th century, when Llull employed them for conducting a better election process (as stipulated in [5]). However, the informal beginnings of pairwise comparisons go back to the decision making of our ancestors in the Stone Age. Two stones must have been compared to decide which of them was fit for the purpose. It could be for a hatchet, a gift, or a decoration.

Pairwise comparisons matrices can be transformed by a logarithmic mapping into a linear space, with the set of consistent matrices mapped onto its subspace. The structure of a Hilbert space is obtained by introducing an inner product; such a space is complete with respect to the norm corresponding to the inner product. In this setting, we may use orthogonal projections as a tool to produce a consistent approximation of a given pairwise comparisons matrix.

Structure of the paper

A gentle introduction to pairwise comparisons is provided in Section 2. Section 3 discusses the problem of approximating an inconsistent PC matrix by a consistent PC matrix using the Frobenius inner product on the space of matrices. Other inner products are discussed in Section 4. In Section 5, the dependence of an optimal priority vector on the choice of an inner product on the space of pairwise comparisons matrices is proved. The Conclusions are self-explanatory.

2 Pairwise comparisons matrices

In this section, we define a pairwise comparisons matrix (for short, PC matrix) and introduce some related notions. Pairwise comparisons are traditionally stored in a PC matrix. It is a square matrix $M = [m_{ij}]$ with real positive elements $m_{ij}$ for every $i, j$, where $m_{ij}$ represents a relative preference of an entity $E_i$ over $E_j$ as a ratio. An entity could be an object, an attribute of it, an abstract concept, or a stimulus. For most abstract entities, we do not have a well-established measure such as a meter or kilogram. “Software safety” or “environmental friendliness” are examples of such entities or attributes used in pairwise comparisons.

When we use a linguistic expression containing "how many times", we process ratios. Linguistic expressions such as "by how much" or "by how much percent" give us a relative difference. Ratios often express subjective preferences of two entities; however, this does not imply that they can be obtained only by division. In fact, equating the ratio with the division of two measured values is, for pairwise comparisons, in general unacceptable. It is only acceptable when applied to entities with existing units of measure (e.g., distance). However, when entities are subjective (e.g., reliability and robustness, commonly used in a software development process as product attributes), the division operation has no mathematical meaning, although we can still consider which of them is more (or less) important than the other for a given project. The symbol "/" is used in the context of "related to" (not the division of two numbers). Problems with some popular customizations of PCs have been addressed in [8]. We decided not to address them here.

A PC matrix $M$ is called reciprocal if $m_{ij} = 1/m_{ji}$ for every $i, j$. In such a case, $m_{ii} = 1$ for every $i$.

We can assume that the PC matrix has positive real entries and is reciprocal without the loss of generality, since a non-reciprocal PC matrix can be made reciprocal by the theory presented in [9]. The conversion is done by replacing $m_{ij}$ and $m_{ji}$ with the geometric means of $m_{ij}$ and $1/m_{ji}$, that is, $m_{ij} \mapsto \sqrt{m_{ij}/m_{ji}}$. The reciprocal value is $\sqrt{m_{ji}/m_{ij}}$.

Thus a PC matrix is the $n \times n$ matrix of the form:

$$M = \begin{bmatrix} 1 & m_{12} & \cdots & m_{1n} \\ 1/m_{12} & 1 & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 1/m_{1n} & 1/m_{2n} & \cdots & 1 \end{bmatrix}$$

Sometimes, we write $M = M_{n \times n}$ in order to indicate the size of a given PC matrix.

2.1 The Geometric Means Method

The main goal of using a pairwise comparisons matrix is to obtain the so-called priority vector. The coordinates of this vector correspond to the weights of the alternatives. If we know the priority vector, we can rank the alternatives from the best to the worst one.

In the Geometric Means Method (GMM), introduced in [10], the coordinates of the priority vector $w$ are calculated as the geometric means of the elements in the rows of the matrix:

$$w_i = \left( \prod_{j=1}^{n} m_{ij} \right)^{1/n}, \qquad i = 1, \ldots, n. \tag{1}$$

The above vector is the solution of the Logarithmic Least Squares Method.
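As a sketch, the GMM computation takes only a few lines of NumPy. The 3×3 matrix below is a hypothetical illustration, not one of the paper's examples:

```python
import numpy as np

def gmm_priority_vector(M):
    """Priority vector by the Geometric Means Method (GMM):
    the i-th weight is the geometric mean of the i-th row."""
    M = np.asarray(M, dtype=float)
    return np.prod(M, axis=1) ** (1.0 / M.shape[0])

# Hypothetical reciprocal 3x3 PC matrix (for illustration only).
M = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])

w = gmm_priority_vector(M)
print(w)  # [2.  1.  0.5] -- the first alternative ranks highest
```

Sorting the alternatives by the coordinates of `w` gives the ranking from best to worst.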

2.2 Triads, transitivity, and submatrices of a PC matrix

One of the fundamental problems in pairwise comparisons is inconsistency. It takes place when we provide, for any reason, all (hence supernumerary) comparisons of $n$ entities, which is $n(n-1)$ comparisons, or $n(n-1)/2$ if the reciprocity is assumed and used to reduce the number of entered comparisons. The sufficient number of comparisons is $n-1$, as stipulated in [11], but this number is based on some arbitrary selection criteria for the minimal set of entities to compare. In practice, we have a tendency to make all comparisons (when reciprocity is assumed, which is expressed by the property $m_{ij} = 1/m_{ji}$, itself not always without problems). Surprisingly, the reciprocity may fail to hold even when both comparisons are provided. For example, blind wine tasting may result in claiming that wine $A$ is better than wine $B$ and that $B$ is better than $A$, or even that $A$ is better than $A$, which is reflected on the main diagonal of a PC matrix $M$ expressing all pairwise comparisons in the form of a matrix.

The basic concept of inconsistency may be illustrated as follows. If an alternative $A$ is three times better than $B$, and $B$ is twice better than $C$, then $A$ should not be evaluated as five times better than $C$. Unfortunately, this does not imply that the $A$-to-$C$ assessment should hence be 6, as common sense may dictate, since all three assessments (3, 5, and 2) may be inaccurate and we do not know which of them is incorrect. Inconsistency is sometimes mistaken for the approximation error, but this is incorrect. For example, a triad can be approximated by a consistent triad with 0 inconsistency, yet such an approximation may be far from optimal by any standard. So, the inconsistency can be 0 while the approximation error is different from 0 and of arbitrarily large value.
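The arithmetic above can be checked directly. One well-known triad inconsistency measure, shown here purely as an illustration (a Koczkodaj-style index; not necessarily the index used later in this paper), is $\min(|1 - y/(xz)|, |1 - xz/y|)$ for a triad $(x, y, z)$:

```python
def triad_inconsistency(x, y, z):
    """Koczkodaj-style inconsistency of a triad (x, y, z),
    where consistency requires y == x * z."""
    return min(abs(1 - y / (x * z)), abs(1 - (x * z) / y))

# The triad from the text: A/B = 3, A/C = 5, B/C = 2.
# Consistency would require A/C = 3 * 2 = 6, not 5.
ii = triad_inconsistency(3, 5, 2)
print(ii)  # 0.1666... = 1/6
```

A fully consistent triad, e.g. `triad_inconsistency(3, 6, 2)`, yields 0.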

2.3 Multiplicative variant of pairwise comparisons

Definition 2.1.

Given $n \geq 3$, we define

$$T_n = \{ (i, j, k) : 1 \leq i < j < k \leq n \}$$

as the set of all PC matrix indexes of all permissible triads in the upper triangle.

Definition 2.2.

A PC matrix $M$ is called consistent (or transitive) if, for every $(i, j, k) \in T_n$,

$$m_{ik} = m_{ij} \cdot m_{jk}. \tag{2}$$
Equation (2) was proposed a long time ago (in the 1930s) and it is known as the "consistency condition". Every consistent PC matrix is reciprocal; however, the converse is false in general. If the consistency condition does not hold, the PC matrix is inconsistent (or intransitive). In several studies conducted between 1940 and 1961 ([12, 13, 14, 15]), the inconsistency in pairwise comparisons was defined and examined.

Inconsistency in pairwise comparisons occurs due to superfluous input data. As demonstrated in [11], only $n-1$ pairwise comparisons are really needed to create the entire PC matrix for $n$ entities, while the upper triangle has $n(n-1)/2$ comparisons. Inconsistencies are not necessarily "wrong", as they can be used to improve the data acquisition. However, there is a real necessity to have a "measure" for it.
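Condition (2) is straightforward to verify programmatically over all triads of the upper triangle; a minimal sketch:

```python
import numpy as np
from itertools import combinations

def is_consistent(M, tol=1e-9):
    """Check the consistency condition m_ik == m_ij * m_jk
    for every triad i < j < k of the PC matrix M."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    return all(abs(M[i, k] - M[i, j] * M[j, k]) <= tol
               for i, j, k in combinations(range(n), 3))

consistent   = np.array([[1.0,  2.0, 4.0], [1/2, 1.0, 2.0], [1/4, 1/2, 1.0]])
inconsistent = np.array([[1.0,  3.0, 5.0], [1/3, 1.0, 2.0], [1/5, 1/2, 1.0]])
print(is_consistent(consistent), is_consistent(inconsistent))  # True False
```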

Lemma 2.3.

If a matrix $M$ is consistent, then

$$m_{ij} = \frac{w_i}{w_j},$$

where $w_i = \left( \prod_{k=1}^{n} m_{ik} \right)^{1/n}$ for every $i$.

Proof. By the definition of $w$ and the consistency of $M$, one gets

$$\frac{w_i}{w_j} = \left( \prod_{k=1}^{n} \frac{m_{ik}}{m_{jk}} \right)^{1/n} = \left( \prod_{k=1}^{n} m_{ik} m_{kj} \right)^{1/n} = \left( \prod_{k=1}^{n} m_{ij} \right)^{1/n} = m_{ij}. \; ∎$$

It is easy to observe that the set of all consistent matrices is a multiplicative subgroup of the group of all $n \times n$ matrices with positive entries, endowed with the coordinate-wise multiplication $(M \cdot N)_{ij} = m_{ij} n_{ij}$, where $M = [m_{ij}]$ and $N = [n_{ij}]$. Its representation in $\mathbb{R}_{+}^{n}$ consists of all priority vectors defined uniquely, as in Lemma 2.3, up to a multiplicative constant. In the following, we use priority vectors normalized by the condition $w_1 w_2 \cdots w_n = 1$, unless otherwise stated.

2.4 Additive variant of pairwise comparisons

Instead of a PC matrix $M$ with entries from the set of positive real numbers considered with multiplication, we can transform the entries of $M$ by a logarithmic function and get a matrix $A = \ln M$. Since the matrix $M$ is reciprocal, it follows that $A$ is anti-symmetric, i.e.

$$a_{ij} = -a_{ji}.$$

Moreover, if $M$ is consistent, then $A$ satisfies the condition of additive consistency:

$$a_{ik} = a_{ij} + a_{jk},$$

which yields the following well-known representation.

Lemma 2.4.

If an anti-symmetric matrix $A$ is additively consistent, then

$$a_{ij} = v_i - v_j,$$

where $c$ is arbitrary and $v_i = c + \frac{1}{n} \sum_{k=1}^{n} a_{ik}$ for every $i$.

In view of this representation, the set of all additively consistent matrices is an additive subgroup of the group of all anti-symmetric $n \times n$ matrices, whenever it is endowed with the coordinate-wise matrix addition $(A + B)_{ij} = a_{ij} + b_{ij}$. It is a one-to-one image of the multiplicative group of consistent matrices by the group isomorphism $M \mapsto \ln M$. The inverse group isomorphism is clearly given by the formula $A \mapsto \exp A$. Moreover, the additive priority vector $v$ of $A$ satisfies $a_{ij} = v_i - v_j$, where $c$ in Lemma 2.4 is an arbitrary additive constant. In particular, it is said to be normalized if $\sum_{i=1}^{n} v_i = 0$. Here and in the following, the matrix functions $\ln$ and $\exp$ are always understood in the coordinate-wise sense.
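The isomorphism between the multiplicative and additive variants is just coordinate-wise log and exp; a quick numerical check on a hypothetical reciprocal matrix:

```python
import numpy as np

# Coordinate-wise log maps a reciprocal (multiplicative) PC matrix to
# an anti-symmetric (additive) one; coordinate-wise exp is the inverse.
M = np.array([[1.0,   2.0,  8.0],
              [0.5,   1.0,  4.0],
              [0.125, 0.25, 1.0]])  # hypothetical reciprocal PC matrix

A = np.log(M)
print(np.allclose(A, -A.T))       # True: A is anti-symmetric
print(np.allclose(np.exp(A), M))  # True: exp inverts log coordinate-wise
```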

3 Approximation by projections

Numerous heuristics have been proposed for approximating inconsistent pairwise comparisons matrices by consistent pairwise comparisons matrices. The geometric means (GM) of rows is regarded as the dominant method. Some mathematical evidence to support GM as the method of choice was also provided in [16]. [17] shows that orthogonal projections have a limit which is GM (up to a constant). [18] demonstrates that the inconsistency reduction algorithm based on orthogonal projections converges very quickly for practical applications. The proof of inconsistency convergence was outlined in [6] and finalized in [7]. Axiomatization of inconsistency still remains elusive. Its recent mutation in [22] has a deficiency (the monotonicity axiom is incorrectly defined).

3.1 Space of consistent matrices

Let $\mathbb{K} = \mathbb{R}$. Let $\mathcal{M}_n$ be the set of all $n \times n$ matrices with entries from the field $\mathbb{K}$, and let $\mathcal{C}_n$ be the set of all additively consistent $n \times n$ matrices with entries from the field. We consider $\mathcal{M}_n$ as a linear space with addition of matrices and multiplication by numbers from the field; clearly $\dim \mathcal{M}_n = n^2$, and the unit matrices

$$E^{(kl)}, \qquad k, l = 1, \ldots, n,$$

form a basis in $\mathcal{M}_n$, where $E^{(kl)}_{ij}$ is equal to 1 if $i = k$ and $j = l$, and otherwise 0.

In the linear space $\mathcal{M}_n$, one can define the Frobenius inner product as follows. For all $A, B \in \mathcal{M}_n$:

$$\langle A, B \rangle_F = \sum_{i,j=1}^{n} a_{ij} b_{ij}.$$

In this section, we recall results from [19].

Theorem 3.1.

The set $\mathcal{C}_n$ is a linear subspace of $\mathcal{M}_n$.

Proof. Let $A, B \in \mathcal{C}_n$, that is,

$$a_{ik} = a_{ij} + a_{jk} \quad \text{and} \quad b_{ik} = b_{ij} + b_{jk}.$$

Let $\alpha, \beta \in \mathbb{K}$; then

$$(\alpha A + \beta B)_{ik} = \alpha (a_{ij} + a_{jk}) + \beta (b_{ij} + b_{jk}) = (\alpha A + \beta B)_{ij} + (\alpha A + \beta B)_{jk}.$$

It is clear that $\alpha A + \beta B \in \mathcal{C}_n$. ∎

Theorem 3.2.

The subspace $\mathcal{C}_n$ has dimension $n - 1$ over $\mathbb{K}$.

Proof. By applying the consistency condition, all elements of the matrix can be generated by the elements $a_{i,i+1}$ for $i = 1, \ldots, n-1$, i.e. by the second diagonal, that is, the diagonal directly above the main diagonal (see [11]). ∎
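The generating argument of the proof can be made concrete in the additive setting, where the whole matrix is recovered from the superdiagonal by partial sums; a sketch (the 4×4 data is hypothetical):

```python
import numpy as np

def additive_consistent_from_superdiagonal(d):
    """Build the additively consistent matrix generated by superdiagonal
    entries a_{i,i+1} = d_i: for i < j, a_ij = d_i + ... + d_{j-1}."""
    s = np.concatenate(([0.0], np.cumsum(d)))  # s_i = d_0 + ... + d_{i-1}
    # a_ij = s_j - s_i is anti-symmetric and additively consistent,
    # since (s_j - s_i) + (s_k - s_j) = s_k - s_i.
    return s[None, :] - s[:, None]

d = [1.0, 2.0, -0.5]  # hypothetical superdiagonal of a 4x4 additive PC matrix
A = additive_consistent_from_superdiagonal(d)
print(A[0, 1], A[1, 2], A[2, 3])  # 1.0 2.0 -0.5  (superdiagonal reproduced)
print(A[0, 3])                    # 2.5 = 1.0 + 2.0 - 0.5
```

The $n-1$ superdiagonal entries are free parameters, which matches $\dim \mathcal{C}_n = n - 1$.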

Theorem 3.3 ([19, Proposition 1]).

The following set of matrices constitutes a basis of $\mathcal{C}_n$.



For the standard (i.e. Frobenius) inner product, an example of the approximation of an inconsistent matrix as a projection onto $\mathcal{C}_n$ is given in [19].

3.2 Approximation by a consistent matrix

Suppose that we have an inconsistent matrix $M$. Our aim is to find a consistent metric projection of $M$ onto the set of consistent matrices with respect to the norm induced by an inner product, i.e., a nonlinear mapping under which the distance of $M$ to the set of consistent matrices is attained by the projected matrix.

In the additive case, the metric projection coincides with the orthogonal projection of $A = \ln M$ onto the linear subspace $\mathcal{C}_n$, which is characterized by the well-known orthogonality condition

$$\langle A - P(A), C \rangle = 0 \quad \text{for every } C \in \mathcal{C}_n.$$

This condition enables us to compute the orthogonal projection much more effectively than its nonlinear multiplicative counterpart. Therefore, it was proposed in [10, 17] to linearize the process of determining metric projections for practical applications. This was achieved by introducing the concept of linearized consistent approximations to estimate nonlinear metric projections. For simplicity, in the following, the same symbol will also be used to denote these linearized consistent approximations. This should not lead to misunderstanding, since we shall always restrict our attention to the linearized case, unless otherwise stated.

Definition 3.4.

Let $M$ be an inconsistent matrix.

A consistent approximation of $M$ is defined in the following way:

  1. we construct the matrix $A = \ln M$;

  2. we find the orthogonal projection $P(A)$ of $A$ onto the $(n-1)$-dimensional subspace $\mathcal{C}_n$;

  3. we set $\hat{M} = \exp(P(A))$.

In short, we define $\hat{M} = \exp(P(\ln M))$.

3.3 Orthogonalization

In order to simplify calculations in the examples below, we would like to have an orthogonal basis for $\mathcal{C}_n$.

We produce such a basis by the Gram-Schmidt process. Namely, let $V$ be an $m$-dimensional vector space over $\mathbb{K}$ with an inner product $\langle \cdot, \cdot \rangle$, and let $\{v_1, \ldots, v_m\}$ be its basis. We construct an orthogonal basis $\{u_1, \ldots, u_m\}$ as follows:

$$u_1 = v_1, \qquad u_k = v_k - \sum_{j=1}^{k-1} \frac{\langle v_k, u_j \rangle}{\langle u_j, u_j \rangle} u_j, \quad k = 2, \ldots, m. \tag{3}$$
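The Gram-Schmidt process depends only on the chosen inner product, so it can be implemented once and reused with the Frobenius or any other product; a minimal sketch (the two 2×2 basis matrices are hypothetical):

```python
import numpy as np

def gram_schmidt(basis, inner):
    """Gram-Schmidt process for an arbitrary inner product:
    u_k = v_k - sum_{j<k} <v_k, u_j>/<u_j, u_j> * u_j."""
    ortho = []
    for v in basis:
        u = np.array(v, dtype=float)
        for w in ortho:
            u -= (inner(v, w) / inner(w, w)) * w
        ortho.append(u)
    return ortho

frobenius = lambda X, Y: np.sum(X * Y)  # Frobenius inner product

v1 = np.array([[1.0, 1.0], [0.0, 1.0]])  # hypothetical basis matrices
v2 = np.array([[1.0, 0.0], [1.0, 0.0]])
u1, u2 = gram_schmidt([v1, v2], frobenius)
print(frobenius(u1, u2))  # 0.0 -- the new basis is orthogonal
```

Passing a different `inner` function yields a different orthogonal basis, which is exactly the point of Section 4.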

Example 3.5.

Consider an inconsistent PC matrix in the multiplicative variant:


Its priority vector obtained by (1) is


Taking natural logarithms, we switch to the additive PC matrix variant and get the following additive PC matrix:

We need to find $P(A)$, the projection of $A$ onto $\mathcal{C}_3$. By Theorem 3.2, we have that $\dim \mathcal{C}_3 = 2$. By Theorem 3.3, we get a basis of the linear space of consistent matrices.

Evidently, this basis is not orthogonal. Therefore, we have to apply the Gram-Schmidt process of orthogonalization (3). If $\{C_1, C_2\}$ denotes the resulting orthogonal basis of $\mathcal{C}_3$, then

Our goal is to find $P(A)$, that is, to find coefficients $\alpha_1$ and $\alpha_2$ such that $A - (\alpha_1 C_1 + \alpha_2 C_2)$ is orthogonal to $C_j$ for every $j$, which is equivalent to solving:

Since $C_1$ and $C_2$ are orthogonal, we get a system of linear equations:

By computing the Frobenius inner products, we get the following equations:

By solving the above equations for $\alpha_1$ and $\alpha_2$, we get their values. Thus,

Finally, we get a consistent approximation for

Notice that the priority vector of $\hat{M}$ coincides with the vector given by (5).
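Since the matrices of this example are not reproduced here, the sketch below uses a hypothetical inconsistent 3×3 matrix. Under the Frobenius inner product, the orthogonal projection of $\ln M$ onto the consistent subspace reduces to row means, so the consistent approximation has entries $w_i / w_j$, where $w$ is the GMM priority vector:

```python
import numpy as np

def consistent_approximation_frobenius(M):
    """exp(P(ln M)): under the Frobenius inner product, the projection
    of ln M onto the additively consistent subspace is given by the
    row means v_i of ln M, so hat_m_ij = exp(v_i - v_j)."""
    A = np.log(np.asarray(M, dtype=float))
    v = A.mean(axis=1)
    return np.exp(v[:, None] - v[None, :])

# Hypothetical inconsistent reciprocal PC matrix (not the paper's example).
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

M_hat = consistent_approximation_frobenius(M)
w = np.prod(M, axis=1) ** (1.0 / 3.0)        # GMM priority vector of M
print(np.allclose(M_hat, np.outer(w, 1 / w)))  # True: hat_m_ij = w_i / w_j
```

This numerically reproduces the observation that, for the Frobenius product, the priority vector of the approximation coincides with the GMM vector of the original matrix.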

4 Other inner products on $\mathcal{M}_n$

The standard (Frobenius) inner product on the linear space $\mathcal{M}_n$ is defined by:

$$\langle A, B \rangle_F = \operatorname{tr}(A B^T) = \sum_{i,j=1}^{n} a_{ij} b_{ij}.$$

The above inner product is exactly the Frobenius inner product defined in the previous section, and it defines the Frobenius norm in the usual way by:

$$\| A \|_F = \sqrt{\langle A, A \rangle_F}.$$

In [20] the following result is mentioned:

Proposition 4.1.

For every $n$ and positive semi-definite matrices, the following function:


defines an inner product in $\mathcal{M}_n$.


All properties of an inner product follow from the following equation:

Example 4.2.

Consider the following four matrices in the space

By applying Sylvester’s criterion [21], it is easy to see that they are positive semi-definite. Evidently, they are symmetric, hence Hermitian.



By Proposition 4.1, the function defined by these matrices is an inner product in $\mathcal{M}_n$.
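The exact function of Proposition 4.1 and the four matrices of Example 4.2 are not reproduced here. The sketch below therefore assumes one standard weighted form, $\langle X, Y \rangle = \operatorname{tr}(P X^T Q Y)$ with symmetric positive definite $P$ and $Q$, and checks the inner product axioms numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Assumed weighted form (a standard construction; not necessarily the
# exact one of Proposition 4.1): <X, Y> = tr(P X^T Q Y),
# with P and Q symmetric positive definite.
L1 = rng.standard_normal((n, n))
L2 = rng.standard_normal((n, n))
P = L1 @ L1.T + n * np.eye(n)  # symmetric positive definite
Q = L2 @ L2.T + n * np.eye(n)

inner = lambda X, Y: np.trace(P @ X.T @ Q @ Y)

X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
print(np.isclose(inner(X, Y), inner(Y, X)))  # symmetry
print(inner(X, X) > 0)                       # positive definiteness
```

Bilinearity holds because `inner` is linear in each argument by the linearity of the trace; symmetry and positivity follow from the symmetry and positive definiteness of $P$ and $Q$.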

Example 4.3.

Consider matrices with real entries computed by the formula in Theorem 3.3 (see Example 3.5 for details). Evidently, they form a basis for $\mathcal{C}_3$. By applying the Gram-Schmidt process (3) with the inner product from Example 4.2 to this basis, we get an orthogonal basis for $\mathcal{C}_3$.

The above transformations imply that . Since

we have

By equations (3), we get

Example 4.4.

Take the following additive PC matrix:

This is the PC matrix from Example 3.5. Next, we compute the orthogonal projection (with respect to the inner product from Example 4.2) onto the space $\mathcal{C}_3$. To this end, we need to solve a system of linear equations for the coefficients:


We get

We can also utilize some computations conducted in the previous example; by using the symmetry of the inner product, equation (8) becomes:

Consequently, we get:

Finally, we obtain the following multiplicative PC matrix:

Example 4.5.

Let us repeat the calculations made in Examples 4.2, 4.3, and 4.4 to provide a consistent approximation of the matrix given in (4) by means of the inner product induced by the matrices:

We obtain


By equations (3), we get


We calculate the inner products:

By solving the equations

we get the coefficients and, therefore,


and its priority vector calculated with the use of GMM is equal to

5 Approximation selection

It is worthwhile to stress that in the previous examples we obtained three approximations of the same matrix $M$. An important dilemma has surfaced: how can we compare different approximations of a given PC matrix obtained by the use of different inner products? The answer to this question is: they are incomparable.

5.1 Inconsistency

The first criterion that we took into consideration was to compare the inconsistency indices of the exponential transformations of the differences between the matrix and its approximations. However, this attempt turned out to be incorrect.

Let us consider the inconsistency index of a pairwise comparison matrix given by the formula:


This indicator satisfies all the desired axioms formulated in [22].

Theorem 5.1.

Let $A$ and $C$ be additive pairwise comparison matrices such that $C$ is additively consistent. Then the inconsistency indices of $A + C$ and $A$ are equal.

Proof. Take any triad $(i, j, k)$. Since $c_{ik} = c_{ij} + c_{jk}$, we get

$$(a_{ij} + c_{ij}) + (a_{jk} + c_{jk}) - (a_{ik} + c_{ik}) = a_{ij} + a_{jk} - a_{ik},$$

which completes the proof. ∎

From the above theorem it follows that if we take two different consistent approximations $C_1$ and $C_2$ of an additive matrix $A$, then the inconsistency indices of $A - C_1$ and $A - C_2$ are equal.
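Theorem 5.1 can be illustrated numerically. Since the index formula from the text is not reproduced here, the sketch below uses a simple stand-in index, the largest triad defect $|a_{ij} + a_{jk} - a_{ik}|$, which is invariant under adding an additively consistent matrix:

```python
import numpy as np
from itertools import combinations

def additive_index(A):
    """Illustrative inconsistency index for an additive PC matrix:
    the largest triad defect |a_ij + a_jk - a_ik| (a stand-in for
    the paper's index, which is not reproduced here)."""
    n = A.shape[0]
    return max(abs(A[i, j] + A[j, k] - A[i, k])
               for i, j, k in combinations(range(n), 3))

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
A = B - B.T                   # random anti-symmetric (additive PC) matrix
v = rng.standard_normal(n)
C = v[:, None] - v[None, :]   # additively consistent matrix: c_ij = v_i - v_j

print(np.isclose(additive_index(A + C), additive_index(A)))  # True
```

Each triad defect of $C$ is exactly zero, so adding $C$ leaves every triad defect of $A$, and hence the maximum, unchanged.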

5.2 Priority vectors for different inner products

The second attempt to judge whether a consistent approximation of a matrix $M$ is acceptable could be to compare the priority vectors induced by $M$ and its approximation for a given inner product. In [10], it has been proved that the elements of a projection matrix induced by the Frobenius product are given by the ratios $w_i / w_j$, where the vector $w$ is obtained by GMM. As has been shown in [17], the priority vectors induced by the matrix and its approximation coincide in this case:

Theorem 5.2.

Let $M$ be a PC matrix and $\hat{M} = \exp(P(\ln M))$ its consistent approximation with respect to the Frobenius inner product, i.e.

$$\hat{m}_{ij} = \frac{w_i}{w_j},$$

where $w$ is the GMM priority vector of $M$. Then the priority vectors of $M$ and $\hat{M}$ coincide.

As the following example shows, the priority vectors of a matrix and its consistent approximation may differ if we use other inner products.

Example 5.3.

Consider an inconsistent additive PC matrix from Example 3.5:

and its corresponding multiplicative PC matrix $M$. Let us take three inner products: the Frobenius product and the inner products from Examples 4.2 and 4.5. The corresponding approximations are given in Examples 3.5, 4.3, and 4.5, respectively.

Notice that the priority vector of the Frobenius approximation coincides with that of $M$, but the priority vectors obtained for the other two inner products are linearly independent of it. This observation, however, is not surprising. Each of these matrices minimizes the distance from $M$ to the set of consistent PC matrices according to its respective inner product, but not according to the Frobenius inner product.

In the following, we show that as we change the inner product, we also have to change the formula for a priority vector. This is done by extending Theorem 5.2 to weighted Frobenius inner products. For this purpose, we recall the most general standard definition of an inner product in $\mathcal{M}_n$.

Let be linearly independent matrices in the space . Represent matrices in a unique manner as