# Minimax-optimal Inference from Partial Rankings

This paper studies the problem of inferring a global preference based on the partial rankings provided by many users over different subsets of items according to the Plackett-Luce model. A question of particular interest is how to optimally assign items to users for ranking and how many item assignments are needed to achieve a target estimation error. For a given assignment of items to users, we first derive an oracle lower bound of the estimation error that holds even for the more general Thurstone models. Then we show that the Cramér-Rao lower bound and our upper bounds inversely depend on the spectral gap of the Laplacian of an appropriately defined comparison graph. When the system is allowed to choose the item assignment, we propose a random assignment scheme. Our oracle lower bound and upper bounds imply that it is minimax-optimal up to a logarithmic factor among all assignment schemes and the lower bound can be achieved by the maximum likelihood estimator as well as popular rank-breaking schemes that decompose partial rankings into pairwise comparisons. The numerical experiments corroborate our theoretical findings.


## 1 Introduction

Given a set of individual preferences from multiple decision makers or judges, we address the problem of computing a consensus ranking that best represents the preference of the population collectively. This problem, known as rank aggregation, has received much attention across various disciplines including statistics, psychology, sociology, and computer science, and has found numerous applications including elections, sports, information retrieval, transportation, and marketing [1, 2, 3, 4]. While consistency of various rank aggregation algorithms has been studied when a growing number of sampled partial preferences is observed over a fixed number of items [5, 6], little is known in the high-dimensional setting where the number of items and the number of observed partial rankings scale simultaneously, which arises in many modern datasets. Inference becomes even more challenging when each individual provides limited information. For example, in the well known Netflix challenge dataset, 480,189 users submitted ratings on 17,770 movies, but on average each user rated only a small fraction of the movies. To pursue a rigorous study in the high-dimensional setting, we assume that users provide partial rankings over subsets of items generated according to the popular Plackett-Luce (PL) model [7] from some hidden preference vector over all the items, and we are interested in estimating the preference vector (see def:PLModel).

Intuitively, inference becomes harder when few users are available or each user is assigned few items to rank, meaning fewer observations. The first goal of this paper is to quantify the number of item assignments needed to achieve a target estimation error. Secondly, in many practical scenarios such as crowdsourcing, the system has control over the item assignment. For such systems, a natural question of interest is how to optimally assign the items for a given budget on the total number of item assignments. Thirdly, a common approach in practice to deal with partial rankings is to break them into pairwise comparisons and apply state-of-the-art rank aggregation methods specialized for pairwise comparisons [8, 9]. It is of both theoretical and practical interest to understand how much the performance degrades when rank breaking schemes are used.

#### Notation.

For any set S, let |S| denote its cardinality. For any positive integer n, let [n] = {1, …, n}. We use standard big O notation, e.g., for any sequences {a_n} and {b_n}, a_n = Θ(b_n) if there is an absolute constant c > 0 such that 1/c ≤ a_n/b_n ≤ c. For a partial ranking σ over a set S, i.e., a bijective mapping from [|S|] to S, let σ^{−1} denote the inverse mapping. All logarithms are natural unless the base is explicitly specified. We say a sequence of events {A_n} holds with high probability if P[A_n] ≥ 1 − c₁n^{−c₂} for two positive constants c₁, c₂.

### 1.1 Problem setup

We describe our model in the context of recommender systems, but it is applicable to other systems with partial rankings. Consider a recommender system with m users indexed by [m] and n items indexed by [n]. For each item i ∈ [n], there is a hidden parameter θ*_i measuring the underlying preference. Each user j, independently of everyone else, randomly generates a partial ranking σ_j over a subset S_j ⊆ [n] of items according to the PL model with the underlying preference vector θ* = (θ*_1, …, θ*_n).

###### Definition 1 (Pl model).

A partial ranking σ over S is generated from θ* under the PL model in two steps: (1) independently assign each item i ∈ S an unobserved value X_i, exponentially distributed with mean e^{−θ*_i}; (2) select σ so that X_{σ(1)} ≤ X_{σ(2)} ≤ ⋯ ≤ X_{σ(|S|)}.

The PL model can be equivalently described in the following sequential manner. To generate a partial ranking σ, first select σ(1) in S randomly from the distribution P[σ(1) = i] = e^{θ*_i} / ∑_{i′∈S} e^{θ*_{i′}}; secondly, select σ(2) in S∖{σ(1)} with the probability distribution P[σ(2) = i] = e^{θ*_i} / ∑_{i′∈S∖{σ(1)}} e^{θ*_{i′}}; continue the process in the same fashion until all the items in S are assigned. The PL model is a special case of the following class of models.
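The sequential description above translates directly into a sampler. The following is a minimal sketch (the function name and interface are our own, not from the paper): at each step the next-ranked item is drawn from the remaining pool with probability proportional to e^{θ_i}.

```python
import numpy as np

def sample_pl_ranking(theta, subset, rng=None):
    """Sample a partial ranking over `subset` under the PL model.

    Follows the sequential description: at each step, item i in the
    remaining pool is selected with probability
    exp(theta[i]) / sum over the pool of exp(theta[i']).
    Returns the items ordered from most to least preferred.
    """
    rng = np.random.default_rng(rng)
    remaining = list(subset)
    ranking = []
    while remaining:
        w = np.exp([theta[i] for i in remaining])
        pick = rng.choice(len(remaining), p=w / w.sum())
        ranking.append(remaining.pop(pick))
    return ranking
```

Repeated calls with the same θ produce independent partial rankings, which is the data-generating process assumed throughout the paper.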

###### Definition 2 (Thurstone model, or random utility model (RUM) ).

A partial ranking σ over S is generated from θ* under the Thurstone model for a given CDF F in two steps: (1) independently assign each item i ∈ S an unobserved utility U_i with CDF F(c − θ*_i); (2) select σ so that U_{σ(1)} ≥ U_{σ(2)} ≥ ⋯ ≥ U_{σ(|S|)}.

To recover the PL model from the Thurstone model, take F to be the CDF of the standard Gumbel distribution: F(c) = e^{−e^{−c}}. Equivalently, take F to be the CDF of −log(X) such that X has the exponential distribution with mean one. For this choice of F, the utility U_i having CDF F(c − θ*_i) is equivalent to U_i = −log(X_i) such that X_i is exponentially distributed with mean e^{−θ*_i}. The corresponding partial permutation σ is such that U_{σ(1)} ≥ U_{σ(2)} ≥ ⋯ ≥ U_{σ(|S|)}, or equivalently, X_{σ(1)} ≤ X_{σ(2)} ≤ ⋯ ≤ X_{σ(|S|)}. (Note the opposite ordering of the X's and U's.)
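The equivalence between the Gumbel and exponential formulations can be checked empirically. This sketch (our own code, not from the paper) compares the first-choice frequencies of both formulations against the PL probabilities e^{θ_i}/∑_{i′} e^{θ_{i′}}:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.0, -1.0])
n_trials = 200_000

# Gumbel formulation: rank by theta + standard Gumbel noise, descending.
g = rng.gumbel(size=(n_trials, 3))
top_gumbel = np.argmax(theta + g, axis=1)

# Exponential formulation: X_i ~ Exp with mean exp(-theta_i), rank ascending.
x = rng.exponential(scale=np.exp(-theta), size=(n_trials, 3))
top_exp = np.argmin(x, axis=1)

# Both should match the PL first-choice probabilities.
pl = np.exp(theta) / np.exp(theta).sum()
for top in (top_gumbel, top_exp):
    freq = np.bincount(top, minlength=3) / n_trials
    assert np.allclose(freq, pl, atol=0.01)
```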

Given the observation of all partial rankings {σ_j} over the subsets {S_j} of items, the task is to infer the underlying preference vector θ*. For the PL model, and more generally for the Thurstone model, we see that θ* and θ* + a1 for any a ∈ ℝ are statistically indistinguishable, where 1 is an all-ones vector. Indeed, under our model, the preference vector is the equivalence class {θ : θ = θ* + a1, a ∈ ℝ}. To get a unique representation of the equivalence class, we assume 1^⊤θ* = 0. Then the space of all possible preference vectors is given by Θ = {θ ∈ ℝⁿ : 1^⊤θ = 0}. Moreover, if θ*_i − θ*_{i′} becomes arbitrarily large for all i′ ≠ i, then with high probability item i is ranked higher than any other item and there is no way to estimate θ*_i to any accuracy. Therefore, we further put the constraint that θ* ∈ [−b, b]ⁿ for some b ∈ ℝ and define Θ_b = Θ ∩ [−b, b]ⁿ. The parameter b characterizes the dynamic range of the underlying preference. In this paper, we assume b is a fixed constant. As observed in [10], if b were scaled with n, then it would be easy to rank items with high preference versus items with low preference, and one can focus on ranking items with close preference.

We denote the number of items assigned to user j by k_j and the average number of assigned items per user by k = (1/m)∑_{j=1}^m k_j; the parameter k may scale with n in this paper. We consider two scenarios for generating the subsets {S_j}: the random item assignment case where the S_j's are chosen independently and uniformly at random from all possible subsets of [n] with sizes given by the k_j's, and the deterministic item assignment case where the S_j's are chosen deterministically.

Our main results depend on the structure of a weighted undirected graph defined as follows.

###### Definition 3 (Comparison graph G).

Each item i ∈ [n] corresponds to a vertex i ∈ [n]. For any pair of vertices i, i′, there is a weighted edge between them if there exists a user who ranks both items i and i′; the weight equals ∑_{j: i,i′∈S_j} 1/(k_j − 1).

Let A denote the weighted adjacency matrix of G. Let d_i = ∑_{i′} A_{ii′}, so d_i is the number of users who rank item i, and without loss of generality assume d₁ ≤ d₂ ≤ ⋯ ≤ dₙ. Let D denote the n × n diagonal matrix formed by {d_i} and define the graph Laplacian as L = D − A. Observe that L is positive semi-definite and the smallest eigenvalue of L is zero, with the corresponding eigenvector given by the normalized all-ones vector. Let λ₁ = 0 ≤ λ₂ ≤ ⋯ ≤ λₙ denote the eigenvalues of L in ascending order.
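A small sketch of the comparison-graph construction (our own code; it assumes the pairwise weight 1/(k_j − 1) per user, which makes the degree d_i equal the number of users ranking item i, as stated above):

```python
import numpy as np

def comparison_laplacian(subsets, n):
    """Build the weighted comparison-graph Laplacian L = D - A.

    For each user j ranking subset S_j of size k_j, every pair {i, i'}
    within S_j receives weight 1/(k_j - 1), so the degree d_i equals
    the number of users who rank item i.
    """
    A = np.zeros((n, n))
    for S in subsets:
        k = len(S)
        for a in range(k):
            for b in range(a + 1, k):
                A[S[a], S[b]] += 1.0 / (k - 1)
                A[S[b], S[a]] += 1.0 / (k - 1)
    D = np.diag(A.sum(axis=1))
    return D - A

def spectral_gap(L):
    """Second-smallest Laplacian eigenvalue (lambda_2)."""
    return np.sort(np.linalg.eigvalsh(L))[1]
```

For example, with subsets [[0, 1, 2], [1, 2, 3]] the degrees are (1, 2, 2, 1), and λ₂ > 0 because the graph is connected.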

#### Summary of main results.

thm:Oracle gives a lower bound for the estimation error that scales as ∑_{i=2}^n 1/d_i. The lower bound is derived based on a genie argument and holds for both the PL model and the more general Thurstone models. thm:CramerRao shows that the Cramér-Rao lower bound scales as ∑_{i=2}^n 1/λ_i. thm:MLEUpperBound gives an upper bound for the squared error of the maximum likelihood (ML) estimator that inversely depends on the spectral gap λ₂. Under the full rank breaking scheme that decomposes a k-way comparison into k(k−1)/2 pairwise comparisons, thm:FBUpperBound gives an upper bound of the same order. If the comparison graph is an expander graph, i.e., λ₂ = Θ(mk/n), our lower and upper bounds match up to a log n factor. This follows from the fact that ∑_i d_i = mk and, for expanders, ∑_{i=2}^n 1/λ_i = Θ(n²/(mk)). Since the Erdős-Rényi random graph is an expander graph with high probability when the average degree is larger than log n, when the system is allowed to choose the item assignment we propose a random assignment scheme under which the items for each user are chosen independently and uniformly at random. It follows from thm:Oracle that mk = Ω(n) is necessary for any item assignment scheme to reliably infer the underlying preference vector, while our upper bounds imply that mk = Ω(n log n) is sufficient with the random assignment scheme, and the bound can be achieved by either the ML estimator, the full rank breaking, or the independence-preserving breaking that decomposes a k-way comparison into ⌊k/2⌋ non-intersecting pairwise comparisons, proving that rank breaking schemes are also nearly optimal.

### 1.2 Related Work

There is a vast literature on rank aggregation, and here we can only hope to cover the fraction of it we see as most relevant. In this paper, we study a statistical learning approach, assuming the observed ranking data is generated from a probabilistic model. Various probabilistic models on permutations have been studied in the ranking literature (see, e.g., [11, 12]). A nonparametric approach to modeling distributions over rankings using sparse representations has been studied in [13]. Most of the parametric models fall into one of the following three categories: the noisy comparison model, the distance-based model, and the random utility model. The noisy comparison model assumes that there is an underlying true ranking over n items, and each user independently gives a pairwise comparison which agrees with the true ranking with some probability p > 1/2. It is shown in [14] that O(n log n) pairwise comparisons, when chosen adaptively, are sufficient for accurately estimating the true ranking.

The Mallows model is a distance-based model, which randomly generates a full ranking σ over n items from some underlying true ranking σ* with probability proportional to e^{−λ d(σ, σ*)}, where λ is a fixed spread parameter and d(·,·) can be any permutation distance such as the Kemeny distance. It is shown in [14] that the true ranking σ* can be estimated accurately given independent full rankings generated under the Mallows model with the Kemeny distance.

In this paper, we study a special case of random utility models (RUMs) known as the Plackett-Luce (PL) model. It is shown in [7] that the likelihood function under the PL model is concave and the ML estimator can be efficiently found using a minorization-maximization (MM) algorithm, which is a variation of the general EM algorithm. We give an upper bound on the error achieved by such an ML estimator, and prove that this is matched by a lower bound. The lower bound is derived by comparing to an oracle estimator which observes the random utilities of the RUM directly. The Bradley-Terry (BT) model is the special case of the PL model where we only observe pairwise comparisons. For the BT model, [10] proposes the RankCentrality algorithm based on the stationary distribution of a random walk over a suitably defined comparison graph and shows that O(n log n) randomly chosen pairwise comparisons are sufficient to accurately estimate the underlying parameters; one corollary of our result is a matching performance guarantee for the ML estimator under the BT model. More recently, [15] analyzed various algorithms including RankCentrality and the ML estimator under a general, not necessarily uniform, sampling scheme.

In a PL model with priors, MAP inference becomes computationally challenging. Instead, an efficient message-passing algorithm is proposed in [16] to approximate the MAP estimate. For a more general family of random utility models, Soufiani et al. in [17, 18] give a sufficient condition under which the likelihood function is concave, and propose a Monte-Carlo EM algorithm to compute the ML estimator for general RUMs. More recently in [8, 9], the generalized method of moments together with rank-breaking is applied to estimate the parameters of the PL model and the random utility model when the data consists of full rankings.

## 2 Main results

In this section, we present our theoretical findings and numerical experiments.

### 2.1 Oracle lower bound

In this section, we derive an oracle lower bound for any estimator of θ*. The lower bound is constructed by considering an oracle who reveals all the hidden scores in the PL model as side information; it holds even for the general Thurstone models.

###### Theorem 1.

Suppose the partial rankings {σ_j} are generated from the Thurstone model for some CDF F. For any estimator θ̂,

$$\inf_{\hat{\theta}}\sup_{\theta^*\in\Theta_b}\mathbb{E}\big[\|\hat{\theta}-\theta^*\|^2\big]\;\ge\;\frac{1}{2I(\mu)+\frac{2\pi^2}{b^2(d_1+d_2)}}\sum_{i=2}^{n}\frac{1}{d_i}\;\ge\;\frac{1}{2I(\mu)+\frac{2\pi^2}{b^2(d_1+d_2)}}\cdot\frac{(n-1)^2}{mk},$$

where μ is the probability density function of F, i.e., μ = F′, and I(μ) = ∫ (μ′(x))²/μ(x) dx; the second inequality follows from Jensen's inequality. For the PL model, which is a special case of the Thurstone models with F being the standard Gumbel distribution, I(μ) = 1.

thm:Oracle shows that the oracle lower bound scales as ∑_{i=2}^n 1/d_i. We remark that the summation begins with i = 2. This makes some sense, in view of the fact that the parameters need to sum to zero. For example, if d₁ is a moderate value and all the other d_i's are very large, then we may be able to accurately estimate θ*_i for i ≥ 2 and therefore accurately estimate θ*₁. The oracle lower bound also depends on the dynamic range b and is tight for b = 0, because a trivial estimator that always outputs the all-zero vector achieves the lower bound.

#### Comparison to previous work

thm:Oracle implies that mk = Ω(n) is necessary for any item assignment scheme to reliably infer θ*, i.e., to ensure a vanishing estimation error. It provides the first converse result on inferring the parameter vector under the general Thurstone models to our knowledge. For the Bradley-Terry model, which is a special case of the PL model where all the partial rankings reduce to pairwise comparisons, i.e., k_j ≡ 2, it is shown in [10] that m = Ω(n) is necessary for the random item assignment scheme to achieve reliable inference based on an information-theoretic argument. In contrast, our converse result is derived based on the Bayesian Cramér-Rao lower bound [19], applies to the general models with any item assignment, and is considerably tighter if the d_i's are of different orders.

### 2.2 Cramér-Rao lower bound

In this section, we derive the Cramér-Rao lower bound for any unbiased estimator of θ*.

###### Theorem 2.

Let k_max = max_j k_j and let U denote the set of all unbiased estimators of θ*, i.e., θ̂ ∈ U if and only if E[θ̂] = θ*. Then

$$\inf_{\hat{\theta}\in U}\sup_{\theta^*\in\Theta_b}\mathbb{E}\big[\|\hat{\theta}-\theta^*\|^2\big]\;\ge\;\Big(1-\frac{1}{k_{\max}}\sum_{\ell=1}^{k_{\max}}\frac{1}{\ell}\Big)^{-1}\sum_{i=2}^{n}\frac{1}{\lambda_i}\;\ge\;\Big(1-\frac{1}{k_{\max}}\sum_{\ell=1}^{k_{\max}}\frac{1}{\ell}\Big)^{-1}\frac{(n-1)^2}{mk},$$

where the second inequality follows from Jensen's inequality.

The Cramér-Rao lower bound scales as ∑_{i=2}^n 1/λ_i. When G is disconnected, i.e., all the items can be partitioned into two groups such that no user ever compares an item in one group with an item in the other group, λ₂ = 0 and the Cramér-Rao lower bound is infinite, which is valid (and of course tight) because there is no basis for gauging any item in one connected component with respect to any item in the other connected component, and accurate inference is impossible for any estimator. Although the Cramér-Rao lower bound only holds for unbiased estimators, we suspect that a lower bound with the same scaling holds for any estimator, but we do not have a proof.

### 2.3 Ml upper bound

In this section, we study the ML estimator based on the partial rankings. The ML estimator of θ* is defined as θ̂_ML ∈ arg max_{θ∈Θ_b} L(θ), where L(θ) is the log likelihood function given by

$$L(\theta)=\sum_{j=1}^{m}\sum_{\ell=1}^{k_j-1}\Big(\theta_{\sigma_j(\ell)}-\log\big(e^{\theta_{\sigma_j(\ell)}}+\cdots+e^{\theta_{\sigma_j(k_j)}}\big)\Big).\tag{1}$$

As observed in [7], L(θ) is concave in θ and thus the ML estimator can be efficiently computed either via the gradient descent method or EM-type algorithms.
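As a concrete illustration, here is a minimal gradient-ascent sketch for the ML estimator (our own code; the paper's experiments use the MM algorithm of [7] instead). Each stage of each ranking contributes +1 to the gradient coordinate of the observed winner and −p_i to every item still in the pool, where p_i is the PL choice probability; the iterate is re-centered so its entries sum to zero.

```python
import numpy as np

def pl_mle(rankings, n, steps=500, lr=0.1):
    """Gradient ascent on the concave PL log-likelihood.

    Each ranking is a list of items from most to least preferred.
    After every step the iterate is projected back to the subspace
    sum(theta) = 0, the normalization used in the paper.
    """
    theta = np.zeros(n)
    for _ in range(steps):
        grad = np.zeros(n)
        for sigma in rankings:
            for l in range(len(sigma) - 1):
                pool = sigma[l:]
                w = np.exp([theta[i] for i in pool])
                p = w / w.sum()
                grad[sigma[l]] += 1.0        # observed winner of this stage
                for i, pi in zip(pool, p):
                    grad[i] -= pi            # expected winner under theta
        theta += lr * grad / len(rankings)
        theta -= theta.mean()
    return theta
```

Because the log-likelihood is concave, plain gradient ascent converges to the ML estimate; the MM algorithm of [7] is typically faster in practice.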

The following theorem gives an upper bound on the error rate that inversely depends on λ₂. Intuitively, by the well-known Cheeger's inequality, if the spectral gap λ₂ becomes larger, then there are more edges across any bi-partition of G, meaning more pairwise comparisons are available between any bi-partition of the items, and therefore θ* can be estimated more accurately.

###### Theorem 3.

Assume λ₂ ≥ c e^{2b}√(λₙ log n) for a sufficiently large constant c in the case with k > 2. Then with high probability,

$$\|\hat{\theta}_{\rm ML}-\theta^*\|^2\le\begin{cases}\dfrac{4(1+e^{2b})^2}{\lambda_2}\sqrt{m\log n}, & \text{if } k=2,\\[2mm] \dfrac{8e^{4b}\sqrt{2mk\log n}}{\lambda_2-16e^{2b}\sqrt{\lambda_n\log n}}, & \text{if } k>2.\end{cases}$$

We compare the above upper bound with the Cramér-Rao lower bound given by thm:CramerRao. Notice that ∑_{i=2}^n 1/λ_i ≥ 1/λ₂, and the upper bound is always larger than the Cramér-Rao lower bound. When the comparison graph G is an expander with λ₂ = Θ(mk/n), the upper bound is only larger than the Cramér-Rao lower bound by a logarithmic factor. In particular, with the random item assignment scheme, λ₂ = Θ(mk/n) with high probability, and as a corollary of thm:MLEUpperBound, mk = Ω(n log n) is sufficient to ensure a vanishing estimation error, proving that the random item assignment scheme with the ML estimation is minimax-optimal up to a log n factor.

###### Corollary 1.

Suppose the S_j's are chosen independently and uniformly at random among all possible subsets of [n] with sizes given by the k_j's. Then there exists a positive constant c such that if m ≥ cn log n when k = 2 and mk ≥ cn log n when k > 2, then with high probability

$$\|\hat{\theta}_{\rm ML}-\theta^*\|^2\le\begin{cases}4(1+e^{2b})^2\sqrt{\dfrac{n^2\log n}{m}}, & \text{if } k=2,\\[2mm] 32e^{4b}\sqrt{\dfrac{2n^2\log n}{mk}}, & \text{if } k>2.\end{cases}$$

#### Comparison to previous work

thm:MLEUpperBound provides the first finite-sample error rates for inferring the parameter vector under the PL model to our knowledge. For the Bradley-Terry model, which is a special case of the PL model with k_j ≡ 2, [10] derived a similar performance guarantee by analyzing the rank centrality algorithm and the ML estimator. More recently, [15] extended the results to the non-uniform sampling scheme of item pairs, but the performance guarantees obtained, when specialized to the uniform sampling scheme, require a substantially larger number of pairwise comparisons to ensure a vanishing estimation error, while our results only require m = Ω(n log n).

### 2.4 Rank breaking upper bound

In this section, we study two rank-breaking schemes which decompose partial rankings into pairwise comparisons.

###### Definition 4.

Given a partial ranking σ over the subset S of size k, the independence-preserving breaking scheme (IB) breaks σ into ⌊k/2⌋ non-intersecting pairwise comparisons of the form {(i_ℓ, i′_ℓ, y_ℓ)} such that {i_ℓ, i′_ℓ} ∩ {i_{ℓ′}, i′_{ℓ′}} = ∅ for any ℓ ≠ ℓ′, and y_ℓ = 1 if σ^{−1}(i_ℓ) < σ^{−1}(i′_ℓ) and y_ℓ = 0 otherwise. The random IB chooses the pairing uniformly at random among all possibilities.

If σ is generated under the PL model, then the IB breaks σ into independent pairwise comparisons generated under the PL model. Hence, we can first break the partial rankings into independent pairwise comparisons using the random IB and then apply the ML estimator on the generated pairwise comparisons with the constraint that θ ∈ Θ_b; denote the resulting estimator by θ̂_IB. Under the random assignment scheme, as a corollary of thm:MLEUpperBound, mk = Ω(n log n) is sufficient to ensure a vanishing estimation error, proving that the random item assignment scheme with the random IB is minimax-optimal up to a log n factor in view of the oracle lower bound in thm:Oracle.
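A sketch of the random IB (our own code, assuming the convention that σ lists items from most to least preferred): the ranked subset is split into ⌊k/2⌋ disjoint pairs chosen uniformly at random, and each pair is turned into one comparison read off the ranking.

```python
import random

def random_ib(sigma, rng=None):
    """Independence-preserving breaking of a partial ranking.

    Splits the ranked items into non-intersecting pairs chosen uniformly
    at random; each pair yields one (winner, loser) comparison consistent
    with sigma. For odd k, one item is left unpaired.
    """
    rng = rng or random.Random()
    pos = {item: r for r, item in enumerate(sigma)}  # rank position of each item
    items = list(sigma)
    rng.shuffle(items)
    pairs = []
    for a, b in zip(items[::2], items[1::2]):
        winner, loser = (a, b) if pos[a] < pos[b] else (b, a)
        pairs.append((winner, loser))
    return pairs
```

Because the pairs are disjoint, the resulting comparisons are mutually independent, which is exactly the property the IB is designed to preserve.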

###### Corollary 2.

Suppose the S_j's are chosen independently and uniformly at random among all possible subsets of [n] with size k. There exists a positive constant c such that if mk ≥ cn log n, then with high probability,

$$\|\hat{\theta}_{\rm IB}-\theta^*\|^2\le 4(1+e^{2b})^2\sqrt{\frac{2n^2\log n}{mk}}.$$
###### Definition 5.

Given a partial ranking σ over the subset S of size k, the full breaking scheme (FB) breaks σ into all k(k−1)/2 possible pairwise comparisons of the form (i, i′, y) such that y = 1 if σ^{−1}(i) < σ^{−1}(i′) and y = 0 otherwise.

If σ is generated under the PL model, then the FB breaks σ into k(k−1)/2 pairwise comparisons which are not independently generated under the PL model. We pretend the pairwise comparisons induced from the full breaking are all independent and maximize the weighted log likelihood function given by

$$L(\theta)=\sum_{j=1}^{m}\frac{1}{2(k_j-1)}\sum_{i,i'\in S_j}\Big(\theta_i\,\mathbb{I}\{\sigma_j^{-1}(i)<\sigma_j^{-1}(i')\}+\theta_{i'}\,\mathbb{I}\{\sigma_j^{-1}(i)>\sigma_j^{-1}(i')\}-\log\big(e^{\theta_i}+e^{\theta_{i'}}\big)\Big)\tag{2}$$

with the constraint that θ ∈ Θ_b. Let θ̂_FB denote the maximizer. Notice that we put the weight 1/(k_j − 1) to adjust the contributions of the pairwise comparisons generated from the partial rankings over subsets with different sizes.
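A sketch of the full breaking (our own code; the per-comparison weight 1/(k − 1) mirrors the weighting in the likelihood above, and σ again lists items from most to least preferred):

```python
def full_breaking(sigma):
    """Full breaking of a partial ranking.

    Every pair of items in sigma yields one (winner, loser, weight)
    comparison with weight 1/(k - 1), so rankings over subsets of
    different sizes contribute comparably.
    """
    k = len(sigma)
    out = []
    for a in range(k):
        for b in range(a + 1, k):
            # sigma[a] is ranked above sigma[b], hence the winner.
            out.append((sigma[a], sigma[b], 1.0 / (k - 1)))
    return out
```

A k-way ranking thus yields k(k−1)/2 weighted comparisons with total weight k/2, i.e., each item effectively participates in one unit of comparison.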

###### Theorem 4.

With high probability,

$$\|\hat{\theta}_{\rm FB}-\theta^*\|^2\le\frac{2(1+e^{2b})^2}{\lambda_2}\sqrt{mk\log n}.$$

Furthermore, suppose the S_j's are chosen independently and uniformly at random among all possible subsets of [n] with size k. There exists a positive constant c such that if mk ≥ cn log n, then with high probability,

$$\|\hat{\theta}_{\rm FB}-\theta^*\|^2\le 4(1+e^{2b})^2\sqrt{\frac{n^2\log n}{mk}}.$$

thm:FBUpperBound shows that the error rate of θ̂_FB inversely depends on λ₂. When the comparison graph is an expander, i.e., λ₂ = Θ(mk/n), the upper bound is only larger than the Cramér-Rao lower bound by a logarithmic factor. A similar observation holds for the ML estimator as shown in thm:MLEUpperBound. With the random item assignment scheme, thm:FBUpperBound implies that the FB only needs mk = Ω(n log n) to achieve reliable inference, which is optimal up to a log n factor in view of the oracle lower bound in thm:Oracle.

#### Comparison to previous work

The rank breaking schemes considered in [8, 9] break the full rankings according to rank positions, while our schemes break the partial rankings according to the item indices. The results in [8, 9] establish the consistency of the generalized method of moments under those rank breaking schemes when the data consists of full rankings. In contrast, cor:IBUpperBound and thm:FBUpperBound apply to the more general setting with partial rankings and provide finite-sample error rates, proving the optimality of the random IB and FB with the random item assignment scheme.

### 2.5 Numerical experiments

Suppose there are n items and the underlying preference vector θ* is uniformly distributed over [−b, b]ⁿ. We generate m full rankings over [n] according to the PL model with parameter θ*. Fix a k that divides n. We break each full ranking into n/k partial rankings over subsets of size k as follows: let {S_ℓ} denote a partition of [n] generated uniformly at random such that |S_ℓ| = k for all ℓ; generate {σ_ℓ} such that σ_ℓ is the partial ranking over set S_ℓ consistent with the full ranking. In this way, in total we generate mn/k k-way comparisons which are all independently generated from the PL model. To compute the ML estimator of θ* based on the generated partial rankings, we apply the minorization-maximization (MM) algorithm proposed in [7]. We measure the estimation error by the normalized mean square error (MSE) defined as ‖θ̂ − θ*‖²/‖θ*‖².
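The breaking of a full ranking into independent k-way comparisons via a random partition can be sketched as follows (our own code; it assumes k divides n):

```python
import numpy as np

def break_into_subsets(full_ranking, k, rng=None):
    """Partition the n items uniformly at random into groups of size k,
    then restrict the full ranking to each group. The restrictions are
    n/k independent k-way comparisons under the PL model."""
    rng = np.random.default_rng(rng)
    n = len(full_ranking)
    items = rng.permutation(n)
    partial = []
    for g in range(n // k):
        group = set(items[g * k:(g + 1) * k])
        # Keep the relative order of the full ranking within the group.
        partial.append([i for i in full_ranking if i in group])
    return partial
```

Each restriction preserves the relative order of the full ranking, and because the PL model is consistent under marginalization, the resulting k-way comparisons are themselves PL-distributed.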

We run the simulation with and . The results are depicted in Fig. 1. We also plot the Cramér-Rao limit given by as per thm:CramerRao. The oracle lower bound in thm:Oracle implies that the normalized MSE is at least . We can see that the normalized MSE approaches the Cramér-Rao limit as increases and achieves the oracle lower bound if further becomes large, suggesting the ML estimator is minimax-optimal. Moreover, with a large number of partial rankings available, i.e., is large enough, when is decreased from to , the normalized MSE increases roughly by a factor of if and if , suggesting that the random IB is minimax-optimal up to a factor. Also, we observe that the normalized MSE is not as sensitive to the value of as claimed by our upper bounds given by cor:MLEUpperBound. Notice that in the case with , according to the PL model, the item with the highest preference is ranked higher than the item with lowest preference with probability .

## 3 Proofs

We introduce some additional notation used in the proofs. For a vector x, let ‖x‖ denote the usual ℓ₂ norm. Let 1 denote the all-ones vector and 0 denote the all-zero vector with the appropriate dimension. Let Sⁿ denote the set of n × n symmetric matrices with real-valued entries. For X ∈ Sⁿ, let λ₁(X) ≤ ⋯ ≤ λₙ(X) denote its eigenvalues sorted in increasing order. Let Tr(X) denote its trace and ‖X‖ denote its spectral norm. For two matrices X, Y ∈ Sⁿ, we write X ⪯ Y if Y − X is positive semi-definite, i.e., λ₁(Y − X) ≥ 0. Recall that L(θ) is the log likelihood function. The first-order partial derivative of L(θ) is given by

$$\nabla_i L(\theta)=\sum_{j:\,i\in S_j}\sum_{\ell=1}^{k_j-1}\mathbb{I}\{\sigma_j^{-1}(i)\ge\ell\}\Big[\mathbb{I}\{\sigma_j(\ell)=i\}-\frac{e^{\theta_i}}{e^{\theta_{\sigma_j(\ell)}}+\cdots+e^{\theta_{\sigma_j(k_j)}}}\Big],\qquad\forall i\in[n]\tag{3}$$

and the Hessian matrix H(θ) ∈ Sⁿ is given by

$$H(\theta)=-\frac{1}{2}\sum_{j=1}^{m}\sum_{i,i'\in S_j}(e_i-e_{i'})(e_i-e_{i'})^\top\sum_{\ell=1}^{k_j-1}\frac{e^{\theta_i+\theta_{i'}}\,\mathbb{I}\{\sigma_j^{-1}(i),\sigma_j^{-1}(i')\ge\ell\}}{\big[e^{\theta_{\sigma_j(\ell)}}+\cdots+e^{\theta_{\sigma_j(k_j)}}\big]^2}.\tag{4}$$

It follows from the definition that −H(θ) is positive semi-definite for any θ ∈ ℝⁿ. Define L_j as

$$L_j=\frac{1}{2(k_j-1)}\sum_{i,i'\in S_j}(e_i-e_{i'})(e_i-e_{i'})^\top,$$

and then the Laplacian of the pairwise comparison graph satisfies L = ∑_{j=1}^m L_j.
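As a numerical sanity check (our own code), the matrices L_j summed over users reproduce the comparison-graph Laplacian: row sums are zero, the diagonal counts how many users rank each item, and the off-diagonal entries are the negated edge weights:

```python
import numpy as np

def L_j(S, n):
    """L_j = 1/(2(k_j - 1)) * sum over ordered pairs i != i' in S_j of
    (e_i - e_i')(e_i - e_i')^T."""
    k = len(S)
    M = np.zeros((n, n))
    for i in S:
        for i2 in S:
            if i != i2:
                e = np.zeros(n)
                e[i], e[i2] = 1.0, -1.0
                M += np.outer(e, e)
    return M / (2 * (k - 1))

# Two users ranking subsets {0,1,2} and {1,2,3} of n = 4 items.
subsets = [[0, 1, 2], [1, 2, 3]]
L = sum(L_j(S, 4) for S in subsets)
assert np.allclose(L.sum(axis=1), 0)          # Laplacian row sums are zero
assert np.allclose(np.diag(L), [1, 2, 2, 1])  # d_i = number of users ranking i
```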

### 3.1 Proof of thm:Oracle

We first introduce a key auxiliary result used in the proof. Let F be a fixed CDF (to be used in the Thurstone model), let b > 0, and suppose Θ ∈ [−b, b] is a parameter to be estimated from an observation U = (U₁, …, U_d), where the U_i's are independent with the common CDF given by F(c − Θ). The following proposition gives a lower bound on the average MSE for a fixed prior distribution based on the Van Trees inequality [19].

###### Proposition 1.

Let p₀ be a probability density on [−1, 1] with p₀(−1) = p₀(1) = 0, and define the prior density of Θ as p(θ) = p₀(θ/b)/b. Then for any estimator T(U) of Θ,

$$\mathbb{E}\big[(\Theta-T(U))^2\big]\ge\frac{1}{d}\cdot\frac{1}{I(\mu)+I(p_0)/(b^2 d)},$$

where μ is the probability density function of F, with I(μ) = ∫ (μ′(x))²/μ(x) dx and I(p₀) = ∫ (p₀′(θ))²/p₀(θ) dθ.

###### Proof.

It follows from the Van Trees inequality that

$$\mathbb{E}\big[(\Theta-T(U))^2\big]\ge\frac{1}{\int I(\theta)p(\theta)\,d\theta+I(p)},$$

where the Fisher information of the d observations is I(θ) = d·I(μ) for every θ, and

$$I(p)=\int_{-b}^{b}\frac{(p'(\theta))^2}{p(\theta)}\,d\theta=\frac{1}{b^2}\int_{-1}^{1}\frac{(p_0'(\theta))^2}{p_0(\theta)}\,d\theta=\frac{1}{b^2}I(p_0).$$

###### Proof of thm:Oracle.

Let θ̂ be a given estimator. The minimax MSE for θ̂ is greater than or equal to the average MSE for a given prior distribution on θ*. Let p₀(θ) = cos²(πθ/2) for θ ∈ [−1, 1]; then I(p₀) = π². If n is even we use the following prior distribution: the prior density of θ*_i for odd i is p(θ) = p₀(θ/b)/b, and θ*_i = −θ*_{i−1} for even i. If n is odd, use the same distribution for θ*₁ through θ*_{n−1} and set θ*_n = 0. Note that 1^⊤θ* = 0 with probability one. For simplicity, we assume n is odd in the rest of this proof; the modification for n even is trivial. We use the genie argument, so that the observer can see the hidden utilities in the Thurstone model. The estimation of θ* then decouples into disjoint problems, so we can focus on the estimation of θ*₁ from the vector U of random variables associated with item 1 and the vector V of random variables associated with item 2. The distribution functions of the U_i's are all F(c − θ*₁) and the distribution functions of the V_i's are all F(c + θ*₁), and the U_i's and V_i's are all mutually independent given θ*₁. Recall that μ is the probability density function of F, i.e., μ = F′. The Fisher information for each of the d₁ + d₂ observations is I(μ), so that prop:CR_exp1 carries over to this situation with d = d₁ + d₂. Therefore, for any estimator T(U, V) of Θ*₁ (the random version of θ*₁),

$$\mathbb{E}\big[(\Theta^*_1-T(U,V))^2\big]\ge\frac{1}{d_1+d_2}\cdot\frac{1}{I(\mu)+\pi^2/(b^2(d_1+d_2))}.$$

By this reasoning, for any odd value of i with i ≤ n − 2, we have

$$\mathbb{E}\big[(\hat{\theta}_i-\theta^*_i)^2\big]+\mathbb{E}\big[(\hat{\theta}_{i+1}-\theta^*_{i+1})^2\big]\ge\frac{2}{I(\mu)+\pi^2/(b^2(d_1+d_2))}\cdot\frac{1}{d_i+d_{i+1}}\ge\frac{1}{2I(\mu)+2\pi^2/(b^2(d_1+d_2))}\Big(\frac{1}{d_{i+1}}+\frac{1}{d_{i+2}}\Big).$$

Summing over all odd values of i in the range 1 ≤ i ≤ n − 2 yields the theorem. Furthermore, since ∑_{i=2}^n d_i ≤ mk, by Jensen's inequality, ∑_{i=2}^n 1/d_i ≥ (n−1)²/∑_{i=2}^n d_i ≥ (n−1)²/(mk).

### 3.2 Proof of thm:CramerRao

The Fisher information matrix is defined as I(θ) = −E[H(θ)] and is given by

$$I(\theta)=\frac{1}{2}\sum_{j=1}^{m}\sum_{i,i'\in S_j}(e_i-e_{i'})(e_i-e_{i'})^\top\sum_{\ell=1}^{k_j-1}\frac{\mathbb{P}_\theta\big[\sigma_j^{-1}(i),\sigma_j^{-1}(i')\ge\ell\big]\,e^{\theta_i+\theta_{i'}}}{\big[e^{\theta_{\sigma_j(\ell)}}+\cdots+e^{\theta_{\sigma_j(k_j)}}\big]^2}.$$

Since each (e_i − e_{i′})(e_i − e_{i′})^⊤ is positive semi-definite, it follows that I(θ) is positive semi-definite. Moreover, λ₁(I(θ)) is zero and the corresponding eigenvector is the normalized all-ones vector. Fix any unbiased estimator θ̂ of θ. Since 1^⊤θ̂ = 0, θ̂ − θ is orthogonal to the all-ones vector. The Cramér-Rao lower bound then implies that E[‖θ̂ − θ‖²] ≥ ∑_{i=2}^n 1/λ_i(I(θ)). Taking the supremum over both sides gives

$$\sup_{\theta}\mathbb{E}\big[\|\hat{\theta}-\theta\|^2\big]\ge\sup_{\theta}\sum_{i=2}^{n}\frac{1}{\lambda_i(I(\theta))}\ge\sum_{i=2}^{n}\frac{1}{\lambda_i(I(0))}.$$

If θ equals the all-zero vector, then

$$\mathbb{P}\big[\sigma_j^{-1}(i),\sigma_j^{-1}(i')\ge\ell\big]=\frac{(k_j-2)(k_j-3)\cdots(k_j-\ell)}{k_j(k_j-1)\cdots(k_j-\ell+2)}=\frac{(k_j-\ell+1)(k_j-\ell)}{k_j(k_j-1)}.$$

It follows from the definition that

$$I(0)=\frac{1}{2}\sum_{j=1}^{m}\sum_{i,i'\in S_j}(e_i-e_{i'})(e_i-e_{i'})^\top\sum_{\ell=1}^{k_j-1}\frac{k_j-\ell}{k_j(k_j-1)(k_j-\ell+1)}\preceq\Big(1-\frac{1}{k_{\max}}\sum_{\ell=1}^{k_{\max}}\frac{1}{\ell}\Big)L.$$

By Jensen’s inequality,

$$\sum_{i=2}^{n}\frac{1}{\lambda_i}\ge\frac{(n-1)^2}{\sum_{i=2}^{n}\lambda_i}=\frac{(n-1)^2}{\operatorname{Tr}(L)}=\frac{(n-1)^2}{\sum_{i=1}^{n}d_i}=\frac{(n-1)^2}{mk}.$$

### 3.3 Proof of thm:MLEUpperBound

The main idea of the proof is inspired by the proof of [10, Theorem 4]. We first introduce several key auxiliary results used in the proof. Observe that E[∇L(θ*)] = 0. The following lemma upper bounds the deviation of ∇L(θ*) from its mean.

###### Lemma 1.

With probability at least 1 − 2e²n^{−1},

$$\|\nabla L(\theta^*)\|_2\le\sqrt{2mk\log n}.\tag{5}$$
###### Proof.

The idea of the proof is to view ∇L(θ*) as the final value of a discrete-time vector-valued martingale with values in ℝⁿ. Consider a user j that ranks the items in S_j. The PL model for the ranking can be generated in a series of rounds. In the first round, the top rated item for the user is found. Suppose it is item i₁. This contributes the term e_{i₁} − E[e_{σ_j(1)}] to ∇L(θ*), where e_i denotes the i-th standard basis vector. This contribution is a mean zero random vector in ℝⁿ and its norm is less than one. In the second round, item i₁ is removed from the competition, and an item is to be selected at random from among S_j ∖ {i₁}. The contribution of the second round for the user to ∇L(θ*) is again a random vector which has conditional mean zero (given the outcome of the first round) and norm less than or equal to one. Considering all users, with k_j − 1 rounds for user j, we see that ∇L(θ*) is the value of a discrete-time martingale at time ∑_{j=1}^m (k_j − 1) = m(k − 1), such that the martingale has initial value zero and increments with norm bounded by one. By the vector version of the Azuma-Hoeffding inequality found in [20, Theorem 1.8] we have

$$\mathbb{P}\{\|\nabla L(\theta^*)\|\ge\delta\}\le 2e^{2}\,e^{-\frac{\delta^{2}}{2m(k-1)}},$$

which implies the result. ∎

Observe that −H(θ) is positive semi-definite with the smallest eigenvalue equal to zero. The following lemma lower bounds its second smallest eigenvalue.

###### Lemma 2.

Fix any θ ∈ Θ_b. Then

$$\lambda_2(-H(\theta))\ge\begin{cases}\dfrac{e^{2b}}{(1+e^{2b})^2}\,\lambda_2, & \text{if } k=2,\\[2mm] \dfrac{1}{4e^{4b}}\Big(\lambda_2-16e^{2b}\sqrt{\lambda_n\log n}\Big), & \text{if } k>2,\end{cases}\tag{6}$$

where the inequality holds with high probability in the case with k > 2.

###### Proof.

Case k = 2: The Hessian matrix simplifies as

$$H(\theta)=-\frac{1}{2}\sum_{j=1}^{m}\sum_{i,i'\in S_j}(e_i-e_{i'})(e_i-e_{i'})^\top\,\frac{e^{\theta_i}}{e^{\theta_i}+e^{\theta_{i'}}}\cdot\frac{e^{\theta_{i'}}}{e^{\theta_i}+e^{\theta_{i'}}}.$$

Observe that H(θ) is deterministic given the {S_j}. Since θ ∈ [−b, b]ⁿ,

$$\frac{e^{\theta_i}\,e^{\theta_{i'}}}{\big(e^{\theta_i}+e^{\theta_{i'}}\big)^2}\ge\frac{e^{2b}}{(1+e^{2b})^2}.$$