Background. Popular recommendation systems such as collaborative filtering are based on a partially observed ratings matrix. The underlying hypothesis is that the true/latent score matrix is low-rank and we observe a partial, noisy version of it. Therefore, matrix completion algorithms are used for learning, cf. [8, 14, 15, 20]. In reality, however, observed preference data consists of more than just scores. For example, clicking one of many choices while browsing provides a partial order between the clicked choice and the other choices. Further, scores convey ordinal information as well; e.g. a score of 4 for paper A and a score of 7 for paper B by a reviewer suggests the ordering B ≻ A. Similar motivations led Samuelson to propose the Axiom of Revealed Preference as the model for rational behavior. In a nutshell, it states that consumers have a latent order of all objects, and the preferences revealed through actions/choices are consistent with this order. If indeed all consumers had an identical ordering, then learning preferences from partial preferences would effectively be the question of sorting.
In practice, individuals have different orderings of interest, and further, each individual is likely to make noisy choices. This naturally suggests the following model: each individual has a latent distribution over orderings of the objects of interest, and the revealed partial preferences are consistent with it, i.e. they are samples from the distribution. Subsequently, the preference of the population as a whole can be associated with a distribution over permutations. Recall that the low-rank structure for score matrices, as a model, tries to capture the fact that there are only a few different types of choice profiles. In the context of modeling consumer choices as a distribution over permutations, the MultiNomial Logit (MNL) model with a small number of mixture components provides such a model.
Mixed MNL. Given $n$ objects or choices of interest, an MNL model is a parametric distribution over permutations of $[n]$ with parameters $w \in \mathbb{R}_+^n$: each object $i \in [n]$ has a positive weight $w_i$ associated with it. Permutations are then generated randomly as follows: choose one of the $n$ objects to be ranked first at random, where object $i$ is chosen
with probability $w_i / \sum_{j \in [n]} w_j$. Let $\sigma_1$ be the object chosen for the first position. To select the second-ranked object, choose from the remaining $n-1$ objects with probability proportional to their weights. We repeat until objects for all ranked positions are chosen. It can be easily seen that, as per this model, an item $i$ is ranked higher than an item $j$ with probability $w_i / (w_i + w_j)$.
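The sequential sampling procedure above (the Plackett-Luce sampler) can be sketched as follows; this is an illustrative implementation under the stated model, with `weights` standing in for the MNL parameters $w$.

```python
import random

def sample_mnl_permutation(weights, rng=random):
    """Sample a full ranking from an MNL model: at each step, the next-ranked
    object is drawn from the remaining objects with probability proportional
    to its weight (so object i is ranked first with prob. w_i / sum_j w_j)."""
    remaining = list(range(len(weights)))
    ranking = []
    while remaining:
        pick = rng.choices(remaining,
                           weights=[weights[i] for i in remaining])[0]
        ranking.append(pick)
        remaining.remove(pick)
    return ranking
```

Under this sampler, item $i$ precedes item $j$ with probability $w_i / (w_i + w_j)$, consistent with the pair-wise marginal stated above.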
In the mixed MNL model with $r$ mixture components, each component corresponds to a different MNL model: let $w^{(1)}, \dots, w^{(r)} \in \mathbb{R}_+^n$ be the corresponding parameters of the $r$ components. Let $p = (p_1, \dots, p_r)$ denote the mixture distribution, i.e. $p_a \geq 0$ and $\sum_a p_a = 1$. To generate a permutation at random, first choose a component $a \in [r]$ with probability $p_a$, and then draw a random permutation as per the MNL model with parameters $w^{(a)}$.
Brief history. The MNL model is an instance of a class of models introduced by Thurstone. The description of the MNL provided here was formally established by McFadden. The same model (in the form of pair-wise marginals) was introduced independently by Zermelo as well as by Bradley and Terry. Luce established that MNL is the only distribution over permutations that satisfies the axiom of Independence from Irrelevant Alternatives.
On learning distributions over permutations, the question of learning a single MNL model, and more generally instances of Thurstone's model, has been of interest for quite a while. The maximum likelihood estimator, which is logistic regression for MNL, has long been known to be consistent in the large-sample limit. Recently, RankCentrality was established to be statistically efficient. For learning a sparse mixture model, i.e. a distribution over permutations with each mixture component being a delta distribution, sufficient conditions are known under which mixtures can be learnt exactly using pair-wise marginals: effectively, as long as the number of components scales appropriately and the components satisfy an appropriate incoherence condition, a simple iterative algorithm can recover the mixture. However, that approach is not robust with respect to noise in the data or finite-sample error in marginal estimation. Other approaches have been proposed to recover the model using convex-optimization-based techniques, cf. [10, 18]. The MNL model is a special case of a larger family of discrete choice models known as Random Utility Models (RUMs), and efficient algorithms for learning RUMs, including from partial rankings, have been introduced [3, 4]. We note that the above list of references is very limited, including only closely related literature. Given the nature of the topic, there have been many exciting lines of research over the past century, and we are unable to provide comprehensive coverage due to space limitations.
Problem. Given observations from the mixed MNL model, we wish to learn the model parameters: the mixing distribution $p$ and the parameters $w^{(a)}$ of each component. The observations are in the form of pair-wise comparisons. Formally, to generate an observation, first one of the $r$ mixture components is chosen; then, for $k$ of the possible pairs, comparison outcomes are observed as per this MNL component. (We shall assume that the outcomes of these $k$ pairs are independent of each other, but come from the same MNL mixture component. This is effectively true even if they were generated by first sampling a permutation from the chosen MNL mixture component and then observing the implication of this permutation for the specific pairs, as long as the pairs are distinct, due to the Independence of Irrelevant Alternatives hypothesis of Luce that is satisfied by MNL.) These $k$ pairs are chosen, uniformly at random, from a pre-determined set of pairs $E$. We shall assume that the selection of $E$ is such that the undirected graph $G = ([n], E)$ is connected.
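The observation process just described can be sketched as below; this is a minimal illustration under the stated assumptions, with the function name, `W` (the list of component weight vectors), and the sparse dict representation being our own conventions rather than the paper's.

```python
import random

def sample_observation(p, W, E, k, rng=random):
    """One observation from the mixed MNL: choose component a with prob. p[a],
    draw k pairs uniformly without replacement from E, and record +1 if the
    first item of the pair wins (prob. w_i1 / (w_i1 + w_i2)), else -1.
    Returns a sparse {pair-index: outcome} dict; unchosen pairs are implicitly 0."""
    a = rng.choices(range(len(p)), weights=p)[0]
    w = W[a]
    x = {}
    for j in rng.sample(range(len(E)), k):
        i1, i2 = E[j]
        x[j] = +1 if rng.random() < w[i1] / (w[i1] + w[i2]) else -1
    return x
```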
We ask the following questions of interest: Is it always feasible to learn the mixed MNL model? If not, under what conditions, and how many samples are needed? How computationally expensive are the algorithms?
We briefly recall a recent result suggesting that it is impossible to learn mixed MNL models in general. One such example is described in Figure 1. It depicts an example with two mixture components and a uniform mixture distribution: in the first case, mixture component 1 produces one fixed ordering with probability one, and mixture component 2 produces another fixed ordering; similarly for the second case, the two mixtures are made up of a different pair of fixed permutations. It is easy to see that the distribution over pair-wise comparisons generated from these two mixture models is identical. Therefore, it is impossible to differentiate the two using such pair-wise comparisons. In general, it has been established that there exist mixture distributions over objects that are impossible to distinguish using comparison data of this form. That is, learning the mixed MNL model is not always possible.
Contributions. The main contribution of this work is the identification of sufficient conditions under which the mixed MNL model can be learnt efficiently, both statistically and computationally. Concretely, we propose a two-phase learning algorithm: in the first phase, using a tensor decomposition method for learning mixtures of discrete product distributions, we identify the pair-wise marginals associated with each of the mixture components; in the second phase, we use these pair-wise marginals to learn the parameters associated with each MNL mixture component.
The algorithm in the first phase builds upon the recent work of Jain and Oh. In particular, Theorem 3 generalizes their work to the setting where, for each sample, we have limited information: their result would require that each individual reveal an entire permutation, whereas we have extended the result to cope with the current setting in which we only observe a potentially small number of pair-wise comparisons per sample. The algorithm in the second phase utilizes RankCentrality. Its analysis in Theorem 4 works in a setting where observations are no longer independent, as required in the original analysis.
We find that as long as certain rank and incoherence conditions are satisfied by the parameters of the mixture components, the two-phase algorithm described above learns the mixture distribution and the parameters associated with each mixture component faithfully, using a number of samples that scales polynomially in the problem size, with constants dependent on the incoherence between mixture components, as long as the graph $G$ of potential comparisons is a spectral expander with a suitably large total number of edges. For the precise statement, we refer to Theorem 1.
The proposed algorithms are iterative and primarily based on spectral properties of the underlying tensors/matrices, with provable, fast convergence guarantees. That is, the algorithms are not only polynomial time; they are practical enough to be scalable to high-dimensional data sets.
Notations. We use $[n]$ to denote the set of the first $n$ positive integers. We use $\otimes$ to denote the outer product, such that $(u \otimes v \otimes w)_{ijk} = u_i v_j w_k$. Given a third-order tensor $T$ and a matrix $V$, we define the linear mapping $T[V, V, V]$ by $T[V, V, V]_{abc} = \sum_{i,j,k} T_{ijk} V_{ia} V_{jb} V_{kc}$. We let $\|u\|$ denote the Euclidean norm of a vector, $\|A\|_2$ the operator norm of a matrix, and $\|A\|_F$ the Frobenius norm. We say an event happens with high probability (w.h.p.) if its probability is lower bounded by a quantity converging to one as $n$ scales to infinity.
2 Main result
In this section, we describe the main result: sufficient conditions under which mixed MNL models can be learnt using tractable algorithms. We provide a useful illustration of the result and discuss its implications.
Definitions. Let $x^{(1)}, \dots, x^{(N)}$ denote the collection of $N$ observations, each of which is a $d$-dimensional, $\{-1, 0, +1\}$-valued vector, where $d = |E|$. Recall that each observation is obtained by first selecting one of the $r$ mixture MNL components, and then viewing the outcomes, as per the chosen component, of $k$ randomly chosen pair-wise comparisons from the pre-determined comparisons $E$. Let $x^{(t)}$ denote the $t$th observation, with $x^{(t)}_j = 0$ if the $j$th pair $(i_1, i_2) \in E$ is not among the $k$ randomly chosen pairs, and $x^{(t)}_j = +1$ (respectively $-1$) if $i_1 \succ i_2$ (respectively $i_2 \succ i_1$) as per the chosen MNL mixture component. By definition, it is easy to see that, conditioned on component $a$ being chosen, for any $t$ and $j = (i_1, i_2) \in E$,
$$\mathbb{E}\big[x^{(t)}_j \,\big|\, a\big] \;=\; \frac{k}{d}\Big(\mathbb{P}_a(i_1 \succ i_2) - \mathbb{P}_a(i_2 \succ i_1)\Big).$$
We shall denote $\mu^{(a)}_j = \mathbb{P}_a(i_1 \succ i_2) - \mathbb{P}_a(i_2 \succ i_1)$ for $j = (i_1, i_2) \in E$ and $a \in [r]$. Therefore, in vector form,
$$\mathbb{E}\big[x^{(t)}\big] \;=\; \frac{k}{d}\, M p,$$
where $M = [\mu^{(1)} \cdots \mu^{(r)}] \in \mathbb{R}^{d \times r}$ is the matrix with $r$ columns, each representing one of the mixture components, and $p$ is the vector of mixture probabilities. By independence, for any $t$ and any two distinct pairs $j \neq j'$, $\mathbb{E}[x^{(t)}_j x^{(t)}_{j'}]$ is proportional to $\sum_a p_a \mu^{(a)}_j \mu^{(a)}_{j'}$. Therefore, the matrix $\mathbb{E}[x^{(t)} (x^{(t)})^{\mathsf T}]$, or equivalently the tensor $\mathbb{E}[x^{(t)} \otimes x^{(t)}]$, is proportional to
$$M_2 \;=\; M P M^{\mathsf T} \;=\; \sum_{a=1}^{r} p_a\, \mu^{(a)} (\mu^{(a)})^{\mathsf T}$$
except in the diagonal entries, where $P = \mathrm{diag}(p_1, \dots, p_r)$ is the diagonal matrix with its entries being the mixture probabilities. In a similar manner, the tensor $\mathbb{E}[x^{(t)} \otimes x^{(t)} \otimes x^{(t)}]$ is proportional to (except in entries with repeated indices)
$$M_3 \;=\; \sum_{a=1}^{r} p_a\, \mu^{(a)} \otimes \mu^{(a)} \otimes \mu^{(a)}.$$
Indeed, the empirical estimates $\widehat{M}_2$ and $\widehat{M}_3$, defined as the appropriately rescaled empirical second and third moments of the observations, provide good proxies for $M_2$ and $M_3$ for a large enough number of samples, and shall be utilized crucially for learning the model parameters from observations.
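To make the construction of the empirical second moment concrete, here is a minimal pure-Python sketch; observations use the sparse dict convention from earlier, the function name is ours, and the overall rescaling constant relating this average to $M_2$ is omitted.

```python
def empirical_second_moment(observations, d):
    """Average of x x^T over observations; each observation is a sparse
    {pair-index: +/-1} dict over the d pairs in E.  The diagonal entries are
    biased (x_j^2 is 0 or 1, not a product over distinct pairs) and are
    re-estimated separately in the algorithm's completion step."""
    N = len(observations)
    M2 = [[0.0] * d for _ in range(d)]
    for x in observations:
        for j, vj in x.items():
            for jp, vjp in x.items():
                M2[j][jp] += vj * vjp / N
    return M2
```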
Sufficient conditions for learning. With the above discussion, we state sufficient conditions for learning the mixed MNL model in terms of properties of $M_2$ and the graph $G$:
(C1) $M_2$ has rank $r$; let $\sigma_{\max}(M_2)$ and $\sigma_{\min}(M_2)$ be the largest and smallest singular values of $M_2$.
(C2) $M_2$ is incoherent: $\mu(M_2)$ is bounded by a large enough universal constant.
(C3) The graph $G = ([n], E)$ is connected, with a spectral gap bounded below.
In the above, $\mu(S)$ represents the incoherence of a symmetric matrix $S$. We recall that for a symmetric matrix $S \in \mathbb{R}^{d \times d}$ of rank $r$, with singular value decomposition $S = U \Sigma U^{\mathsf T}$, the incoherence is defined as
$$\mu(S) \;=\; \sqrt{\frac{d}{r}}\; \max_{i \in [d]} \big\| U^{\mathsf T} e_i \big\|,$$
where $e_i$ is the $i$th standard basis vector.
Note that we choose a graph $G$ to collect pairwise data on, and we want to use a graph that is connected, has a large spectral gap, and has a small number of edges. In condition (C3), we need connectivity since we cannot estimate the relative strengths between disconnected components. Further, it is easy to generate a graph with spectral gap bounded below by a universal constant and with a near-linear number of edges, for example using the configuration model or Erdős-Rényi graphs. In condition (C2), we require the matrix $M_2$ to be sufficiently incoherent with bounded $\mu(M_2)$. For example, if the profile of each type in the mixture distribution is sufficiently different, the incoherence condition holds. We further define the condition number $\kappa = \sigma_{\max}(M_2) / \sigma_{\min}(M_2)$ and related quantities used in the statement below. The following theorem provides a bound on the error; we refer to the appendix for a proof.
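To make condition (C3) concrete, the following sketch samples the comparison set $E$ from an Erdős-Rényi model with edge probability of order $\log n / n$ and checks connectivity by breadth-first search; the constant `c` is an illustrative choice of ours, not one prescribed here.

```python
import random
from collections import deque
from math import log

def erdos_renyi_pairs(n, c=4.0, rng=random):
    """Sample the comparison set E from G(n, q) with q = c*log(n)/n; for c
    large enough this is w.h.p. connected with O(n log n) edges."""
    q = min(1.0, c * log(n) / n)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < q]

def is_connected(n, E):
    """BFS connectivity check of the undirected graph ([n], E)."""
    adj = {i: [] for i in range(n)}
    for i, j in E:
        adj[i].append(j)
        adj[j].append(i)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n
```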
Theorem 1. Consider a mixed MNL model satisfying conditions (C1)-(C3). Then for any target accuracy there exist positive numerical constants such that, for any sample size satisfying the required lower bound, Algorithm 1 produces estimates $\hat{p}$ and $\hat{w}^{(a)}$ so that, with high probability, the corresponding error bounds hold for all $a \in [r]$, as long as the remaining conditions on the problem parameters are met.
An illustration of Theorem 1. To understand the applicability of Theorem 1, consider a concrete example with a small number of mixture components; let the corresponding weights $w^{(a)}$ be generated by choosing each weight uniformly at random from a bounded interval. In particular, the rank order for each component is a uniformly random permutation. Let the mixture distribution be uniform as well, i.e. $p_a = 1/r$. Finally, let the graph $G$ be chosen as per the Erdős-Rényi model, with each edge included independently with probability of order $\log n / n$. For this example, it can be checked that Theorem 1 guarantees small error in both $\hat{p}$ and $\hat{w}^{(a)}$ with a sample size polynomial in the problem size; with fewer comparisons per observation, a larger sample size is needed, so limited samples per observation lead to a multiplicative penalty in sample complexity. To provide bounds on the problem parameters for this example, we use standard concentration arguments. It is well known for Erdős-Rényi random graphs that, with high probability, the number of edges and the vertex degrees concentrate around their expectations. Also, using standard concentration arguments for the spectra of random matrices, it follows that the spectral gap of $G$ is bounded below w.h.p. Since we assume the weights to lie in a bounded interval, the dynamic range is bounded as well. The following Proposition makes these bounds precise.
Proposition 1. For the above example, the stated bounds on the number of edges, the degrees, the spectral gap, and the dynamic range hold with high probability.
Suppose now that, for general $n$ and $r$, we are interested in the well-behaved scenario where the condition number and the dynamic range are bounded. To achieve an arbitrarily small error rate, it suffices for the sample size to grow polynomially in $n$ and $r$.
We describe the algorithm achieving the bound in Theorem 1. Our approach is two-phased. First, learn the moments of the mixture using a tensor decomposition, cf. Algorithm 2: for each type $a \in [r]$, produce an estimate $\hat{p}_a$ of the mixture weight and an estimate $\hat{\mu}^{(a)}$ of the expected outcome $\mu^{(a)}$ defined as in (1). Second, for each $a$, using the estimate $\hat{\mu}^{(a)}$, apply RankCentrality, cf. Section 3.2, to estimate the MNL weights $w^{(a)}$. To achieve Theorem 1, sufficiently accurate estimates in each phase suffice. Next, we describe the two phases of the algorithm and the associated technical results.
3.1 Phase 1: Spectral decomposition.
The next theorem shows that the exact moments $M_2$ and $M_3$ are sufficient to learn $p$ and $M$ exactly, when $M$ has rank $r$ (throughout, we assume that $r \leq d$).
Theorem 2 (Theorem 3.1 of Jain and Oh).
Let $M$ have rank $r$, and let $W \in \mathbb{R}^{d \times r}$ be a whitening matrix of $M_2$, i.e. $W^{\mathsf T} M_2 W = I_r$. Then there exist an orthogonal matrix $V = [v_1 \cdots v_r]$ and eigenvalues $\lambda_1, \dots, \lambda_r$ such that the orthogonal tensor decomposition of the whitened tensor is
$$M_3[W, W, W] \;=\; \sum_{a=1}^{r} \lambda_a\, v_a \otimes v_a \otimes v_a.$$
The parameters of the mixture distribution are then recovered as $p_a = 1/\lambda_a^2$ and $\mu^{(a)} = \lambda_a (W^{\mathsf T})^{\dagger} v_a$.
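The orthogonal decomposition in Theorem 2 can be computed by the tensor power method. Below is a minimal pure-Python sketch of one power-iteration pass on an already-whitened, orthogonally decomposable tensor; whitening and deflation are omitted, and the function names are ours.

```python
import random

def tensor_apply(T, v):
    """Compute T[I, v, v]_i = sum_{j,k} T[i][j][k] * v[j] * v[k]."""
    n = len(v)
    return [sum(T[i][j][k] * v[j] * v[k] for j in range(n) for k in range(n))
            for i in range(n)]

def tensor_power_iteration(T, iters=100, rng=random):
    """Recover one (eigenvalue, eigenvector) pair of an orthogonally
    decomposable symmetric tensor T = sum_a lambda_a v_a^{(x)3} via the
    power update v <- T[I, v, v] / ||T[I, v, v]||."""
    n = len(T)
    v = [rng.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        u = tensor_apply(T, v)
        norm = sum(x * x for x in u) ** 0.5
        v = [x / norm for x in u]
    lam = sum(tensor_apply(T, v)[i] * v[i] for i in range(n))
    return lam, v
```

In practice one deflates (subtracts $\lambda_a v_a^{\otimes 3}$) and restarts to recover all $r$ pairs; which eigenpair a single run finds depends on the random initialization.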
The main challenge in estimating $M_2$ (resp. $M_3$) from empirical data is the diagonal entries. In Jain and Oh, an alternating minimization approach for matrix completion is used to find the missing diagonal entries of $M_2$, and a least squares method is used for estimating the tensor directly from the samples. Let $\Omega$ denote the set of off-diagonal indices of a $d \times d$ matrix and $\Omega_3$ the set of off-diagonal indices of a $d \times d \times d$ tensor (indices that are pairwise distinct), such that the corresponding projections $P_\Omega(A)$ and $P_{\Omega_3}(T)$ set to zero all entries outside $\Omega$ and $\Omega_3$, for $A \in \mathbb{R}^{d \times d}$ and $T \in \mathbb{R}^{d \times d \times d}$, respectively.
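The off-diagonal projections just defined can be sketched as follows, using the convention that a tensor entry is "diagonal" whenever any two of its indices coincide.

```python
def proj_offdiag_matrix(A):
    """P_Omega: keep off-diagonal entries of a square matrix, zero the diagonal."""
    n = len(A)
    return [[A[i][j] if i != j else 0.0 for j in range(n)] for i in range(n)]

def proj_offdiag_tensor(T):
    """P_Omega3: keep entries whose three indices are pairwise distinct."""
    n = len(T)
    return [[[T[i][j][k] if len({i, j, k}) == 3 else 0.0
              for k in range(n)] for j in range(n)] for i in range(n)]
```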
In light of the above discussion, we shall use $P_\Omega$ and $P_{\Omega_3}$ to obtain estimates of the diagonal entries of $M_2$ and $M_3$, respectively. To keep the technical arguments simple, we shall use the first half of the samples to form $\widehat{M}_2$ and the second half to form $\widehat{M}_3$ in Algorithm 2.
Next, we state the correctness of Algorithm 2 when the estimation error is small; the proof is in the Appendix.
3.2 Phase 2: RankCentrality.
Recall that $E$ represents the collection of pairs and $G = ([n], E)$ is the corresponding graph. Let $\hat{\mu}^{(a)}$ denote the estimate of $\mu^{(a)}$ for mixture component $a \in [r]$, where $\mu^{(a)}$ is defined as per (1). For each $a$, using $\hat{\mu}^{(a)}$ and $G$, we shall use RankCentrality to obtain an estimate of $w^{(a)}$. Next, we describe the algorithm and the guarantees associated with it.
Without loss of generality, we can assume that $w^{(a)}$ is normalized so that $\sum_i w^{(a)}_i = 1$ for all $a \in [r]$. Given this normalization, RankCentrality estimates $w^{(a)}$
as the stationary distribution of an appropriate Markov chain on $G$. The transition probabilities are $0$ for all pairs $(i, j) \notin E$. For $(i, j) \in E$, they are a function of $\hat{\mu}^{(a)}$: the transition probability from $i$ to $j$ is proportional to the estimated probability that $j$ is preferred to $i$, normalized by the maximum degree $d_{\max}$ of $G$.
Finally, the self-transition probabilities are set so that each row sums to one: $\hat{P}_{ii} = 1 - \sum_{j \neq i} \hat{P}_{ij}$ for all $i$. Let $\hat{\pi}$ be a stationary distribution of the Markov chain defined by $\hat{P}$. That is, $\hat{\pi}^{\mathsf T} \hat{P} = \hat{\pi}^{\mathsf T}$.
Computationally, we suggest obtaining the estimate of $\hat{\pi}$ by using power iteration; as argued below, a modest number of iterations is sufficient to obtain a reasonably good estimate of $\hat{\pi}$.
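A minimal sketch of this Markov chain construction and its power iteration follows (pure Python); here `pref[(i, j)]` stands for the estimated probability that $j$ is preferred to $i$, and in the usage below we feed it the exact MNL marginals $w_j / (w_i + w_j)$, for which the stationary distribution is exactly the normalized weight vector by reversibility.

```python
def rank_centrality(n, E, pref, iters=200):
    """RankCentrality sketch: build the chain with transition probability
    P[i][j] = pref[(i, j)] / d_max for each edge (i, j) in E (and the reverse
    move with the complementary probability), add self-loops so rows sum to
    one, and return the stationary distribution via power iteration."""
    deg = [0] * n
    for i, j in E:
        deg[i] += 1
        deg[j] += 1
    dmax = max(deg)
    P = [[0.0] * n for _ in range(n)]
    for i, j in E:
        P[i][j] = pref[(i, j)] / dmax          # move i -> j: j beats i
        P[j][i] = (1.0 - pref[(i, j)]) / dmax  # move j -> i: i beats j
    for i in range(n):
        P[i][i] = 1.0 - sum(P[i])              # lazy self-loop
    pi = [1.0 / n] * n
    for _ in range(iters):                     # pi^T <- pi^T P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```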
The underlying assumption here is that there is a unique stationary distribution, which is established by our result under the conditions of Theorem 1. Now $\hat{P}$ is an approximation of the ideal transition matrix $\tilde{P}$, defined analogously with the true $\mu^{(a)}$ in place of $\hat{\mu}^{(a)}$: $\tilde{P}_{ij} = 0$ if $(i, j) \notin E$, and for $(i, j) \in E$ the transition probability from $i$ to $j$ is proportional to the true probability $w^{(a)}_j / (w^{(a)}_i + w^{(a)}_j)$ that $j$ is preferred to $i$. Such an ideal Markov chain is reversible, and as long as $G$ is connected (which it is, in our case, by choice), the stationary distribution of this ideal chain is $w^{(a)}$ (recall, we have assumed $w^{(a)}$ to be normalized so that its components sum up to one).
In what follows, we state a result about how the approximation error in $\hat{P}$ translates into the error between $\hat{\pi}$ and $w^{(a)}$. Recall that $d_{\max}$ and $d_{\min}$ are the maximum and minimum vertex degrees of $G$, and the spectral gap of $G$ is as defined in (9).
Theorem 4. Let $G$ be non-bipartite and connected, and let the approximation error $\|\hat{P} - \tilde{P}\|_2$ be small enough. Then, for some positive universal constant, the error $\|\hat{\pi} - w^{(a)}\|$ is bounded in terms of the approximation error, the spectral gap of $G$, and the ratio $d_{\max}/d_{\min}$.
Moreover, starting from any initial condition, power iteration produces an estimate of $\hat{\pi}$ within twice the above stated error bound in a number of iterations that is logarithmic in the desired accuracy.
The proof of the above result can be found in the Appendix. For a spectral expander (e.g. a connected Erdős-Rényi graph, with high probability), the spectral gap is bounded below by a universal constant, and therefore the bound is effectively determined by the approximation error for bounded dynamic range, i.e. bounded $\max_{i,j} w_i / w_j$.
Learning distributions over permutations of objects from partial observations is fundamental to many domains. In this work, we have advanced the understanding of this question by characterizing sufficient conditions, and an associated algorithm, under which it is feasible to learn the mixed MNL model in a computationally and statistically efficient (polynomial in the problem size) manner from partial, pair-wise comparisons. The conditions are natural (the mixture components should be "identifiable" given partial preference/comparison data) and are stated in terms of full-rank and incoherence conditions on the second moment matrix. The algorithm allows learning of the mixture components as long as the number of mixture components scales as characterized by these conditions.