Assessing top-k preferences
Assessors make preference judgments faster and more consistently than graded relevance judgments. Preference judgments can also recognize distinctions between items that appear equivalent under graded judgments. Unfortunately, preference judgments can require more than linear effort to fully order a pool of items, and evaluation measures for preference judgments are not as well established as those for graded judgments, such as NDCG. In this paper, we explore the assessment process for partial preference judgments, with the aim of identifying and ordering the top items in the pool rather than fully ordering the entire pool. To measure the performance of a ranker, we compare its output against this preferred ordering using a rank similarity measure. We demonstrate the practical feasibility of this approach by crowdsourcing partial preferences for the TREC 2019 Conversational Assistance Track, replacing NDCG with a new measure that can reflect factors beyond relevance. The new measure has its most striking impact when comparing traditional IR techniques to modern neural rankers, where NDCG can fail to recognize significant differences that the new measure exposes.
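As a concrete illustration of how a ranker's output might be scored against a preferred ordering, the sketch below computes a truncated rank-biased overlap (RBO; Webber et al., 2010), one standard top-weighted rank similarity measure. This is not necessarily the paper's exact measure; the function name, parameter defaults, and list-of-document-ID inputs are assumptions for illustration.

```python
def rank_biased_overlap(run, preferred, p=0.9, depth=10):
    """Truncated rank-biased overlap between two rankings.

    Illustrative sketch only (not the paper's exact measure): sums the
    top-weighted prefix agreement between `run` (a ranker's output) and
    `preferred` (the preferred ordering), down to `depth`. The result is
    a lower bound on full RBO, in [0, 1); higher means closer agreement
    on the top items.
    """
    score = 0.0
    seen_run, seen_pref = set(), set()
    for d in range(1, depth + 1):
        # Extend each prefix by one item, if available.
        if d <= len(run):
            seen_run.add(run[d - 1])
        if d <= len(preferred):
            seen_pref.add(preferred[d - 1])
        # Agreement at depth d: fraction of shared items in the prefixes,
        # weighted geometrically so early ranks dominate.
        overlap = len(seen_run & seen_pref)
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score


# Hypothetical example: document IDs are placeholders.
preferred = ["d3", "d1", "d7", "d2"]        # preferred top-k ordering
run = ["d1", "d3", "d2", "d9", "d7"]        # ranker output
print(rank_biased_overlap(run, preferred))  # similarity score in [0, 1)
```

The persistence parameter `p` controls how heavily the measure concentrates on the top ranks, which matches the abstract's goal of identifying and ordering the top items rather than the entire pool.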