Revisiting the Vector Space Model: Sparse Weighted Nearest-Neighbor Method for Extreme Multi-Label Classification

02/12/2018
by Tatsuhiro Aoshima, et al.

Machine learning has played an important role in information retrieval (IR) in recent years. In search engines, for example, query keywords are accepted and documents are returned in order of relevance to the given query; this can be cast as a multi-label ranking problem in machine learning. Generally, the number of candidate documents is extremely large (from several thousand to several million), so the classifier must handle a very large number of labels. This problem is referred to as extreme multi-label classification (XMLC). In this paper, we propose a novel approach to XMLC termed the Sparse Weighted Nearest-Neighbor Method. This technique can be derived as a fast implementation of state-of-the-art (SOTA) one-versus-rest linear classifiers for very sparse datasets. In addition, we show that the classifier can be written as a sparse generalization of the representer theorem with a linear kernel. Furthermore, our method can be viewed as the vector space model used in IR. Finally, we show that the Sparse Weighted Nearest-Neighbor Method can process data points in real time on XMLC datasets, matching the performance of SOTA models while using a single thread and a smaller storage footprint. In particular, our method outperforms the SOTA models on a dataset with three million labels.
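To make the idea concrete, a classifier of this kind can be expressed in representer-theorem form as a similarity-weighted vote over training documents, f(x) = Σᵢ αᵢ ⟨x, xᵢ⟩ yᵢ, which reduces to a pair of sparse matrix products. The sketch below is illustrative only and is not the paper's exact weighting scheme: it assumes SciPy CSR matrices, TF-IDF-style sparse feature vectors X, a binary document-to-label indicator matrix Y, uniform weights αᵢ = 1, and a plain inner product as the linear kernel; the toy data and names are hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy corpus: 4 training documents, 6 features, 5 labels (all values illustrative).
X = csr_matrix(np.array([
    [0.9, 0.0, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.4, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.6, 0.0, 0.0, 0.9],
]))
Y = csr_matrix(np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
]))

def predict_top_k(query, X, Y, k=3):
    """Score labels by a similarity-weighted nearest-neighbor vote."""
    sims = query.dot(X.T)                    # 1 x n_train sparse similarities (linear kernel)
    scores = sims.dot(Y).toarray().ravel()   # aggregate into 1 x n_labels label scores
    return np.argsort(-scores)[:k]           # indices of the top-k labels

query = csr_matrix(np.array([[0.7, 0.0, 0.2, 0.0, 0.1, 0.0]]))
print(predict_top_k(query, X, Y))
```

Because both products touch only the nonzero entries of the query, the feature matrix, and the label matrix, the cost per query scales with the sparsity of the data rather than with the total number of labels, which is what makes a vector-space-model view attractive at XMLC scale.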
