SOLAR: Sparse Orthogonal Learned and Random Embeddings

08/30/2020
by Tharun Medini, et al.

Dense embedding models are commonly deployed in commercial search engines: all document vectors are pre-computed, and near-neighbor search (NNS) is performed with the query vector to find relevant documents. However, the bottlenecks of indexing a large number of dense vectors and performing NNS hurt both the query time and the accuracy of these models. In this paper, we argue that high-dimensional, ultra-sparse embeddings are a significantly superior alternative to dense low-dimensional embeddings for both query efficiency and accuracy. Extreme sparsity eliminates the need for NNS by replacing it with simple lookups, while high dimensionality ensures that the embeddings remain informative even when sparse. However, learning extremely high-dimensional embeddings leads to a blow-up in model size. To make training feasible, we propose a partitioning algorithm that learns such high-dimensional embeddings across multiple GPUs without any communication. This is facilitated by our novel asymmetric mixture of Sparse, Orthogonal, Learned and Random (SOLAR) embeddings. The label vectors are random, sparse, and near-orthogonal by design, while the query vectors are learned and sparse. We theoretically prove that this one-sided learning is equivalent to learning both query and label embeddings. With these unique properties, we successfully train 500K-dimensional SOLAR embeddings for the tasks of searching through 1.6M books and multi-label classification on the three largest public datasets. We achieve superior precision and recall compared to the respective state-of-the-art baselines for each task, while being up to 10 times faster.
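To make the core idea concrete, here is a minimal sketch (not the authors' implementation; the dimensionality, sparsity level, and function names are illustrative) of the two properties the abstract relies on: random K-sparse label vectors in a high-dimensional space are near-orthogonal with high probability, and retrieval over such vectors reduces to inverted-index lookups instead of near-neighbor search.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

D = 2 ** 14  # embedding dimensionality (SOLAR scales to 500K; kept small here)
K = 8        # nonzeros per label vector; K << D makes random vectors near-orthogonal
N = 1000     # number of labels (documents)

# Random sparse label vectors: each label activates K of the D dimensions.
# Two independent labels share ~K^2/D dimensions in expectation (here ~0.004),
# so their inner product is almost always zero: near-orthogonality by design.
label_dims = [rng.choice(D, size=K, replace=False) for _ in range(N)]

# Inverted index: dimension -> labels active in that dimension.
inverted = defaultdict(list)
for label, dims in enumerate(label_dims):
    for d in dims:
        inverted[d].append(label)

def retrieve(query_dims, top_k=5):
    """Score labels by how many active query dimensions they share.

    This is a handful of list lookups, not a search over N dense vectors.
    """
    scores = defaultdict(int)
    for d in query_dims:
        for label in inverted[d]:
            scores[label] += 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# A query whose learned sparse embedding hits label 42's active dimensions
# recovers label 42 with a few lookups.
print(retrieve(label_dims[42])[0])  # prints 42
```

In the paper's asymmetric setup, only the query encoder is trained to produce such sparse vectors; the label side stays fixed and random, which is what allows the dimensions to be partitioned across GPUs with no gradient communication.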


Related research

IRLI: Iterative Re-partitioning for Learning to Index (03/17/2021)
  Neural models have transformed the fundamental information retrieval pro...

High-Dimensional Vector Semantics (02/23/2018)
  In this paper we explore the "vector semantics" problem from the perspec...

Revisiting the Vector Space Model: Sparse Weighted Nearest-Neighbor Method for Extreme Multi-Label Classification (02/12/2018)
  Machine learning has played an important role in information retrieval (...

An alternative text representation to TF-IDF and Bag-of-Words (01/28/2013)
  In text mining, information retrieval, and machine learning, text docume...

Binary Embedding-based Retrieval at Tencent (02/17/2023)
  Large-scale embedding-based retrieval (EBR) is the cornerstone of search...

Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products (10/28/2019)
  In the last decade, it has been shown that many hard AI tasks, especiall...

Memory vectors for similarity search in high-dimensional spaces (12/10/2014)
  We study an indexing architecture to store and search in a database of h...
