Affinity Weighted Embedding

01/17/2013
by Jason Weston, et al.

Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation, and annotation tasks. However, despite being scalable to large datasets, they do not take full advantage of the extra data due to their linear nature, and they typically underfit. We propose a new class of models that aims to provide improved performance while retaining many of the benefits of the existing class of embedding models. Our new approach works by iteratively learning a linear embedding model where the next iteration's features and labels are reweighted as a function of the previous iteration. We describe several variants of the family and give some initial results.
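The iterative scheme in the abstract can be sketched in code. This is a minimal illustration, not the paper's method: the abstract does not specify the loss, the embedding learner, or the exact reweighting function, so a softmax cross-entropy stand-in replaces the ranking losses used by Wsabie/PSI, and the affinity-based feature reweighting is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n examples, d features, L labels, k embedding dims
# (shapes are illustrative assumptions, not from the paper).
n, d, L, k = 200, 50, 10, 8
X = rng.random((n, d))
y = rng.integers(0, L, size=n)

def train_linear_embedding(X, y, k, steps=200, lr=0.05):
    """One linear-embedding round: score(x, l) = (U x) . V[l].

    Trained with softmax cross-entropy as a simple stand-in for the
    ranking losses of Wsabie/PSI.
    """
    n, d = X.shape
    U = 0.01 * rng.standard_normal((k, d))   # feature embedding
    V = 0.01 * rng.standard_normal((L, k))   # label embedding
    for _ in range(steps):
        E = X @ U.T                          # (n, k) embedded examples
        S = E @ V.T                          # (n, L) label scores
        P = np.exp(S - S.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)
        P[np.arange(n), y] -= 1.0            # gradient of loss w.r.t. S
        gV = P.T @ E / n
        gU = (P @ V).T @ X / n
        U -= lr * gU
        V -= lr * gV
    return U, V

# First iteration: plain linear embedding.
U, V = train_linear_embedding(X, y, k)

# Later iterations: reweight each example's features by an affinity
# derived from the previous model (one hypothetical variant of the
# reweighting family the paper describes).
for _ in range(2):
    scores = (X @ U.T) @ V.T
    affinity = scores[np.arange(n), y]        # score of the true label
    w = 1.0 / (1.0 + np.exp(-affinity))       # squash to (0, 1)
    Xw = X * w[:, None]                       # reweighted features
    U, V = train_linear_embedding(Xw, y, k)
```

Each round remains a linear embedding problem, so the scalability of the base model is preserved; only the input weighting changes between iterations.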

