SESA: Supervised Explicit Semantic Analysis

08/10/2017
by Dasha Bogdanova, et al.

In recent years, supervised representation learning has provided state-of-the-art or near state-of-the-art results in semantic analysis tasks, including ranking and information retrieval. The core idea is to learn to embed items into a latent space such that they optimize a supervised objective in that latent space. The dimensions of the latent space have no clear semantics, which reduces the interpretability of the system. For example, in personalization models, it is hard to explain why a particular item is ranked high for a given user profile. We propose a novel representation learning model called Supervised Explicit Semantic Analysis (SESA) that is trained in a supervised fashion to embed items into a set of dimensions with explicit semantics. The model learns to compare two objects by representing them in this explicit space, where each dimension corresponds to a concept from a knowledge base. This work extends Explicit Semantic Analysis (ESA) with a supervised model for ranking problems. We apply this model to the task of job-profile relevance at LinkedIn, where a set of skills defines the explicit dimensions of the space. Every profile and job is encoded into this set of skills, and their similarity is computed in this space. We use RNNs to embed text input into this space. In addition to interpretability, our model makes use of the web-scale collaborative skills data that users provide for each LinkedIn profile. Our model achieves state-of-the-art results while remaining interpretable.

research
04/05/2019

Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?

We propose an efficient algorithm to embed a given image into the latent...
research
05/19/2022

A Unified Collaborative Representation Learning for Neural-Network based Recommender Systems

Most NN-RSs focus on accuracy by building representations from the direc...
research
10/01/2021

Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders

This paper develops a novel unsupervised algorithm for belief representa...
research
08/26/2019

Complementary-Similarity Learning using Quadruplet Network

We propose a novel learning framework to answer questions such as "if a ...
research
09/11/2019

How to make latent factors interpretable by feeding Factorization machines with knowledge graphs

Model-based approaches to recommendation can recommend items with a very...
research
05/29/2018

Lightly-supervised Representation Learning with Global Interpretability

We propose a lightly-supervised approach for information extraction, in ...
research
05/02/2023

Learning Disentangled Semantic Spaces of Explanations via Invertible Neural Networks

Disentangling sentence representations over continuous spaces can be a c...
