Training with Multi-Layer Embeddings for Model Reduction

06/10/2020
by Benjamin Ghaemmaghami, et al.

Modern recommendation systems rely on real-valued embeddings of categorical features. Increasing the dimension of embedding vectors improves model accuracy but comes at a high cost in model size. We introduce a multi-layer embedding training (MLET) architecture that trains embeddings via a sequence of linear layers to achieve a superior trade-off between embedding accuracy and model size. Our approach rests on the ability of factorized linear layers to produce embeddings superior to those of a single linear layer. We focus on the analysis and implementation of a two-layer scheme. Drawing on recent results on the dynamics of backpropagation in linear neural networks, we explain the superiority of multi-layer embeddings via their tendency toward lower effective rank. We show that substantial advantages are obtained in the regime where the width of the hidden layer is much larger than that of the final embedding (d). Crucially, at the conclusion of training, we convert the two-layer solution into a single-layer one: as a result, the inference-time model size scales as d. We prototype the MLET scheme within Facebook's PyTorch-based open-source Deep Learning Recommendation Model. We show that MLET allows reducing d by 4-8X, with a corresponding improvement in memory footprint, at a given model accuracy. The experiments are run on two publicly available click-through-rate prediction benchmarks (Criteo-Kaggle and Avazu). The runtime overhead of MLET is 25% on average.
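The core mechanics described in the abstract can be illustrated with a small sketch (illustrative names and shapes only, not the authors' code): during training each embedding is the product of a wide inner layer of width k and a projection down to the final dimension d, with k much larger than d; after training the two matrices are folded into a single n-by-d table, so inference-time model size scales as d.

```python
# Hedged sketch of a two-layer (factorized) embedding, MLET-style.
# All names (W1, W2, train_embed) are hypothetical, for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_items, k, d = 1000, 64, 8  # vocabulary size, hidden width, final dim (k >> d)

# During training, the embedding table is factorized into two linear layers.
W1 = rng.standard_normal((n_items, k)) / np.sqrt(k)  # inner (wide) layer
W2 = rng.standard_normal((k, d)) / np.sqrt(d)        # projection to dim d

def train_embed(i):
    # Training-time lookup for item i: row of W1, then project through W2.
    return W1[i] @ W2

# At the conclusion of training, collapse the two layers into one table
# of shape (n_items, d); inference cost and model size depend only on d.
E = W1 @ W2

# Inference-time lookup is a plain table read and matches the two-layer result.
assert np.allclose(train_embed(42), E[42])
print(E.shape)
```

The point of the wide hidden layer is only to shape the training dynamics (lower effective rank of the learned embeddings); because the composition of two linear maps is itself linear, folding them loses nothing at inference time.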


