Towards Large Scale Training of Autoencoders for Collaborative Filtering

08/30/2018
by Abdallah Moussawi, et al.

In this paper, we apply a mini-batch based negative sampling method to efficiently train a latent factor autoencoder model on large-scale, sparse data for implicit feedback collaborative filtering. We compare our approach against a state-of-the-art baseline model on several experimental datasets and show that it yields a fast and close approximation of the baseline model's performance.
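The abstract does not spell out the sampling scheme, so the following is a minimal sketch, assuming that each training example is a user's sparse interaction row and that a user's negatives are drawn only from items observed elsewhere in the same mini-batch rather than from the full item catalogue; the class and function names (LatentFactorAutoencoder, train_step) are illustrative and not taken from the paper.

```python
import numpy as np
import torch
import torch.nn as nn


class LatentFactorAutoencoder(nn.Module):
    """Tiny latent factor autoencoder over a user's item-interaction vector."""

    def __init__(self, n_items: int, n_factors: int = 64):
        super().__init__()
        self.encoder = nn.Linear(n_items, n_factors)
        self.decoder = nn.Linear(n_factors, n_items)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.relu(self.encoder(x)))


def train_step(model, optimizer, batch_rows, n_items):
    """One training step: the loss is computed only over items that occur
    somewhere in the mini-batch, so each user's positives are contrasted
    against negatives sampled from the batch rather than the full catalogue."""
    # Union of item ids observed anywhere in this mini-batch.
    batch_items = np.unique(np.concatenate([np.asarray(r) for r in batch_rows]))
    col_of = {item: j for j, item in enumerate(batch_items)}

    x = torch.zeros(len(batch_rows), n_items)          # sparse user rows, densified
    target = torch.zeros(len(batch_rows), len(batch_items))
    for i, items in enumerate(batch_rows):
        x[i, list(items)] = 1.0
        target[i, [col_of[it] for it in items]] = 1.0  # positives; the rest act as negatives

    cols = torch.as_tensor(batch_items, dtype=torch.long)
    logits = model(x)[:, cols]                         # score only the batch items
    loss = nn.functional.binary_cross_entropy_with_logits(logits, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage: 3 users, 10 items; each row lists the user's observed item ids.
    model = LatentFactorAutoencoder(n_items=10, n_factors=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    batch = [[0, 3, 7], [1, 3], [2, 5, 9]]
    print(train_step(model, opt, batch, n_items=10))
```

Restricting the loss to batch items keeps each step's cost proportional to the number of distinct items in the mini-batch rather than to the full catalogue, which is what makes this kind of sampling attractive for large, sparse implicit-feedback matrices.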

Related research

12/10/2019 | Deep Latent Factor Model for Collaborative Filtering
Latent factor models have been used widely in collaborative filtering ba...

01/03/2023 | Multidimensional Item Response Theory in the Style of Collaborative Filtering
This paper presents a machine learning approach to multidimensional item...

01/18/2015 | Deep Belief Nets for Topic Modeling
Applying traditional collaborative filtering to digital publishing is ch...

10/26/2014 | A Ternary Non-Commutative Latent Factor Model for Scalable Three-Way Real Tensor Completion
Motivated by large-scale Collaborative-Filtering applications, we presen...

03/02/2016 | Hybrid Collaborative Filtering with Autoencoders
Collaborative Filtering aims at exploiting the feedback of users to prov...

09/26/2013 | One-class Collaborative Filtering with Random Graphs: Annotated Version
The bane of one-class collaborative filtering is interpreting and modell...

09/26/2021 | SimpleX: A Simple and Strong Baseline for Collaborative Filtering
Collaborative filtering (CF) is a widely studied research topic in recom...
