Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation

03/19/2021
by Zhe Xie, et al.

Sequential recommendation is an emerging topic that has attracted increasing attention due to its practical significance. Models based on deep learning and attention mechanisms have achieved strong performance in sequential recommendation. Recently, generative models based on the Variational Autoencoder (VAE) have shown unique advantages in collaborative filtering. In particular, the sequential VAE, a recurrent version of the VAE, can effectively capture temporal dependencies among the items in a user's sequence and perform sequential recommendation. However, VAE-based models share a common limitation: the representational ability of the learned approximate posterior distribution is limited, which lowers the quality of generated samples, and especially of generated sequences. To address this problem, we propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation. Specifically, we first introduce adversarial training for sequence generation under the Adversarial Variational Bayes (AVB) framework, which enables our model to generate high-quality latent variables. We then employ a contrastive loss; by minimizing it, the latent variables learn more personalized and salient characteristics. In addition, when encoding the sequence, we apply a recurrent and convolutional structure to capture both global and local relationships in the sequence. Finally, we conduct extensive experiments on four real-world datasets. The results show that the proposed ACVAE model outperforms other state-of-the-art methods.
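To make the contrastive component concrete, the sketch below shows an InfoNCE-style contrastive loss over a batch of per-user latent vectors: each latent is pulled toward its positive view and pushed away from the latents of other users in the batch. This is a minimal illustration of the general technique, not the paper's implementation; the function name, the temperature value, and the use of in-batch negatives are all assumptions.

```python
import numpy as np

def info_nce_loss(z, z_pos, temperature=0.5):
    """InfoNCE-style contrastive loss (illustrative, not ACVAE's exact loss).

    z, z_pos: (B, d) arrays of latent vectors. Row i of z_pos is the
    positive view for row i of z; every other row in the batch acts as
    a negative, which encourages per-user latents to stay distinct.
    """
    # Cosine similarities between all pairs, scaled by temperature.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_pos = z_pos / np.linalg.norm(z_pos, axis=1, keepdims=True)
    logits = z @ z_pos.T / temperature                      # (B, B)
    # Row-wise log-softmax; the positive pair sits on the diagonal.
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of picking the positive for each row.
    return -np.mean(np.diag(log_probs))
```

Minimizing this quantity maximizes agreement between the two views of the same user's latent while keeping different users' latents separated, which is the intuition behind using a contrastive loss to make latent variables more personalized and salient.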

Related research

09/09/2019  Neural Gaussian Copula for Variational Autoencoder
Variational language models seek to estimate the posterior of latent var...

04/01/2021  WakaVT: A Sequential Variational Transformer for Waka Generation
Poetry generation has long been a challenge for artificial intelligence....

02/12/2019  Contrastive Variational Autoencoder Enhances Salient Features
Variational autoencoders are powerful algorithms for identifying dominan...

02/06/2019  BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling
With the introduction of the variational autoencoder (VAE), probabilisti...

02/21/2022  Moment Matching Deep Contrastive Latent Variable Models
In the contrastive analysis (CA) setting, machine learning practitioners...

08/10/2021  Regularized Sequential Latent Variable Models with Adversarial Neural Networks
The recurrent neural networks (RNN) with richly distributed internal sta...

05/02/2019  Deep Generative Models for Sparse, High-dimensional, and Overdispersed Discrete Data
Many applications, such as text modelling, high-throughput sequencing, a...
