CL4CTR: A Contrastive Learning Framework for CTR Prediction

12/01/2022
by Fangye Wang, et al.

Many Click-Through Rate (CTR) prediction works have focused on designing advanced architectures to model complex feature interactions but have neglected the importance of feature representation learning, e.g., adopting a plain embedding layer for each feature, which leads to sub-optimal feature representations and thus inferior CTR prediction performance. For instance, low-frequency features, which account for the majority of features in many CTR tasks, are less considered in standard supervised learning settings, leading to sub-optimal feature representations. In this paper, we introduce self-supervised learning to produce high-quality feature representations directly and propose a model-agnostic Contrastive Learning for CTR (CL4CTR) framework consisting of three self-supervised learning signals that regularize feature representation learning: contrastive loss, feature alignment, and field uniformity. The contrastive module first constructs positive feature pairs by data augmentation and then minimizes the distance between the representations of each positive pair via the contrastive loss. The feature alignment constraint forces representations of features from the same field to be close, while the field uniformity constraint forces representations of features from different fields to be distant. Extensive experiments verify that CL4CTR achieves the best performance on four datasets and exhibits excellent effectiveness and compatibility with various representative baselines.
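The feature alignment and field uniformity constraints described above can be sketched in code. The following is a minimal pure-Python illustration of one plausible formulation (mean pairwise distance within fields, mean pairwise cosine similarity across fields), not the authors' implementation; the function names and exact pairwise definitions are assumptions for illustration:

```python
import math

def _sq_dist(a, b):
    # squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def alignment_loss(embeddings, field_ids):
    """Feature alignment: pull features of the same field together.

    Returns the mean squared distance over all same-field pairs;
    minimizing it makes same-field representations close.
    """
    total, count = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if field_ids[i] == field_ids[j]:
                total += _sq_dist(embeddings[i], embeddings[j])
                count += 1
    return total / count if count else 0.0

def uniformity_loss(embeddings, field_ids):
    """Field uniformity: push features of different fields apart.

    Returns the mean cosine similarity over all cross-field pairs;
    minimizing it makes representations from different fields distant.
    """
    total, count = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if field_ids[i] != field_ids[j]:
                total += _cosine(embeddings[i], embeddings[j])
                count += 1
    return total / count if count else 0.0

# Two fields with two features each: same-field features identical,
# cross-field features orthogonal, so both losses evaluate to zero.
emb = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
fields = [0, 0, 1, 1]
```

In a real training setup these terms would be added, with weighting coefficients, to the supervised CTR loss and the contrastive loss over augmented feature pairs.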
