Representation Learning by Ranking under multiple tasks

03/28/2021
by Lifeng Gu, et al.

In recent years, representation learning has become a central focus of the machine learning community, and large-scale pre-trained neural networks are widely regarded as a first step toward general intelligence. The success of neural networks rests on their ability to learn abstract representations of data. Several fields of learning study how to learn representations, yet a unified perspective has been lacking. We cast representation learning under multiple tasks as a ranking problem and, taking ranking as the unifying perspective, solve representation learning for the different tasks by optimizing an approximate NDCG loss. Experiments on diverse learning tasks, including classification, retrieval, multi-label learning, regression, and self-supervised learning, demonstrate the superiority of the approximate NDCG loss. Furthermore, in the self-supervised setting, transforming the training data with data augmentation improves the performance of the approximate NDCG loss, showing that it can make full use of the information in unlabeled training data.
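NDCG itself is non-differentiable because it depends on hard ranks, so ranking-based training typically substitutes a smooth surrogate. The abstract does not give the paper's exact formulation; the sketch below illustrates one common ApproxNDCG-style construction, in which each item's rank is replaced by a sigmoid-smoothed estimate so that 1 − NDCG can be minimized by gradient descent. The function name and the temperature parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def approx_ndcg_loss(scores, relevance, temperature=0.1):
    """Smooth surrogate for 1 - NDCG (an ApproxNDCG-style sketch).

    The hard rank of item i is approximated by
        rank_i ~= 1 + sum_{j != i} sigmoid((s_j - s_i) / temperature),
    which recovers the true rank as temperature -> 0 while staying
    differentiable in the scores.
    """
    s = np.asarray(scores, dtype=float)
    rel = np.asarray(relevance, dtype=float)

    # diff[i, j] = (s_j - s_i) / T; sigmoid counts items scored above i.
    diff = (s[None, :] - s[:, None]) / temperature
    sig = 1.0 / (1.0 + np.exp(-diff))
    np.fill_diagonal(sig, 0.0)
    approx_rank = 1.0 + sig.sum(axis=1)

    # DCG with exponential gains and the smoothed ranks.
    gains = 2.0 ** rel - 1.0
    dcg = np.sum(gains / np.log2(1.0 + approx_rank))

    # Ideal DCG: gains sorted in decreasing order at hard ranks 1..n.
    ideal = np.sort(gains)[::-1]
    idcg = np.sum(ideal / np.log2(2.0 + np.arange(len(ideal))))

    return 1.0 - dcg / idcg
```

With a small temperature, scores that already order items by relevance give a loss near zero, while a reversed ordering is penalized; minimizing the surrogate therefore pushes the learned representation's similarity scores toward the relevance-induced ranking.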


