COAD: Contrastive Pre-training with Adversarial Fine-tuning for Zero-shot Expert Linking

12/14/2020
by Bo Chen, et al.

Expert finding, a popular service provided by many online websites such as Expertise Finder, LinkedIn, and AMiner, benefits the search for consultants, collaborators, and qualified candidates. However, its quality suffers when only a single source of supporting information about each expert is available. This paper employs AMiner, a free online academic search and mining system that has collected more than 100 million researcher profiles together with 200 million papers from multiple publication databases, as the basis for investigating the problem of expert linking, which aims to link any external information about persons to experts in AMiner. A critical challenge is how to perform zero-shot expert linking without any labeled linkages from the external information to AMiner experts, as it is infeasible to acquire sufficient labels for arbitrary external sources. Inspired by the success of self-supervised learning in computer vision and natural language processing, we propose COAD, a self-supervised expert-linking model that is first pre-trained by contrastive learning on AMiner data to capture the common representation and matching patterns of experts across AMiner and external sources, and is then fine-tuned by adversarial learning on AMiner and the unlabeled external sources to improve the model's transferability. Experimental results demonstrate that COAD significantly outperforms various baselines without contrastive learning of experts on two widely studied downstream tasks: author identification and paper clustering (improving up to 32.1% and 14.8%, respectively). Expert linking on external sources also indicates the superiority of the proposed adversarial fine-tuning method compared with other domain adaptation approaches (improving up to 2.3% in HitRatio@1).
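The abstract only names the two training stages, so the sketch below is a rough illustration (not the authors' released implementation) of how "contrastive pre-training + adversarial fine-tuning" can be realized: an InfoNCE-style contrastive loss over two views of the same AMiner experts, followed by a gradient-reversal domain discriminator that pushes AMiner and external-source embeddings to become indistinguishable. The encoder architecture, dimensions, and hyperparameters (ExpertEncoder, embed_dim, tau, lambda_adv) are assumptions for the sketch.

```python
# Minimal sketch of the two stages described in the abstract; all names and
# hyperparameters below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertEncoder(nn.Module):
    """Stand-in encoder mapping an expert's feature vector to a unit embedding."""
    def __init__(self, in_dim=300, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def contrastive_loss(anchor, positive, tau=0.1):
    """InfoNCE-style loss: each anchor should match its own positive view
    against all other positives in the batch (in-batch negatives)."""
    logits = anchor @ positive.t() / tau       # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0))      # diagonal entries are the true pairs
    return F.cross_entropy(logits, labels)


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer commonly used for adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_adv * grad_output, None


def adversarial_step(encoder, discriminator, aminer_x, external_x, lambda_adv=0.1):
    """Fine-tuning step: the discriminator tries to tell AMiner embeddings from
    external-source embeddings, while the reversed gradient pushes the encoder
    toward domain-invariant representations."""
    feats = torch.cat([encoder(aminer_x), encoder(external_x)], dim=0)
    domain_labels = torch.cat([torch.zeros(len(aminer_x)), torch.ones(len(external_x))]).long()
    logits = discriminator(GradReverse.apply(feats, lambda_adv))
    return F.cross_entropy(logits, domain_labels)


if __name__ == "__main__":
    encoder = ExpertEncoder()
    discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(discriminator.parameters()), lr=1e-3)

    # Stage 1: contrastive pre-training on two "views" of the same AMiner experts
    # (random tensors stand in for real expert features here).
    view_a, view_b = torch.randn(32, 300), torch.randn(32, 300)
    loss = contrastive_loss(encoder(view_a), encoder(view_b))

    # Stage 2: adversarial fine-tuning toward an unlabeled external source.
    loss = loss + adversarial_step(encoder, discriminator, torch.randn(32, 300), torch.randn(32, 300))
    loss.backward()
    opt.step()
```

In practice the two stages would run sequentially rather than in one combined loss; they are summed here only to keep the toy example compact.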


research
07/30/2022

Improving Fine-tuning of Self-supervised Models with Contrastive Initialization

Self-supervised learning (SSL) has achieved remarkable performance in pr...
research
12/14/2022

Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders

Deep neural networks have been successfully adopted to diverse domains i...
research
02/12/2021

Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning

Contrastive self-supervised learning (CSL) leverages unlabeled data to t...
research
10/02/2020

Long-Tail Zero and Few-Shot Learning via Contrastive Pretraining on and for Small Data

For natural language processing (NLP) tasks such as sentiment or topic c...
research
10/08/2021

RPT: Toward Transferable Model on Heterogeneous Researcher Data via Pre-Training

With the growth of the academic engines, the mining and analysis acquisi...
research
06/01/2022

Task-Specific Expert Pruning for Sparse Mixture-of-Experts

The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pr...
research
03/21/2023

CLIP-ReIdent: Contrastive Training for Player Re-Identification

Sports analytics benefits from recent advances in machine learning provi...
