Continually Detect, Rapidly React: Unseen Rumors Detection based on Continual Prompt-Tuning

03/16/2022
by Yuhui Zuo, et al.

Since open social platforms allow for a large and continuous flow of unverified information, rumors can emerge unexpectedly and spread quickly. However, existing rumor detection (RD) models often assume that training and test data share the same distribution and therefore cannot cope with a continuously changing social-network environment. This paper proposes a Continual Prompt-Tuning RD (CPT-RD) framework that avoids catastrophic forgetting of upstream tasks during sequential task learning and enables knowledge transfer between domain tasks. To avoid forgetting, we optimize and store a task-specific soft prompt for each domain. We further propose several strategies for transferring knowledge from upstream tasks to cope with newly emerging rumors, along with a task-conditioned prompt-wise hypernetwork (TPHNet) that consolidates past domains and enables bidirectional knowledge transfer. Finally, CPT-RD is evaluated on English and Chinese RD datasets and proves effective and efficient compared with state-of-the-art baselines, without any data-replay techniques and while tuning only a small number of parameters.
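To make the mechanism concrete, below is a minimal PyTorch sketch (not the authors' released implementation) of the two ideas the abstract describes: a frozen backbone with one trainable soft prompt stored per rumor domain, and a small hypernetwork that generates prompts from a task embedding, loosely analogous to TPHNet. All class names, dimensions, and the toy Transformer encoder are illustrative assumptions.

```python
# Minimal sketch of continual prompt-tuning, assuming a frozen backbone,
# per-domain soft prompts, and a toy task-conditioned prompt generator.
# Class names (SoftPromptPool, TaskPromptHyperNet) are hypothetical.
import torch
import torch.nn as nn


class SoftPromptPool(nn.Module):
    """Stores one trainable soft prompt per domain (task)."""

    def __init__(self, num_domains: int, prompt_len: int, hidden: int):
        super().__init__()
        self.prompts = nn.ParameterDict({
            str(d): nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
            for d in range(num_domains)
        })

    def forward(self, domain_id: int) -> torch.Tensor:
        return self.prompts[str(domain_id)]


class PromptTunedClassifier(nn.Module):
    """Frozen encoder; only prompts and the classification head are tuned."""

    def __init__(self, encoder: nn.Module, pool: SoftPromptPool,
                 hidden: int, num_classes: int = 2):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # backbone stays frozen across all domains
        self.pool = pool
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, token_embeds: torch.Tensor, domain_id: int) -> torch.Tensor:
        # Prepend the domain-specific soft prompt to the token embeddings.
        bsz = token_embeds.size(0)
        prompt = self.pool(domain_id).unsqueeze(0).expand(bsz, -1, -1)
        x = torch.cat([prompt, token_embeds], dim=1)
        h = self.encoder(x)                # (bsz, prompt_len + seq_len, hidden)
        return self.head(h.mean(dim=1))    # mean-pool, then classify


class TaskPromptHyperNet(nn.Module):
    """Illustrative stand-in for a TPHNet-style generator: maps a learned
    task embedding to a prompt, so one shared network consolidates domains."""

    def __init__(self, num_domains: int, task_dim: int,
                 prompt_len: int, hidden: int):
        super().__init__()
        self.task_embed = nn.Embedding(num_domains, task_dim)
        self.gen = nn.Sequential(
            nn.Linear(task_dim, 4 * task_dim), nn.ReLU(),
            nn.Linear(4 * task_dim, prompt_len * hidden))
        self.prompt_len, self.hidden = prompt_len, hidden

    def forward(self, domain_id: int) -> torch.Tensor:
        e = self.task_embed(torch.tensor([domain_id]))
        return self.gen(e).view(self.prompt_len, self.hidden)


if __name__ == "__main__":
    hidden, plen = 64, 8
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
        num_layers=2)
    pool = SoftPromptPool(num_domains=3, prompt_len=plen, hidden=hidden)
    model = PromptTunedClassifier(encoder, pool, hidden)

    tokens = torch.randn(2, 16, hidden)  # stand-in for PLM token embeddings
    logits = model(tokens, domain_id=0)  # (2, 2)

    # Training domain 0 touches only its own prompt and the head, so the
    # prompts stored for earlier domains are never overwritten.
    optim = torch.optim.Adam(
        list(model.head.parameters()) + [pool.prompts["0"]], lr=1e-3)
```

Because each domain's prompt is a small, isolated parameter set and the backbone never changes, sequential training cannot overwrite what earlier domains learned, which is one plausible reading of how the framework sidesteps catastrophic forgetting without data replay.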
