S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning

07/26/2022
by Yabin Wang, et al.

State-of-the-art deep neural networks still struggle to address the catastrophic forgetting problem in continual learning. In this paper, we propose one simple paradigm (named S-Prompting) and two concrete approaches to greatly reduce the degree of forgetting in one of the most typical continual learning scenarios, i.e., domain incremental learning (DIL). The key idea of the paradigm is to learn prompts independently across domains with pre-trained transformers, avoiding the use of exemplars that commonly appear in conventional methods. This results in a win-win game where the prompting can achieve the best for each domain. The independent prompting across domains requires only a single cross-entropy loss for training and a simple K-NN operation as a domain identifier for inference. The learning paradigm derives an image prompt learning approach and a novel language-image prompt learning approach. Owing to its excellent scalability (a 0.03% parameter increase per domain), the best of our approaches achieves a remarkable relative improvement (an average of about 30%) over the best of the state-of-the-art exemplar-free methods on three standard DIL tasks, and even surpasses the best of them relatively by about 6% when they use exemplars.
