CoReD: Generalizing Fake Media Detection with Continual Representation using Distillation

07/06/2021
by Minha Kim, et al.

Over the last few decades, artificial intelligence research has made tremendous strides, but it still relies heavily on fixed datasets in stationary environments. Continual learning is a growing field of research that examines how AI systems can learn sequentially from a continuous stream of linked data, much as biological systems do. Simultaneously, fake media such as deepfake videos and GAN-generated synthetic face images have emerged as a significant threat to current multimedia technologies. Recently, numerous methods have been proposed that can detect deepfakes with high accuracy. However, they suffer significantly from their reliance on fixed datasets and limited evaluation settings. Therefore, in this work, we apply continual learning to neural networks' learning dynamics, emphasizing its potential to significantly increase data efficiency. We propose the Continual Representation using Distillation (CoReD) method, which employs the concepts of Continual Learning (CL), Representation Learning (RL), and Knowledge Distillation (KD). We design CoReD to perform sequential domain adaptation tasks on new deepfake and GAN-generated synthetic face datasets, while effectively minimizing catastrophic forgetting in a teacher-student model setting. Our extensive experimental results demonstrate that our method is effective at domain adaptation for detecting low-quality deepfake videos and GAN-generated images from several datasets, outperforming state-of-the-art baseline methods.
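To make the teacher-student distillation idea concrete, the sketch below shows one common way such a loss can be formed: soft predictions from a teacher trained on earlier domains regularize a student adapting to a new domain, alongside the standard classification loss on the new labels. This is a minimal illustration assuming PyTorch; the function and parameter names (`distillation_loss`, `temperature`, `alpha`) are hypothetical and do not reflect CoReD's actual implementation.

```python
# Hypothetical sketch of a teacher-student distillation loss for continual
# domain adaptation; PyTorch assumed, names illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-label distillation from a frozen teacher with
    cross-entropy on the new (target-domain) real/fake labels."""
    # Soft targets from the teacher trained on the previous domain(s).
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence keeps the student close to the teacher's predictions,
    # mitigating catastrophic forgetting of earlier deepfake domains.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * (temperature ** 2)
    # Cross-entropy on the new domain's labels drives adaptation.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

In such a setup, the student trained on the new dataset then typically becomes the teacher for the next domain in the sequence.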
