ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images

12/07/2021
by Binh M. Le, et al.

Despite significant advances in deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation on low-quality compressed deepfake images. Because low-quality images carry limited information, detecting low-quality deepfakes remains an important challenge. In this work, we apply frequency-domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We exploit the transfer-learning capability of KD to enable a student network to learn discriminative features from low-quality images effectively. In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation, which effectively retrieves the high-frequency components removed by compression in the student network, and 2) multi-view attention distillation, which creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently. Our extensive experimental results demonstrate that our approach outperforms state-of-the-art baselines in detecting low-quality compressed deepfake images.
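The abstract names ADD's two loss terms but gives no formulas, so the following is only a minimal PyTorch sketch of the two ideas under stated assumptions: teacher and student feature maps are taken to share the shape (B, C, H, W), the helper names (frequency_attention_loss, view_attention, multi_view_attention_loss) are hypothetical, and a KL divergence stands in for the optimal-transport matching the abstract invokes.

```python
import torch
import torch.nn.functional as F

def frequency_attention_loss(t_feat: torch.Tensor,
                             s_feat: torch.Tensor) -> torch.Tensor:
    """Frequency attention distillation (sketch).

    t_feat, s_feat: (B, C, H, W) feature maps from teacher and student.
    The student is pulled toward the teacher's frequency-domain attention,
    encouraging it to recover high-frequency content lost to compression.
    """
    # Magnitude spectra over the spatial dimensions.
    t_spec = torch.fft.fft2(t_feat, norm="ortho").abs()
    s_spec = torch.fft.fft2(s_feat, norm="ortho").abs()

    # Channel-pooled attention over frequencies, normalized to distributions.
    t_att = t_spec.mean(dim=1).flatten(1)
    s_att = s_spec.mean(dim=1).flatten(1)
    t_att = t_att / (t_att.sum(dim=1, keepdim=True) + 1e-8)
    s_att = s_att / (s_att.sum(dim=1, keepdim=True) + 1e-8)
    return F.l1_loss(s_att, t_att)

def view_attention(feat: torch.Tensor, dim: int) -> torch.Tensor:
    """Attention vector for one view: pool every axis except the batch
    axis and `dim`, then normalize with a softmax."""
    reduce_dims = [d for d in range(1, feat.dim()) if d != dim]
    return F.softmax(feat.abs().mean(dim=reduce_dims), dim=1)

def multi_view_attention_loss(t_feat: torch.Tensor,
                              s_feat: torch.Tensor) -> torch.Tensor:
    """Multi-view attention distillation (sketch): slice the tensors into
    channel, height, and width views and match the resulting attention
    distributions. A KL divergence stands in here; the paper grounds the
    matching in optimal transport instead."""
    loss = t_feat.new_zeros(())
    for dim in (1, 2, 3):  # channel, height, width views
        t_att = view_attention(t_feat, dim)
        s_att = view_attention(s_feat, dim)
        loss = loss + F.kl_div(s_att.log(), t_att, reduction="batchmean")
    return loss
```

In a KD training loop, these two terms would plausibly be added, with tunable weights, to the student's ordinary classification loss on low-quality compressed inputs; the weighting and the choice of feature layers are likewise assumptions, not details from the paper.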


Related research

01/21/2021 · Collaborative Teacher-Student Learning via Multiple Knowledge Transfer
Knowledge distillation (KD), as an efficient and effective model compres...

10/18/2022 · On effects of Knowledge Distillation on Transfer Learning
Knowledge distillation is a popular machine learning technique that aims...

03/08/2022 · PyNET-QxQ: A Distilled PyNET for QxQ Bayer Pattern Demosaicing in CMOS Image Sensor
The deep learning-based ISP models for mobile cameras produce high-quali...

08/22/2022 · Multi-View Attention Transfer for Efficient Speech Enhancement
Recent deep learning models have achieved high performance in speech enh...

02/22/2021 · Multi-View Feature Representation for Dialogue Generation with Bidirectional Distillation
Neural dialogue models suffer from low-quality responses when interacted...

03/08/2019 · Everything old is new again: A multi-view learning approach to learning using privileged information and distillation
We adopt a multi-view approach for analyzing two knowledge transfer sett...

01/31/2022 · Deep-Disaster: Unsupervised Disaster Detection and Localization Using Visual Data
Social media plays a significant role in sharing essential information, ...
