Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation

11/25/2018
by Shiming Ge, et al.

Deploying face recognition models in the wild typically requires identifying low-resolution faces at extremely low computational cost. A feasible solution to this problem is to compress a complex face model, achieving higher speed and a lower memory footprint with minimal performance drop. Motivated by this, this paper proposes a learning approach that recognizes low-resolution faces via selective knowledge distillation. In this approach, a two-stream convolutional neural network (CNN) is first initialized to recognize high-resolution faces and resolution-degraded faces with a teacher stream and a student stream, respectively. The teacher stream is a complex CNN for high-accuracy recognition, while the student stream is a much simpler CNN for low-complexity recognition. To avoid a significant performance drop at the student stream, we then selectively distill the most informative facial features from the teacher stream by solving a sparse graph optimization problem; these features are then used to regularize the fine-tuning of the student stream. In this way, the student stream is trained to handle two tasks simultaneously with limited computational resources: approximating the most informative facial cues via feature regression, and recovering the missing facial cues via low-resolution face classification. Experimental results show that the student stream performs impressively on low-resolution face recognition while requiring only 0.15 MB of memory and running at 418 faces per second on a CPU and 9,433 faces per second on a GPU.
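To make the two-task training objective concrete, the sketch below (PyTorch) combines a feature-regression loss against selected teacher features with a standard classification loss on low-resolution faces. This is a minimal illustration, not the authors' implementation: all module names, dimensions, the loss weight lam, and the stand-in feature-selection rule are assumptions; in the paper the informative features are chosen by sparse graph optimization.

```python
# Minimal sketch of the student-stream objective described in the abstract:
# the student (1) regresses the most informative teacher features and
# (2) classifies low-resolution faces. All names, dimensions, and the
# weight `lam` are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentStream(nn.Module):
    """A deliberately small CNN for low-resolution (e.g., 32x32) faces."""
    def __init__(self, feat_dim=128, num_ids=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed = nn.Linear(128, feat_dim)   # feature that mimics the teacher
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        f = self.embed(self.backbone(x))
        return f, self.classifier(f)

def distillation_loss(student_feat, student_logits, teacher_feat, labels,
                      selected_idx, lam=1.0):
    """Feature regression on selected teacher dimensions + classification.

    `selected_idx` indexes the teacher-feature components judged most
    informative (chosen via sparse graph optimization in the paper;
    here it is simply assumed to be given).
    """
    reg = F.mse_loss(student_feat[:, selected_idx], teacher_feat[:, selected_idx])
    cls = F.cross_entropy(student_logits, labels)
    return cls + lam * reg

# Usage (shapes only): teacher features come from the frozen high-resolution
# stream applied to the high-resolution counterparts of the same faces.
student = StudentStream()
lr_faces = torch.randn(8, 3, 32, 32)
teacher_feat = torch.randn(8, 128)             # frozen teacher embeddings
labels = torch.randint(0, 1000, (8,))
selected_idx = torch.topk(teacher_feat.abs().mean(0), 64).indices  # stand-in selection
feat, logits = student(lr_faces)
loss = distillation_loss(feat, logits, teacher_feat, labels, selected_idx)
loss.backward()
```

Note the division of labor in the loss: the regression term transfers the teacher's selected high-resolution cues, while the classification term forces the student to recover cues that are simply absent from the degraded input.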


Related research:

09/29/2022 - Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition
06/03/2019 - Deep Face Recognition Model Compression via Knowledge Transfer and Distillation
05/26/2019 - Cross-Resolution Face Recognition via Prior-Aided Face Hallucination and Residual Knowledge Distillation
11/20/2021 - Teacher-Student Training and Triplet Loss to Reduce the Effect of Drastic Face Occlusion
08/18/2023 - CCFace: Classification Consistency for Low-Resolution Face Recognition
04/09/2019 - Ultrafast Video Attention Prediction with Coupled Knowledge Distillation
08/03/2020 - Teacher-Student Training and Triplet Loss for Facial Expression Recognition under Occlusion
