Teacher-Student Asynchronous Learning with Multi-Source Consistency for Facial Landmark Detection

12/12/2020
by Rongye Meng, et al.

Because large-scale facial landmark detection in videos is costly to annotate, researchers have proposed semi-supervised paradigms that use self-training to mine high-quality pseudo-labels for training. However, self-training methods typically train with a gradually growing sample set, and their performance varies considerably with the number of pseudo-labeled samples added. In this paper, we propose a teacher-student asynchronous learning (TSAL) framework based on a multi-source supervision-signal consistency criterion, which mines pseudo-labels implicitly through consistency constraints. Specifically, the TSAL framework contains two models with identical structure: a radical student that updates its parameters from multi-source supervision signals for the same task, and a calm teacher that updates its parameters from a single-source supervision signal. To absorb the student's suggestions in a controlled way, the teacher's parameters are updated again through recursive average filtering. Experimental results show that the asynchronous-learning framework effectively filters noise in multi-source supervision signals, thereby mining the pseudo-labels that matter most for updating network parameters. Extensive experiments on the 300W, AFLW, and 300VW benchmarks show that the TSAL framework achieves state-of-the-art performance.
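The abstract does not spell out the recursive average filter, but a common instantiation of this idea (as in mean-teacher methods) is an exponential moving average of the student's parameters into the teacher's. The sketch below is a hypothetical illustration under that assumption; the function name `ema_update` and the smoothing coefficient `alpha` are not from the paper.

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """One recursive-average step: pull teacher parameters toward the
    student's. alpha close to 1 makes the teacher change slowly ("calm"),
    damping noise from the student's multi-source updates.

    This is an assumed exponential-moving-average form of the paper's
    "recursive average filtering"; the exact update may differ.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# Toy usage: two scalar "parameters".
teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher)  # teacher moves a small step toward the student
```

With `alpha=0.9` each call keeps 90% of the old teacher value, so abrupt swings in the student (e.g. from a noisy pseudo-label) only nudge the teacher slightly.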


