Adaptive Feature Fusion Network for Gaze Tracking in Mobile Tablets

03/20/2021
by Yiwei Bao, et al.

Many multi-stream gaze estimation methods have recently been proposed. They estimate gaze from eye and face appearance and achieve reasonable accuracy. However, most of these methods simply concatenate the features extracted from the eye and face appearance, ignoring the feature fusion process. In this paper, we propose a novel Adaptive Feature Fusion Network (AFF-Net) that performs gaze tracking on mobile tablets. We stack two-eye feature maps and use Squeeze-and-Excitation layers to adaptively fuse the two eyes' features according to their appearance similarity. We also propose Adaptive Group Normalization to recalibrate eye features under the guidance of facial features. Extensive experiments on both the GazeCapture and MPIIFaceGaze datasets demonstrate the consistently superior performance of the proposed method.
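The two fusion mechanisms named in the abstract can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the weight shapes, the reduction ratio, the group count, and the idea of predicting the scale/shift of Adaptive Group Normalization from the face feature via linear maps are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_fuse(eye_maps, w1, w2):
    """Squeeze-and-Excitation over stacked two-eye feature maps.
    eye_maps: (C, H, W) stacked left/right eye features.
    w1: (C//r, C) and w2: (C, C//r) are hypothetical excitation weights."""
    # Squeeze: global average pool -> one descriptor per channel
    z = eye_maps.mean(axis=(1, 2))                     # (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gates in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))          # (C,)
    # Recalibrate: channel-wise reweighting of the stacked maps
    return eye_maps * s[:, None, None]

def adaptive_group_norm(x, face_feat, w_gamma, w_beta, groups=4, eps=1e-5):
    """Group-normalize eye features; scale and shift are predicted
    from the facial feature vector (a sketch of the AdaGN idea)."""
    C, H, W = x.shape
    g = x.reshape(groups, C // groups, H, W)
    mu = g.mean(axis=(1, 2, 3), keepdims=True)
    var = g.var(axis=(1, 2, 3), keepdims=True)
    x_norm = ((g - mu) / np.sqrt(var + eps)).reshape(C, H, W)
    gamma = w_gamma @ face_feat                        # (C,) predicted scale
    beta = w_beta @ face_feat                          # (C,) predicted shift
    return x_norm * gamma[:, None, None] + beta[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r, D = 8, 4, 4, 2, 16                         # toy sizes, not the paper's
eyes = rng.standard_normal((C, H, W))
fused = se_fuse(eyes,
                rng.standard_normal((C // r, C)),
                rng.standard_normal((C, C // r)))
face = rng.standard_normal(D)
out = adaptive_group_norm(fused, face,
                          rng.standard_normal((C, D)),
                          rng.standard_normal((C, D)))
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid gates lie strictly in (0, 1), the SE step only attenuates channels; the network can thereby down-weight the lower-quality eye when the two eye appearances differ.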

Related research

01/01/2020  A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation
  Human gaze is essential for various appealing applications. Aiming at mo...

11/15/2017  Robust Real-Time Multi-View Eye Tracking
  Despite significant advances in improving the gaze estimation accuracy u...

03/18/2019  Appearance-Based Gaze Estimation Using Dilated-Convolutions
  Appearance-based gaze estimation has attracted more and more attention b...

05/22/2023  Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation
  Appearance-based gaze estimation has been actively studied in recent yea...

04/12/2023  Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models
  Recent work in XAI for eye tracking data has evaluated the suitability o...

12/05/2021  Improving Intention Detection in Single-Trial Classification through Fusion of EEG and Eye-tracker Data
  Intention decoding is an indispensable procedure in hands-free human-com...

05/24/2019  Overt visual attention on rendered 3D objects
  This work covers multiple aspects of overt visual attention on 3D render...
