Towards Interpretable and Robust Hand Detection via Pixel-wise Prediction

01/13/2020
by   Dan Liu, et al.

The lack of interpretability of existing CNN-based hand detection methods makes it difficult to understand the rationale behind their predictions. In this paper, we propose a novel neural network model that introduces interpretability into hand detection for the first time. The main contributions are: (1) Hands are detected at the pixel level, so the model can explain which pixels form the basis of its decision, improving transparency. (2) An explainable Highlight Feature Fusion block highlights distinctive features across multiple layers and learns discriminative ones for robust performance. (3) A transparent representation, the rotation map, learns rotation features in place of complex, non-transparent rotation and derotation layers. (4) Auxiliary supervision accelerates training, saving more than 10 hours in our experiments. Experimental results on the VIVA and Oxford hand detection and tracking datasets show that our method achieves accuracy competitive with state-of-the-art methods at higher speed.
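To make the pixel-wise formulation concrete, below is a minimal PyTorch-style sketch of a prediction head that emits a per-pixel hand confidence map together with a two-channel rotation map. It assumes the rotation map encodes each pixel's orientation as (cos θ, sin θ), one common transparent parameterisation; this is not the authors' implementation, and names such as PixelwiseHandHead are hypothetical.

# Hypothetical sketch (not the paper's code): a pixel-wise head producing a
# per-pixel hand confidence map and a two-channel rotation map.
# Assumption: the rotation map stores (cos theta, sin theta) per pixel.
import torch
import torch.nn as nn

class PixelwiseHandHead(nn.Module):
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1 channel: probability that each pixel belongs to a hand.
        self.score = nn.Conv2d(128, 1, kernel_size=1)
        # 2 channels: per-pixel (cos, sin) of the hand's in-plane rotation.
        self.rotation = nn.Conv2d(128, 2, kernel_size=1)

    def forward(self, features: torch.Tensor):
        x = self.shared(features)
        score_map = torch.sigmoid(self.score(x))            # (N, 1, H, W)
        rot = self.rotation(x)                               # (N, 2, H, W)
        # Normalise so each pixel's (cos, sin) lies on the unit circle.
        rot_map = rot / (rot.norm(dim=1, keepdim=True) + 1e-6)
        return score_map, rot_map

if __name__ == "__main__":
    head = PixelwiseHandHead(in_channels=256)
    feats = torch.randn(1, 256, 64, 64)                      # fused backbone features
    score_map, rot_map = head(feats)
    print(score_map.shape, rot_map.shape)                    # (1, 1, 64, 64) and (1, 2, 64, 64)

In such a setup, the score map directly exposes which pixels drive the detection, while the rotation map keeps orientation estimation as an explicit, inspectable output rather than hiding it inside rotation/derotation layers.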


