DARE: AI-based Diver Action Recognition System using Multi-Channel CNNs for AUV Supervision

11/16/2020
by   Jing Yang, et al.

With the growth of sensing, control, and robotic technologies, autonomous underwater vehicles (AUVs) have become useful assistants to human divers for performing various underwater operations. In current practice, divers must carry expensive, bulky, waterproof keyboards or joystick-based controllers to supervise and control AUVs. Diver action-based supervision is therefore becoming increasingly popular because it is convenient, easier to use, faster, and cost effective. However, the various environmental, diver, and sensing uncertainties present underwater make it challenging to train a robust and reliable diver action recognition system. In this regard, this paper presents DARE, a diver action recognition system trained on the Cognitive Autonomous Diving Buddy (CADDY) dataset, a rich dataset containing images of different diver gestures and poses in several different and realistic underwater environments. DARE fuses stereo pairs of camera images using a multi-channel convolutional neural network, supported by a systematically trained tree-topological deep neural network classifier that enhances classification performance. DARE is fast, requiring only a few milliseconds to classify one stereo pair, which makes it suitable for real-time underwater implementation. DARE is comparatively evaluated against several existing classifier architectures, and the results show that it surpasses all of them for diver action recognition in terms of overall and individual class accuracies as well as F1-scores.
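The abstract does not give the network's layer details, but the core idea of multi-channel stereo fusion can be sketched: the left and right RGB images of a stereo pair are stacked along the channel axis so that the first convolution of a CNN sees both views jointly and can learn cross-view features. A minimal illustration of this stacking step (function name and shapes are assumptions, not from the paper):

```python
import numpy as np

def fuse_stereo_pair(left, right):
    """Stack an RGB stereo pair into one 6-channel input tensor.

    left, right: (H, W, 3) arrays from the two cameras.
    Returns an (H, W, 6) array that a multi-channel CNN's first
    convolutional layer (with in_channels=6) could consume directly.
    """
    assert left.shape == right.shape and left.shape[-1] == 3
    return np.concatenate([left, right], axis=-1)

# Toy example: a 4x4 stereo pair fused into one 6-channel tensor.
left = np.zeros((4, 4, 3))
right = np.ones((4, 4, 3))
fused = fuse_stereo_pair(left, right)
print(fused.shape)  # (4, 4, 6)
```

In a framework such as PyTorch or TensorFlow, this simply means setting the input-channel count of the first convolution to 6 instead of 3; the paper's actual fusion scheme may differ (e.g. separate per-view branches merged later).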


research · 09/26/2017
Gesture-based Human-robot Interaction for Field Programmable Autonomous Underwater Robots
The uncertainty and variability of underwater environment propose the re...

research · 07/12/2018
CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities
In this article we present a novel underwater dataset collected from sev...

research · 04/03/2018
Robotic Detection of Marine Litter Using Deep Visual Detection Models
Trash deposits in aquatic environments have a destructive effect on mari...

research · 08/02/2020
Vision and Inertial Sensing Fusion for Human Action Recognition: A Review
Human action recognition is used in many applications such as video surv...

research · 09/30/2022
Application-Driven AI Paradigm for Human Action Recognition
Human action recognition in computer vision has been widely studied in r...

research · 10/29/2018
ActionXPose: A Novel 2D Multi-view Pose-based Algorithm for Real-time Human Action Recognition
We present ActionXPose, a novel 2D pose-based algorithm for posture-leve...

research · 10/16/2018
Robust Gesture-Based Communication for Underwater Human-Robot Interaction in the Context of Search and Rescue Diver Missions
We propose a robust gesture-based communication pipeline for divers to i...
