Referencing between a Head-Mounted Device and Robotic Manipulators

04/04/2019
by David Puljiz, et al.

A precise and robust transformation between the robot coordinate system and the coordinate system of the augmented-reality (AR) device is paramount for human-robot interaction (HRI) based on head-mounted displays (HMDs), both for intuitive information display and for tracking human motions. Most current solutions in this area rely either on the tracking of visual markers, e.g. QR codes, or on manual referencing, and both provide unsatisfactory results. Meanwhile, a plethora of object detection and referencing methods exists in the wider robotics and machine-vision communities, yet the precision of the referencing is almost never measured. Here we address this issue by first presenting an overview of referencing methods currently used between robots and HMDs, followed by a brief overview of object detection and referencing methods used in the field of robotics. Based on these methods, we propose three classes of referencing algorithms that we intend to pursue: semi-automatic, one-shot; automatic, one-shot; and automatic, continuous. We describe the general workflow of each class as well as our proposed algorithms within it. Finally, we present the first experimental results of a semi-automatic referencing algorithm, tested on an industrial KUKA KR-5 manipulator.
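At the core of any such referencing approach is the recovery of a rigid transform between the robot base frame and the HMD frame. As a minimal sketch of how such a transform can be estimated once point correspondences are available (e.g., joint or feature positions known in the robot frame via forward kinematics and the same points indicated in the HMD's spatial map during semi-automatic referencing), the snippet below applies a Kabsch/Umeyama-style least-squares fit. The function names, data source, and workflow are illustrative assumptions, not the specific algorithm evaluated in the paper.

```python
# Sketch: estimate the rigid transform (rotation R, translation t) mapping points
# expressed in the robot base frame into the HMD (AR-device) frame, given N >= 3
# corresponding, non-collinear 3D points. Illustrative assumption: correspondences
# come from robot features localized both by forward kinematics and in the HMD map.
import numpy as np

def estimate_rigid_transform(p_robot: np.ndarray, p_hmd: np.ndarray):
    """Least-squares fit of R, t such that p_hmd ~= R @ p_robot + t.

    p_robot, p_hmd: (N, 3) arrays of corresponding points.
    Returns (R, t) with R a 3x3 rotation matrix and t a length-3 translation.
    """
    c_r = p_robot.mean(axis=0)              # centroid in robot frame
    c_h = p_hmd.mean(axis=0)                # centroid in HMD frame
    H = (p_robot - c_r).T @ (p_hmd - c_h)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_h - R @ c_r
    return R, t

if __name__ == "__main__":
    # Synthetic usage example: recover a known transform from slightly noisy points.
    rng = np.random.default_rng(0)
    p_robot = rng.uniform(-1.0, 1.0, size=(6, 3))
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    p_hmd = (R_true @ p_robot.T).T + t_true + rng.normal(scale=1e-3, size=p_robot.shape)
    R, t = estimate_rigid_transform(p_robot, p_hmd)
    residual = np.linalg.norm((R @ p_robot.T).T + t - p_hmd, axis=1).mean()
    print("mean residual [m]:", residual)
```

In a marker-free, semi-automatic setting the correspondences would typically be refined further (e.g., by ICP against the HMD's spatial mesh), but the closed-form fit above gives the initial robot-to-HMD registration that the displayed information and tracked motions depend on.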


