PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing

07/02/2018
by Diederik Paul Moeys, et al.

Machine vision systems using convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate, and therefore achieve a fixed latency and power consumption tradeoff. This paper describes further work on the first experiments with a closed-loop robotic system that integrates a CNN with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and OFF events at a variable frame rate ranging from 15 to 500 Hz, depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the field of view of the predator. During inference, combining the ten output classes of the CNN allows extracting the analog position vector of the prey relative to the predator with a mean estimation error of 8.7. The system is compatible with conventional deep learning technology, but achieves a variable latency-power tradeoff that adapts automatically to the dynamics. Finally, the robustness of the algorithm, a human performance comparison, and a deconvolution analysis are also explored.
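
As a rough illustration of two ideas in the abstract (constant-event-count DVS histograms, whose frame rate therefore tracks scene dynamics, and decoding an analog prey position by combining the CNN's output classes), the Python sketch below shows one plausible reading. The function names, the sensor resolution, the event-tuple layout, and the split of the ten classes into position bins plus a "prey not visible" class are illustrative assumptions, not the paper's implementation.

import numpy as np

SENSOR_W, SENSOR_H = 240, 180     # assumed DAVIS240 pixel array size
EVENTS_PER_FRAME = 5000           # fixed ON+OFF event count per DVS histogram (from the abstract)

def accumulate_dvs_histogram(events):
    # Build a 2-channel (OFF/ON) histogram from the next EVENTS_PER_FRAME events.
    # 'events' is assumed to be an iterable of (timestamp_us, x, y, polarity) tuples;
    # because the event count is fixed, the effective frame rate rises and falls with
    # scene dynamics (roughly 15-500 Hz in the paper's setup).
    hist = np.zeros((2, SENSOR_H, SENSOR_W), dtype=np.float32)
    for i, (_, x, y, pol) in enumerate(events):
        if i >= EVENTS_PER_FRAME:
            break
        hist[1 if pol > 0 else 0, y, x] += 1.0
    return hist

def decode_position(class_probs, bin_centers):
    # Combine per-bin class probabilities into one analog position estimate via a
    # probability-weighted average of the bin centers; one plausible reading of
    # "combining the ten output classes", not the paper's exact decoding rule.
    p = np.asarray(class_probs, dtype=np.float64)
    p = p / p.sum()
    return float(np.dot(p, np.asarray(bin_centers, dtype=np.float64)))

# Toy usage: 9 horizontal position bins across the field of view, normalized to [-1, 1]
# (assuming a tenth "prey not visible" class is handled separately).
bins = np.linspace(-1.0, 1.0, 9)
probs = [0.01, 0.02, 0.05, 0.10, 0.40, 0.30, 0.08, 0.03, 0.01]
print("estimated prey bearing (normalized): %+.3f" % decode_position(probs, bins))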


Related research

06/30/2016 · Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network
This paper describes the application of a Convolutional Neural Network (...

05/27/2023 · ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing UAV-Platform with Event-Based and Frame-Based Cameras
The interest in dynamic vision sensor (DVS)-powered unmanned aerial vehi...

05/17/2019 · Dynamic Vision Sensor integration on FPGA-based CNN accelerators for high-speed visual classification
Deep-learning is a cutting edge theory that is being applied to many fie...

05/18/2020 · DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction
Neuromorphic event cameras are useful for dynamic vision problems under ...

03/30/2023 · Event-based Agile Object Catching with a Quadrupedal Robot
Quadrupedal robots are conquering various indoor and outdoor application...

03/13/2018 · Dynamic Vision Sensors for Human Activity Recognition
Unlike conventional cameras which capture video at a fixed frame rate, D...

05/21/2021 · Bringing A Robot Simulator to the SCAMP Vision System
This work develops and demonstrates the integration of the SCAMP-5d visi...
