Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers

12/17/2021
by Yusra Alkendi, et al.

Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer-vision community and is serving as a key enabler for a multitude of applications. This technology has offered significant advantages, including reduced power consumption, reduced processing needs, and communication speed-ups. However, neuromorphic cameras suffer from significant amounts of measurement noise, which deteriorates the performance of neuromorphic event-based perception and navigation algorithms. In this paper, we propose a novel noise filtration algorithm to eliminate events that do not represent real log-intensity variations in the observed scene. We employ a Graph Neural Network (GNN)-driven transformer algorithm, called GNN-Transformer, to classify every active event pixel in the raw stream as either a real log-intensity variation or noise. Within the GNN, a message-passing framework, called EventConv, is carried out to reflect the spatiotemporal correlation among the events while preserving their asynchronous nature. We also introduce the Known-object Ground-Truth Labeling (KoGTL) approach for generating approximate ground-truth labels of event streams under various illumination conditions. KoGTL is used to generate labeled datasets from experiments recorded in challenging lighting conditions, which are used to train and extensively test the proposed algorithm. When tested on unseen datasets, the proposed algorithm outperforms existing methods by 12% in terms of filtration accuracy. Additional tests are conducted on publicly available datasets to demonstrate the generalization capabilities of the proposed algorithm in the presence of illumination variations and different motion dynamics. Compared to existing solutions, qualitative results verify the superior capability of the proposed algorithm to eliminate noise while preserving meaningful scene events.
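To make the EventConv idea concrete, below is a minimal sketch (not the authors' implementation) of a message-passing layer that aggregates information from each event's spatiotemporal neighbours and feeds a per-event signal-vs-noise classifier. The neighbourhood construction (k-nearest neighbours in normalized (x, y, t) space), the choice of polarity as the input feature, the max aggregation, and all names such as EventConvSketch are assumptions made purely for illustration.

```python
# Hedged sketch of an EventConv-style message-passing layer for event denoising.
# Assumptions (not from the paper): k-NN neighbourhoods in normalized (x, y, t),
# polarity-only input features, max aggregation, and a linear classifier head.
import torch
import torch.nn as nn

class EventConvSketch(nn.Module):
    """Aggregates features from each event's spatiotemporal neighbours."""
    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        # MLP applied to (neighbour feature, relative space-time offset)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + 3, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, coords: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) normalized (x, y, t); feats: (N, in_dim) per-event features
        dist = torch.cdist(coords, coords)                          # (N, N) pairwise distances
        knn = dist.topk(self.k + 1, largest=False).indices[:, 1:]   # (N, k) neighbour ids, self excluded
        nbr_feats = feats[knn]                                      # (N, k, in_dim)
        rel_pos = coords[knn] - coords.unsqueeze(1)                 # (N, k, 3) space-time offsets
        msgs = self.mlp(torch.cat([nbr_feats, rel_pos], dim=-1))    # per-neighbour messages
        return msgs.max(dim=1).values                               # max-aggregate over neighbours

if __name__ == "__main__":
    events = torch.rand(1024, 3)                    # (x, y, t) normalized to [0, 1]
    polarity = torch.randint(0, 2, (1024, 1)).float()
    layer = EventConvSketch(in_dim=1, out_dim=32, k=8)
    head = nn.Linear(32, 2)                         # signal vs. noise logits per event
    logits = head(layer(events, polarity))
    print(logits.shape)                             # torch.Size([1024, 2])
```

In the paper, the per-event features produced by the GNN are classified by a transformer stage (the GNN-Transformer); the linear head above is only a stand-in to keep the sketch self-contained and runnable.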

research
03/18/2020

Event Probability Mask (EPM) and Event Denoising Convolutional Neural Network (EDnCNN) for Neuromorphic Cameras

This paper presents a novel method for labeling real-world neuromorphic ...
research
09/27/2017

Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion

In recent years, dynamic vision sensors (DVS), also known as event-based...
research
08/01/2023

On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing

An event-based camera outputs an event whenever a change in scene bright...
research
04/11/2022

Event Transformer

The event camera is a bio-vision inspired camera with high dynamic range...
research
08/19/2019

Graph-Based Object Classification for Neuromorphic Vision Sensing

Neuromorphic vision sensing (NVS) devices represent visual information a...
research
05/05/2023

Asynchronous Events-based Panoptic Segmentation using Graph Mixer Neural Network

In the context of robotic grasping, object segmentation encounters sever...
