EVIMO2: An Event Camera Dataset for Motion Segmentation, Optical Flow, Structure from Motion, and Visual Inertial Odometry in Indoor Scenes with Monocular or Stereo Algorithms

05/06/2022
by Levi Burner, et al.

A new event camera dataset, EVIMO2, is introduced that improves on the popular EVIMO dataset by providing more data, from better cameras, in more complex scenarios. As with its predecessor, EVIMO2 provides labels in the form of per-pixel ground-truth depth and segmentation as well as camera and object poses. All sequences use data from physical cameras, and many feature multiple independently moving objects. Such labeled data is rarely available in physical event camera datasets, so EVIMO2 will serve as a challenging benchmark for existing algorithms and a rich training set for the development of new ones. In particular, EVIMO2 is suited to supporting research in motion and object segmentation, optical flow, structure from motion, and visual (inertial) odometry in both monocular and stereo configurations. EVIMO2 consists of 41 minutes of data from three 640×480 event cameras, one 2080×1552 classical color camera, inertial measurements from two six-axis inertial measurement units, and millimeter-accurate object poses from a Vicon motion capture system. The dataset's 173 sequences fall into three categories: 3.75 minutes of independently moving household objects, 22.55 minutes of static scenes, and 14.85 minutes of basic motions in shallow scenes. Some sequences were recorded in low-light conditions where conventional cameras fail. Depth and segmentation are provided at 60 Hz for the event cameras and 30 Hz for the classical camera, and the masks can be regenerated with open-source code at rates up to 200 Hz. This technical report briefly describes EVIMO2; the full documentation is available online, and videos of individual sequences can be sampled on the download page.
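To make the per-pixel ground truth concrete, the sketch below shows one way labeled frames like EVIMO2's could be stored and read back as NumPy archives. The file name, array keys (`depth`, `masks`), and frame layout are illustrative assumptions, not the dataset's actual schema; consult the online documentation for the real format.

```python
import numpy as np

# Assumed (hypothetical) layout: per-frame depth maps and per-pixel
# object-id segmentation masks at the event cameras' 640x480 resolution,
# stacked along a leading frame axis in a compressed .npz archive.

def make_example_archive(path, n_frames=2, h=480, w=640):
    """Write a tiny synthetic archive mimicking the assumed layout."""
    depth = np.random.rand(n_frames, h, w).astype(np.float32)          # depth in meters
    masks = np.random.randint(0, 3, (n_frames, h, w), dtype=np.uint8)  # 0 = background, >0 = object id
    np.savez_compressed(path, depth=depth, masks=masks)

def load_frame(path, i):
    """Return the depth map and segmentation mask for frame i."""
    with np.load(path) as data:
        return data["depth"][i], data["masks"][i]

make_example_archive("evimo2_example.npz")
depth, mask = load_frame("evimo2_example.npz", 0)
print(depth.shape, mask.shape)  # prints (480, 640) (480, 640)
```

Storing depth and masks with a shared frame axis keeps the two labels aligned pixel-for-pixel, which is what motion-segmentation and flow pipelines typically need.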


