OpenVSLAM: A Versatile Visual SLAM Framework

10/02/2019
by Shinya Sumikura, et al.

In this paper, we introduce OpenVSLAM, a visual SLAM framework with high usability and extensibility. Visual SLAM systems are essential for AR devices, autonomous control of robots and drones, and similar applications. However, conventional open-source visual SLAM frameworks are not designed to be called as libraries from third-party programs. To overcome this limitation, we have developed a novel visual SLAM framework that is easy to use and extend, and that incorporates several useful features and functions for research and development. OpenVSLAM is released at https://github.com/xdspacelab/openvslam under the 2-clause BSD license.
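As a rough illustration of what "called from third-party programs" means in practice, the sketch below shows a minimal monocular tracking loop in C++. The class and method names (openvslam::config, openvslam::system, feed_monocular_frame, save_map_database) follow the example programs shipped in the repository linked above, but the file paths and frame rate are placeholders, and details may differ between versions.

    // Minimal monocular tracking loop (illustrative sketch only).
    #include <openvslam/system.h>
    #include <openvslam/config.h>

    #include <opencv2/core/core.hpp>
    #include <opencv2/videoio.hpp>

    #include <memory>

    int main() {
        // Load camera/ORB settings from a YAML file and the ORB vocabulary
        // (placeholder file names).
        auto cfg = std::make_shared<openvslam::config>("config.yaml");
        openvslam::system slam(cfg, "orb_vocab.dbow2");
        slam.startup();

        cv::VideoCapture video("input.mp4");
        cv::Mat frame;
        double timestamp = 0.0;
        while (video.read(frame)) {
            // Feed each frame to the tracker; mapping and loop closing
            // run on background threads inside the system.
            slam.feed_monocular_frame(frame, timestamp);
            timestamp += 1.0 / 30.0;  // assume a 30 fps stream
        }

        // Store the map so it can be reused in localization-only runs.
        slam.save_map_database("map.msg");
        slam.shutdown();
        return 0;
    }

Because the system is exposed as an ordinary C++ library, the same pattern can be embedded in a robot controller, an AR application, or a custom evaluation script rather than being limited to the bundled command-line tools.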

Related research

08/24/2021  DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
We introduce DROID-SLAM, a new deep learning based SLAM system. DROID-SL...

02/08/2017  Monocular LSD-SLAM Integration within AR System
In this paper, we cover the process of integrating Large-Scale Direct Si...

02/21/2019  GSLAM: A General SLAM Framework and Benchmark
SLAM technology has recently seen many successes and attracted the atten...

08/21/2018  SLAMBench2: Multi-Objective Head-to-Head Benchmarking for Visual SLAM
SLAM is becoming a key component of robotics and augmented reality (AR) ...

06/04/2023  NICE-SLAM with Adaptive Feature Grids
NICE-SLAM is a dense visual SLAM system that combines the advantages of ...

05/17/2023  TextSLAM: Visual SLAM with Semantic Planar Text Features
We propose a novel visual SLAM method that integrates text objects tight...

03/22/2021  iRotate: Active Visual SLAM for Omnidirectional Robots
In this letter, we present an active visual SLAM approach for omnidirect...
