
ROW-SLAM: Under-Canopy Cornfield Semantic SLAM

by Jiacheng Yuan et al.
University of Minnesota

We study a semantic SLAM problem faced by a robot tasked with autonomous weeding under the corn canopy. The goal is to detect corn stalks and localize them in a global coordinate frame. This is a challenging setup for existing algorithms because there is very little space between the camera and the plants, and the camera motion is primarily restricted to be along the row. To overcome these challenges, we present a multi-camera system where a side camera (facing the plants) is used for detection whereas front and back cameras are used for motion estimation. Next, we show how semantic features in the environment (corn stalks, ground, and crop planes) can be used to develop a robust semantic SLAM solution and present results from field trials performed throughout the growing season across various cornfields.
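The division of labor described above — front/back cameras supply the motion estimate while the side camera supplies stalk detections that are placed into a global frame — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes planar (SE(2)) robot poses from odometry, stalk detections already expressed as 2D offsets in the side-camera frame, a hypothetical fixed camera offset `cam_offset`, and a simple greedy radius-based association step in place of a full SLAM back end.

```python
import numpy as np

def se2_matrix(x, y, theta):
    """Homogeneous 2D pose; motion is assumed roughly planar along the row."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def localize_stalks(robot_poses, side_detections, cam_offset=(0.0, 0.3)):
    """Project side-camera stalk detections (camera-frame 2D points) into
    the global frame using poses estimated from the front/back cameras.
    `cam_offset` is an assumed rigid side-camera mounting offset."""
    world_pts = []
    for pose, dets in zip(robot_poses, side_detections):
        T = se2_matrix(*pose)
        for cx, cy in dets:
            p = T @ np.array([cx + cam_offset[0], cy + cam_offset[1], 1.0])
            world_pts.append((p[0], p[1]))
    return world_pts

def merge_detections(points, radius=0.1):
    """Greedy data association: a detection within `radius` of an existing
    landmark is averaged into it; otherwise it starts a new landmark."""
    landmarks = []  # each entry: [sum_x, sum_y, count]
    for x, y in points:
        for lm in landmarks:
            mx, my = lm[0] / lm[2], lm[1] / lm[2]
            if (mx - x) ** 2 + (my - y) ** 2 <= radius ** 2:
                lm[0] += x; lm[1] += y; lm[2] += 1
                break
        else:
            landmarks.append([x, y, 1])
    return [(sx / n, sy / n) for sx, sy, n in landmarks]
```

For example, the same stalk seen from two robot poses one meter apart produces two world-frame points that the association step merges into a single landmark — the repeated-observation structure that makes per-stalk localization robust as the robot moves along the row.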

