Identifying Most Walkable Direction for Navigation in an Outdoor Environment

11/21/2017 ∙ by Sachin Mehta, et al.

We present an approach for identifying the most walkable direction for navigation using a hand-held camera. Our approach extracts semantically rich contextual information from the scene using a custom encoder-decoder architecture for semantic segmentation and models the spatial and temporal behavior of objects in the scene using a spatio-temporal graph. The system learns to minimize a cost function over the spatial and temporal object attributes to identify the most walkable direction. We construct a new annotated navigation dataset collected using a hand-held mobile camera in an unconstrained outdoor environment, which includes challenging settings such as highly dynamic scenes, occlusion between objects, and distortions. Our system achieves an accuracy of 84%. Our custom segmentation network is both fast and accurate, achieving mIOU (mean intersection over union) scores of 81 and 44.7 on the PASCAL VOC and the PASCAL Context datasets, respectively, while running at about 21 frames per second.
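The abstract describes the direction-selection step only at a high level. The sketch below illustrates one plausible reading of it: the frame is divided into candidate directions, and each direction is scored from the segmentation output (spatial attributes) and the motion of tracked obstacles (temporal attributes), with the lowest-cost direction chosen. The strip-based discretization, class ids, weights, and function names are illustrative assumptions, not the paper's actual cost function or graph formulation.

```python
# Minimal sketch of a cost-minimization step for picking a walkable direction.
# All names, label ids, and weights below are hypothetical; the paper's
# spatio-temporal graph and cost function are not specified in the abstract.
import numpy as np

WALKABLE_CLASSES = {1, 2}      # e.g. sidewalk, road (hypothetical label ids)
OBSTACLE_CLASSES = {11, 12}    # e.g. person, vehicle (hypothetical label ids)

def direction_costs(seg_mask: np.ndarray, obstacle_motion: np.ndarray,
                    num_directions: int = 9,
                    w_free: float = 1.0, w_motion: float = 0.5) -> np.ndarray:
    """Score candidate directions by splitting the frame into vertical strips.

    seg_mask        : (H, W) array of per-pixel class ids from the segmentation net.
    obstacle_motion : (H, W) array of per-pixel motion magnitude of tracked objects,
                      standing in for the temporal attributes of the graph.
    Returns one cost per candidate direction (lower = more walkable).
    """
    strips_seg = np.array_split(seg_mask, num_directions, axis=1)
    strips_mot = np.array_split(obstacle_motion, num_directions, axis=1)
    costs = []
    for seg, mot in zip(strips_seg, strips_mot):
        walkable = np.isin(seg, list(WALKABLE_CLASSES)).mean()   # free-space ratio
        blocked = np.isin(seg, list(OBSTACLE_CLASSES)).mean()    # static obstacles
        motion = mot.mean()                                      # dynamic obstacles
        costs.append(w_free * (1.0 - walkable) + blocked + w_motion * motion)
    return np.asarray(costs)

def most_walkable_direction(seg_mask, obstacle_motion, num_directions=9):
    """Return the index of the lowest-cost (most walkable) direction."""
    return int(np.argmin(direction_costs(seg_mask, obstacle_motion, num_directions)))
```

In this sketch the number of candidate directions and the relative weighting of free space versus obstacle motion are tunable; the paper instead learns the trade-off by minimizing its cost over the spatio-temporal object attributes.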
