Where to Look Next: Unsupervised Active Visual Exploration on 360° Input

09/23/2019
by Soroush Seifi, et al.

We address the problem of active visual exploration of large 360° inputs. In our setting, an active agent with limited camera bandwidth explores its 360° environment by changing its viewing direction over a limited number of discrete time steps. It thus observes the world as a sequence of narrow field-of-view 'glimpses', deciding for itself where to look next. Our proposed method exceeds the performance of previous works by a significant margin, without requiring deep reinforcement learning or training separate networks as sidekicks. A key component of our system is the set of spatial memory maps that make the system aware of the glimpses' orientations (their locations in the 360° image). Further, we highlight the advantages of retina-like glimpses when the agent's sensor bandwidth and number of time steps are limited. Finally, we use our trained model to classify the whole scene using only the information observed in the glimpses.
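To make the notions of retina-like glimpses and spatial memory maps more concrete, here is a minimal NumPy sketch. This is not the authors' code: the function names, patch sizes, nearest-neighbour subsampling, zero-padded borders, and the simplified memory update (which ignores 360° wrap-around) are illustrative assumptions only.

```python
import numpy as np

def retina_glimpse(image, cx, cy, base_size=32, scales=3):
    """Extract a retina-like glimpse: concentric crops of increasing size
    around fixation point (cx, cy), each subsampled to base_size x base_size,
    so resolution decreases away from the centre of gaze."""
    H, W, C = image.shape
    patches = []
    for s in range(scales):
        size = base_size * (2 ** s)            # e.g. 32, 64, 128 pixels
        half = size // 2
        y0, y1 = cy - half, cy + half
        x0, x1 = cx - half, cx + half
        # zero-pad where the crop runs off the image border (assumption)
        patch = np.zeros((size, size, C), dtype=image.dtype)
        yy0, yy1 = max(y0, 0), min(y1, H)
        xx0, xx1 = max(x0, 0), min(x1, W)
        patch[yy0 - y0:yy1 - y0, xx0 - x0:xx1 - x0] = image[yy0:yy1, xx0:xx1]
        # coarser subsampling for the larger, peripheral rings
        step = 2 ** s
        patches.append(patch[::step, ::step])
    return np.stack(patches)                   # (scales, base_size, base_size, C)

def update_memory_map(memory, glimpse_feat, cx, cy):
    """Write glimpse features into a panoramic spatial memory map at the
    glimpse's location, so the agent remembers where it has already looked.
    Assumes the patch lies fully inside the panorama (no wrap-around)."""
    ph, pw = glimpse_feat.shape[:2]
    y0, x0 = cy - ph // 2, cx - pw // 2
    memory[y0:y0 + ph, x0:x0 + pw] = glimpse_feat
    return memory
```

In this sketch, the stacked multi-scale patches stand in for the paper's retina-like glimpse, and the panoramic canvas stands in for a spatial memory map that keeps track of glimpse orientations across time steps.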
