Emergence of Exploratory Look-Around Behaviors through Active Observation Completion

06/27/2019 ∙ by Santhosh K. Ramakrishnan, et al. ∙ The University of Texas at Austin

Standard computer vision systems assume access to intelligently captured inputs (e.g., photos from a human photographer), yet autonomously capturing good observations is a major challenge in itself. We address the problem of learning to look around: how can an agent learn to acquire informative visual observations? We propose a reinforcement learning solution, where the agent is rewarded for reducing its uncertainty about the unobserved portions of its environment. Specifically, the agent is trained to select a short sequence of glimpses after which it must infer the appearance of its full environment. To address the challenge of sparse rewards, we further introduce sidekick policy learning, which exploits the asymmetry in observability between training and test time. The proposed methods learn observation policies that not only perform the completion task for which they are trained, but also generalize to exhibit useful "look-around" behavior for a range of active perception tasks.





Visual recognition has witnessed dramatic successes in recent years. Fueled by benchmarks composed of carefully curated Web photos and videos, the focus has been on inferring semantic labels from human-captured images—whether classifying scenes, detecting objects, or recognizing activities [1, 2, 3]. However, visual perception requires not only making inferences from observations, but also making decisions about what to observe. Methods that use human-captured images implicitly assume properties in their inputs, such as canonical poses of objects, no motion blur, or ideal lighting conditions. As a result, they gloss over important hurdles for robotic agents acting in the real world.

For an agent, individual views of an environment offer only a small fraction of all relevant information. For instance, an agent with a view of a television screen in front of it may not know if it is in a living room or a bedroom. An agent observing a mug from the side may have to move to see it from above to know what is inside.

Figure 1: Looking around efficiently is a complex task requiring the ability to reason about regularities in the visual world using cues like context and geometry. Top: An agent that has observed limited portions of its environment can reasonably predict some unobserved portions (e.g., water near the ship), but is much more uncertain about other portions. Where should it look next? Bottom: An agent inspecting a 3D object. Having seen a top view and a side view, how must it rotate the mug now to get maximum new information? Critically, we aim to learn policies that are not specific to a given object or scene, nor to a specific perception task. Instead, the look-around policies ought to benefit the agent exploring new, unseen environments and performing tasks unspecified when learning the look-around behavior.

An agent ought to be able to enter a new environment or pick up a new object and intelligently (non-exhaustively) “look around”. The ability to actively explore would be valuable both in task-driven scenarios (e.g., a drone searches for signs of a particular activity) and in scenarios where the task itself unfolds simultaneously with the agent’s exploratory actions (e.g., a search-and-rescue robot enters a burning building and dynamically decides its mission). For example, consider a service robot that moves around an open environment without specific goals, waiting for future tasks such as delivering a package from one person to another or picking up coffee from the kitchen. It needs to gather information constantly and efficiently so that it is well prepared to perform future tasks with minimal delays. Similarly, consider a search-and-rescue scenario, where a robot is deployed in a hostile environment such as a burning building or an earthquake collapse where time is of the essence. The robot has to adapt to such new, unseen environments and rapidly gather information that other robots and humans can use to respond effectively to tasks that dynamically unfold over time (humans caught under debris, locations of fires, presence of hazardous materials). A robot that knows how to explore intelligently can be critical in such scenarios, reducing risks for people while providing an effective response.

Any such scenario brings forth the question of how to collect visual information to benefit perception. A naïve strategy would be to gain full information by making every possible observation—that is, looking around in all directions, or systematically examining all sides of an object. However, observing all aspects is often inconvenient if not intractable. Fortunately, in practice not all views are equally informative. The natural visual world contains regularities, suggesting not every view needs to be sampled for accurate perception. For instance, humans rarely need to fully observe an object to understand its 3D shape [4, 5, 6], and one can often understand the primary contents of a room without literally scanning it [7]. In short, given a set of past observations, some new views are more informative than others. See Figure 1.

This leads us to investigate the question of how to effectively look around: how can a learning system make intelligent decisions about how to acquire new exploratory visual observations? We propose a solution based on “active observation completion”: an agent must actively observe a small fraction of its environment so that it can predict the pixelwise appearances of unseen portions of the environment.

Our problem setting relates to but is distinct from prior work in active perception, intrinsic motivation, and view synthesis. While there is interesting recent headway in active object recognition [8, 9, 10, 11] and intelligent search mechanisms for detection [12, 13, 14], such systems are supervised and task-specific—limited to accelerating a pre-defined recognition task. In reinforcement learning, intrinsic motivation methods define generic rewards such as novelty or coverage [15, 16, 17] that encourage exploration for navigation agents, but they do not self-supervise policy learning in an observed visual environment, nor do they examine transfer beyond navigation tasks. View synthesis approaches use limited views of the environment along with geometric properties to generate unseen views [18, 19, 20, 21, 22]. Whereas these methods assume individual human-captured images, our problem requires actively selecting the input views themselves. Our primary goal is not to synthesize unseen views, but rather to use novel view inference as a means to elicit intelligent exploration policies that transfer well to other tasks.

In the following, we first formally define the learning task, overview our approach, and present results. We then discuss limitations of the current approach and key future directions, followed by the materials and methods—an overview of the specific deep networks and policy learning approaches we develop. This article expands upon our two previous conference papers [23, 24].

Active observation completion

Our goal is to learn a policy for controlling an agent’s camera motions such that it can explore novel environments and objects efficiently. To this end, we formulate an unsupervised learning objective based on active observation completion. The main idea is to favor sequences of camera motions that will make the unseen parts of the agent’s surroundings easier to predict. The output is a look-around policy equipped to gather new images in new environments. As we will demonstrate in results, it prepares the agent to perform intelligent exploration for a wide range of perception tasks, such as recognition, light source localization, and pose estimation.

Problem formulation

The problem setting is formally stated as follows. The agent starts by looking at a novel environment (or object) from some unknown viewpoint [54]. It has a budget of time to explore the environment. The learning objective is to minimize the error in the agent’s pixelwise reconstruction of the full—mostly unobserved—environment using only the sequence of views selected within that budget. In order to do this, the agent must maintain an internal representation of how the environment would look conditioned on the views it has seen so far.

We represent the entire environment as a “viewgrid” containing views from a discrete set of viewpoints. To do this, we evenly sample M elevations and N azimuths and form all MN possible (elevation, azimuth) pairings. The viewgrid is then denoted by V(X) = {x(X, θ(i)) | 1 ≤ i ≤ MN}, where x(X, θ(i)) is the 2D view of X from viewpoint θ(i), the i-th pairing. More generally, θ could capture both camera angles and position; however, to best exploit existing datasets, we limit our experiments to camera rotations alone with no translation movements.

The agent expends its time budget T in discrete increments by selecting camera motions in sequence. Each camera motion comprises an actively chosen “glimpse”. At each time step t, the agent gets an image observation x_t from the current viewpoint. It then makes an exploratory motion a_t based on its policy π. When the agent executes action a_t, the viewpoint changes according to θ_(t+1) = θ_t + a_t. For each camera motion executed by the agent, a reward r_t is provided by the environment. Using the view x_t, the agent updates its internal representation of the environment, denoted V̂(X). Because camera motions are restricted to have proximity to the current camera angle and candidate viewpoints partially overlap, the discrete action space promotes efficiency without neglecting the physical realities of the problem (following [9, 8, 23, 25]). During training, the full viewgrids of the environments are available to the agent as supervision. During testing, the system must predict the complete viewgrid, having seen only a few views within it.
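As a minimal sketch (not the authors' code) of the discrete dynamics above, the following assumes M = 4 elevations, N = 8 azimuths, and, purely for illustration, an action space of (elevation, azimuth) offsets within a 5×5 window around the current view; azimuth wraps around the panorama while elevation is clamped at the poles.

```python
from itertools import product

M, N = 4, 8  # illustrative grid of elevations x azimuths

# Discrete action space: all (delta_elev, delta_azim) offsets in a 5x5 window.
ACTIONS = list(product(range(-2, 3), range(-2, 3)))  # 25 actions

def step(viewpoint, action):
    """theta_(t+1) = theta_t + a_t: azimuth wraps, elevation clamps."""
    elev, azim = viewpoint
    d_elev, d_azim = action
    return (min(max(elev + d_elev, 0), M - 1), (azim + d_azim) % N)

new_vp = step((0, 7), (-2, 2))  # elevation clamps at 0, azimuth wraps to 1
```

A full environment would also return the image x_t at the new viewpoint; only the viewpoint bookkeeping is shown here.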

We explore our idea in two settings. See Figure 1. In the first, the agent scans a scene through its limited field-of-view camera; the goal is to select efficient camera motions so that after a few glimpses, it can model unobserved portions of the scene well. In the second, the agent manipulates a 3D object to inspect it; the goal is to select efficient manipulations so that after only a small number of actions, it has a full model of the object’s 3D shape. In both cases, the system must learn to leverage visual regularities (shape primitives, context, etc.) that suggest the likely contents of unseen views, focusing on portions that are hard to “hallucinate” (i.e., predict pixelwise).

Posing the active view acquisition problem in terms of observation completion has two key advantages: generality and low-cost (label-free) training data. The objective is general, in the sense that pixelwise reconstruction makes no assumptions about the future task for which the glimpses will be used. The training data is low-cost, since no manual annotations are required; the agent learns its look-around policy by exploring any visual scene or object. This assumes that capturing images is much more cost-effective than manually annotating images.

Figure 2: Approach overview: The agent (actor) encodes individual views from the environment and aggregates them into a belief state vector. This belief is used by the decoder to get the reconstructed viewgrid. The agent’s incomplete belief about the environment leads to uncertainty over some viewpoints (red question marks). To reduce this uncertainty, the agent intelligently samples more views based on its current belief within a fixed time budget T. The agent is penalized based on the reconstruction error at the end of T steps (completion loss). Additionally, we provide guidance through sidekicks (sidekick loss), which exploit the full viewgrid—only at training time—to alleviate uncertainty in training due to partial observability. The learned exploratory policy is then transferred to other tasks (top row shows four tasks we consider).

Approach overview

The active observation completion task poses two major challenges. First, to predict unobserved views well, the agent must learn to understand 3D relationships from very few views; classic geometric solutions struggle under these conditions, so our reconstruction must instead draw on semantic and contextual cues. Second, intelligent action selection is critical to this task: given a set of past observations, the system must act based on which new views are likely to be most informative, i.e., determine which views would most improve its model of the full viewgrid. We stress that the system will be faced with objects and scenes it has never encountered during training, yet still must intelligently choose where it would be valuable to look next.

As a core solution to these challenges, we present a reinforcement learning (RL) approach for active observation completion [23]. See Figure 2. Our RL approach uses a recurrent neural network to aggregate information over a sequence of views; a stochastic neural network uses that aggregated state and the current observation to select a sequence of useful camera motions. The agent is rewarded based on its predictions of unobserved views. It therefore learns a policy to intelligently select actions (camera motions) to maximize the quality of its predictions. During training, the complete viewgrid is known, thereby allowing the agent to “self-supervise” its policy learning, meaning it learns without any human-provided labels. See Materials and Methods below for the details of our approach.
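To make the pipeline concrete, here is a schematic forward rollout in NumPy. The layer shapes, tanh belief update, and softmax policy head are illustrative assumptions, not the authors' architecture, and the policy-gradient update itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
D, A, V = 32, 25, 4 * 8          # belief dim, num actions, num viewgrid cells
W_enc = rng.normal(0, 0.1, (D, 16))   # glimpse encoder (16-dim glimpse feature)
W_rec = rng.normal(0, 0.1, (D, D))    # recurrent belief update
W_pol = rng.normal(0, 0.1, (A, D))    # stochastic policy head
W_dec = rng.normal(0, 0.1, (V, D))    # viewgrid decoder (one value per cell)

def rollout(glimpses, T=4):
    belief = np.zeros(D)
    actions = []
    for t in range(T):
        feat = W_enc @ glimpses[t]                 # encode current view
        belief = np.tanh(W_rec @ belief + feat)    # aggregate into belief state
        logits = W_pol @ belief
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                       # softmax over camera motions
        actions.append(rng.choice(A, p=probs))     # sample a camera motion
    recon = W_dec @ belief                         # decode the full viewgrid
    return recon, actions

recon, actions = rollout([rng.normal(size=16) for _ in range(4)])
# Completion loss: pixelwise MSE against the (training-time) true viewgrid.
loss = np.mean((recon - np.zeros(V)) ** 2)
```

In the actual system, the negative completion loss at the end of the episode serves as the terminal reward that trains the stochastic policy.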

We judge the quality of viewgrid reconstruction in the pixel space so as to maintain generality: all pixels for the full scene (or 3D object) would encompass all potentially useful visual information for any task. Hence, our approach avoids committing to any intermediate semantic representation, in favor of learning policies that seek generic information useful to many tasks. That said, our formulation is easily adaptable to more specialized settings. For example, if the target tasks only require semantic segmentation labels, the predictions could be in the space of object labels instead.

Reinforcement learning approaches often suffer from costly exploration stages and partial state observability. In particular, an active visual agent has to take a long series of actions purely based on the limited information available from its first person view [26, 27, 28, 23]. The most effective viewpoint trajectories are buried among many mediocre ones, impeding the agent’s exploration in complex state-action spaces.

To address this challenge, as the second main technical contribution of this work, we introduce “sidekick policy learning”. In the active observation completion task there is a natural asymmetry in observability: once deployed, an active exploration agent can only move the camera to look around nearby, yet during training it can access omnidirectional viewpoints. Existing methods facing this asymmetry simply restrict the agent to the same partial observability during training [10, 25, 8, 23, 26, 29]. In contrast, our sidekick approach introduces reward shaping and demonstrations that leverage full observability during training to precompute the information content of each candidate glimpse. The sidekicks then guide the agent to visit information hotspots in the environment or sample information-rich trajectories, while accounting for the fact that observability is only partial during testing [24]. By doing so, sidekicks accelerate the training of the actual agent and improve the overall performance. We use the name “sidekick” to signify how a sidekick to a hero (e.g., in a comic or movie) provides alternate points of view, knowledge, and skills that the hero does not have. In contrast to an “expert” [30, 31], a sidekick complements the hero (agent), yet cannot solve the main task at hand by itself. See Materials and Methods below for more details.
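The following toy sketch conveys the idea of precomputing per-view information scores under full observability. The mean-fill "reconstructor" and the min-max normalization are stand-ins for the learned components described in Materials and Methods; the real sidekick scores views with the trained completion model.

```python
import numpy as np

def sidekick_scores(viewgrid):
    """viewgrid: (num_views, H, W) array. Returns one score per view in [0, 1]."""
    mean_view = viewgrid.mean(axis=0)            # "reconstruction" with no input
    base_err = ((viewgrid - mean_view) ** 2).mean()
    scores = np.empty(len(viewgrid))
    for i, v in enumerate(viewgrid):
        # Naively reconstruct every view by copying the single observed view v.
        err_i = ((viewgrid - v) ** 2).mean()
        scores[i] = base_err - err_i             # error reduction = information
    # Normalize to [0, 1] so the scores can act as shaped rewards.
    scores -= scores.min()
    if scores.max() > 0:
        scores /= scores.max()
    return scores

grid = np.random.default_rng(1).normal(size=(32, 8, 8))
s = sidekick_scores(grid)  # one "information hotspot" score per candidate view
```

Views with high scores correspond to the information hotspots that the sidekick steers the agent toward during training.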

We show that the active observation completion policies learned by our approach serve as exploratory policies that are transferable to entirely new tasks and environments. Given a new task, rather than train a policy with task-specific rewards to direct the camera, we drop in the pre-trained look-around policy. We demonstrate that policies learned via active observation completion transfer well to several semantic and geometric estimation tasks, and they even perform competitively with supervised task-specific policies (please see the look-around policy transfer section in Results).


We next present experiments to evaluate the behaviors learned by the proposed look-around agents.


For benchmarking and reproducibility, we evaluate our approach on two widely used datasets:

SUN360 Dataset for Scenes

For this dataset, our limited field-of-view agent attempts to complete an omnidirectional scene. SUN360 [32] has spherical panoramas of 26 diverse categories. The dataset consists of 6,174 training, 1,013 validation, and 1,805 testing examples. The viewgrid consists of 32×32-resolution 2D views sampled from a discrete grid of camera elevations and azimuths.

ModelNet Dataset for Objects

For this dataset, our agent manipulates a 3D object to complete its viewgrid of the object seen from all viewing directions. The viewgrid constitutes an implicit image-based 3D shape model. ModelNet [10] has two subsets of CAD models: ModelNet-40 (40 categories) and ModelNet-10 (a 10-category subset of ModelNet-40). Excluding the ModelNet-10 classes, ModelNet-40 consists of 6,085 training, 327 validation, and 1,310 testing examples. ModelNet-10 consists of 3,991 training, 181 validation, and 727 testing examples. The viewgrid consists of 32×32-resolution 2D views sampled from a discrete grid of camera elevations and azimuths [55]. We render the objects with substantial lighting variations in order to increase the difficulty of perception. To test the agent’s ability to generalize to previously unseen categories, we always test on object categories in ModelNet-10, which are unseen during training.

For both datasets, at each time step the agent moves within a 5 elevations × 5 azimuths neighborhood of the current position. Requiring nearby motions reflects that the agent cannot teleport, and it ensures that the actions have approximately uniform real-world cost. Balancing task difficulty (harder tasks require more views) against training speed (fewer views is faster), we set the training episode length T a priori. By training for a target budget T, the agent learns non-myopic behavior to best utilize the expected exploration time. Note that while increasing T during training increases training costs considerably, doing so can naturally lead to better reconstructions (please see Supplementary for longer-episode results).


We test our active completion approach with and without sidekick policy learning [56]—lookaround and lookaround+spl, respectively—compared to a variety of baselines:

  • one-view is our method trained with a budget of T = 1. No information aggregation or action selection is performed by this baseline.

  • rnd-actions is identical to our approach, except that the action selection module is replaced by randomly selected actions from the pool of all possible actions.

  • large-action chooses the largest allowable action repeatedly. This tests if far-apart views are sufficiently informative.

  • peek-saliency moves to the most salient view within reach at each timestep, using a popular saliency metric [33]. To avoid getting stuck in a local saliency maximum, it does not revisit seen views. Note that this baseline peeks at neighboring views prior to action selection to measure saliency, giving it an unfair and impossible advantage over our methods and the other baselines.

These baselines all use the same network architecture as our methods, differing only in the exploration policy, which we seek to evaluate. In the interest of evaluating on a wide range of starting positions, we evaluate each method multiple times on each test viewgrid, starting from every possible viewpoint.

Active observation completion results

(a) Pixelwise MSE errors vs. time on both datasets.
Method           SUN360                                ModelNet unseen classes
                 average           adversarial         average           adversarial
                 mean    % impr.   mean    % impr.     mean    % impr.   mean    % impr.
one-view         41.85   -         70.44   -           12.57   -         20.32   -
rnd-actions      32.42   22.52     54.66   22.39       10.30   18.06     14.01   31.04
large-actions    31.68   24.28     42.56   39.57       10.18   18.97     13.04   35.79
peek-saliency    31.80   24.02     45.24   35.76       10.22   18.67     12.78   37.05
lookaround       25.14   39.91     30.44   56.79        9.60   23.60     12.01   40.88
lookaround+spl   24.24   42.06     30.75   56.34        9.40   25.23     11.85   41.67
(b) Average/adversarial MSE error (lower is better) and corresponding improvements (%) over the one-view model (higher is better) on both datasets.
Figure 3: Scene and object completion accuracy under different agent behaviors. Top plots (a) show error rates over time as more glimpses are acquired, and bottom table (b) shows errors/improvements after all glimpses are acquired.

We show the results of scene and object completion on SUN360 and ModelNet (unseen classes) in Figure 3b. The metrics “average” and “adversarial” measure the expected value of the average and maximum pixelwise mean squared error (MSE) over all starting points for a single sample, respectively. While the former measures the average expected performance, the latter measures the worst-case performance when starting from the hardest place in each sample (averaged over examples). We additionally report the relative improvement of each model over one-view in order to isolate the gains obtained due to action selection over a pre-trained model. Since all methods share the same pre-training stage as one-view, this metric provides an apples-to-apples measure of how well the different strategies for moving perform. All methods are evaluated over T time steps, matching the training budget, unless stated otherwise.
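For clarity, the two metrics can be computed from a per-sample, per-starting-point error matrix as follows (the numbers are made up for illustration):

```python
import numpy as np

def completion_metrics(errors):
    """errors: (num_samples, num_starts) array of pixelwise MSE values."""
    average = errors.mean(axis=1).mean()       # expected-case error
    adversarial = errors.max(axis=1).mean()    # worst-start error per sample
    return average, adversarial

errs = np.array([[0.2, 0.4, 0.6],
                 [0.1, 0.3, 0.5]])
avg, adv = completion_metrics(errs)
# avg = mean([0.4, 0.3]) = 0.35; adv = mean([0.6, 0.5]) = 0.55
```

Both metrics average over test samples; only the reduction over starting points differs (mean versus max).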

As expected, all methods that acquire multiple glimpses outperform one-view by taking advantage of the extra information available from additional views. Both of our approaches, lookaround and lookaround+spl, significantly outperform the others in all settings. The peek-saliency agent hovers near the most salient views in the neighborhood of the starting view, since nearby views tend to have similar saliency scores. The large-actions agent’s accuracy often saturates near the top or bottom of the viewgrid after reaching the environment boundaries. Compared to these behaviors, intelligent sampling of actions using our learned policy leads to significant improvements. Using sidekicks in lookaround+spl improves performance and convergence speed. This is consistent with our results reported in [24] and demonstrates the advantage of using sidekicks. The faster convergence of lookaround+spl is shown in the Supplementary Materials.

Whereas Figure 3b shows the agents’ ultimate ability to infer the entire scene (object), Figure 3a shows the reconstruction errors as a function of time. As we can see, the error decreases consistently over time for all methods, but it drops most sharply for lookaround and lookaround+spl. A faster reduction in reconstruction error indicates more efficient information aggregation.

Visualizations of the agent’s evolving internal belief state echo this quantitative trend. Figure 4 shows observation completion episodes from the lookaround agent along with the ground truth viewgrid, viewing angles selected by the agent, and reconstruction errors over time. We show the SUN360 viewgrids in equirectangular projection for better visualization. Initially, the agent exhibits significant uncertainty in its belief, as seen in the poorly decoded reconstructions and large MSE values. However, over time, it actively samples views that quickly improve the reconstruction quality.

Figures 5 and 6 visualize the ultimate reconstructions after all glimpses are acquired [57]. For contrast, we also display the results for rnd-actions in Figure 5. The policies learned by our agent lead to more realistic and accurate reconstructions. Though the agent only sees about 15% of all the pixels, its choice of informative glimpses allows it to anticipate the remainder of the novel scene or object. Movie S1 in the Supplementary Materials shows walkthroughs of the reconstructed environments from the agent’s egocentric point of view.

Figure 4: Episodes of active observation completion for SUN360 (left) and ModelNet (right). For each example, the first row on the left shows the ground-truth viewgrid, and the subsequent rows on the left show the reconstructions over time along with the pixelwise MSE error and the agent’s current glimpse (marked in red). On the right, the sampled viewing angles of the agent at each time step are shown on the viewing sphere (the agent’s viewpoint and field of view are marked with a red arrow and outline on the sphere). The reconstruction quality improves over time as the agent quickly refines the scene structure and object shape.
Figure 5: Three examples of reconstructions after all glimpses are acquired (in order to generate more complete images). The first column shows the ground-truth viewgrids (equirectangular projections for SUN360), the second column shows the corresponding GAN-refined reconstructions of the lookaround and rnd-actions agents, and the third column shows handpicked unseen views (marked on the ground truth) and the corresponding angles. Please see Supplementary for more GAN refinement details. Best viewed in PDF with zoom. Using an intelligent policy, lookaround captures more information from the scene, leading to more realistic reconstructions (examples 1 and 3). While rnd-actions leads to realistic reconstructions on example 2, its textures and content differ from the ground truth, especially on the ground. Note that the bounding boxes over views are warped to curves in the equirectangular projection for SUN360.
Figure 6: The ground truth 360 panorama or viewgrid, agent glimpse inputs, and final GAN-refined reconstructions for multiple environments from SUN360 and ModelNet. See also the video provided in the Supplementary Materials.

Look-around policy transfer

Having shown that our unsupervised approach successfully trains policies to acquire visual observations useful for completion, we next test how well the policies transfer to new tasks. Recall, our hypothesis is that the glimpses acquired to maximize completion accuracy will transfer well to solve perception tasks efficiently, since they are chosen to reveal maximal information about the full environment or object.

To demonstrate transfer, we first train an rnd-actions model for each of the target tasks (“model A”) and a lookaround model for the active observation completion task (“model B”). The policy from model B is then used to select actions for the target task using model A’s task head (see details in the unsupervised policy transfer section in Materials and Methods). In this way, the agent learns to solve the task given arbitrary observations, then inherits our intelligent look-around policy to (potentially) solve the task more quickly—with fewer glimpses. A successful outcome is one where the look-around agent solves the task with efficiency similar to a supervised task-specific policy, despite being unsupervised and task-agnostic. We test policy transferability on the following four tasks.
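The decoupling above can be sketched abstractly: the episode is driven by one model's policy while another model's task head consumes the gathered observations. Everything below is a hypothetical stub, not the authors' interface.

```python
def run_transfer(env_step, policy_B, task_head_A, first_obs, T=4):
    """Actions come from the look-around policy (model B); the final
    prediction comes from the task head trained in model A."""
    history = [first_obs]
    for _ in range(T - 1):
        history.append(env_step(policy_B(history)))
    return task_head_A(history)

# Toy stubs (hypothetical): the "environment" echoes the action back as the
# observation, the policy moves by the number of views seen so far, and the
# task head simply sums its observations.
pred = run_transfer(
    env_step=lambda a: a,
    policy_B=lambda h: len(h),
    task_head_A=lambda h: sum(h),
    first_obs=0,
)
```

The point of the design is that model A never needs to know which policy gathered its observations, which is what makes dropping in the pre-trained look-around policy possible.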

Task 1: Active categorization

The first task is category recognition: the agent must produce the category name of the object or scene it is exploring. We plug look-around policies into the active categorization system from [8] and follow a similar setup. For ModelNet, we train model A on ModelNet-10 training objects, and the active observation completion model (model B) on ModelNet-40 training objects, which are disjoint classes from those in the target ModelNet-10 dataset. For SUN360, both models are trained on SUN360 training data. We replicate the results from [8] and use the corresponding architecture and training strategies. In particular, the classification head is trained with a cross-entropy loss over the set of classes and the supervised reward function for policy learning is the negative of the classification loss at the end of the episode. We refer the readers to [8] for the full details. Performance is measured using classification accuracy on the test set.

Task 2: Active surface area estimation

The second task is surface area estimation. The agent starts by looking at some view of the object and must intelligently select subsequent viewing angles to estimate the 3D object’s surface area. The task is relevant for a robot that needs to interact with an unfamiliar object. The 3D models from ModelNet-10 are converted into 50×50×50 voxel occupancy grids. The true surface area is the number of unoccupied voxels that are adjacent to occupied voxels. Estimation is posed as a regression task where the agent predicts a normalized metric value between 0 and 1. Performance is measured using the relative MSE between predicted and ground-truth areas on the test set; i.e., if the ground-truth and predicted areas for one example are y and ŷ, respectively, its error is ((y − ŷ)/y)². This normalizes the error so that it remains comparable across objects of different sizes.
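A small sketch of the ground-truth computation and the error metric described above, assuming 6-connected (face) adjacency for "adjacent"; the adjacency convention and grid size are illustrative assumptions.

```python
import numpy as np

def surface_area(occ):
    """occ: boolean (D, D, D) occupancy grid. Counts unoccupied voxels
    that touch at least one occupied voxel across a face."""
    padded = np.pad(occ, 1)  # boundary neighbors count as unoccupied
    shifts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    touches = np.zeros_like(padded, dtype=bool)
    for shift in shifts:
        # Mark cells that have an occupied face-neighbor in this direction.
        touches |= np.roll(padded, shift, axis=(0, 1, 2))
    return int((~padded & touches).sum())

def relative_sq_error(y_true, y_pred):
    """Per-example relative squared error, ((y - y_hat) / y)^2."""
    return ((y_true - y_pred) / y_true) ** 2

# A single occupied voxel is surrounded by exactly 6 unoccupied neighbors.
grid = np.zeros((5, 5, 5), dtype=bool)
grid[2, 2, 2] = True
```

Each qualifying unoccupied voxel is counted once, even if it touches several occupied voxels.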

Task 3: Active light source localization

In the third task, the agent is required to localize the source of light surrounding the 3D object. To design a controlled experimental setting, when rendering the ModelNet objects, we place a single light source randomly at any one of two possible azimuths and four possible elevations relative to the object (see Figure S2 in Supplementary). The task is posed as a four-way classification problem in which the agent must identify the correct elevation (irrespective of the azimuth, so that there can be no unfair orientation bias). Performance is measured using localization accuracy on the test set.

Task 4: Active pose estimation

The fourth task is camera pose estimation. Having explored the environment, the agent is required to identify the elevation and relative azimuth of a given reference view. We propose a simple solution to this problem: using the agent’s reconstruction after T time steps, we measure the ℓ2 distance between the given view and each of the reconstructed views. The elevation and azimuth of the reconstructed view with the smallest ℓ2 distance are predicted as the pose. The agent uses its own decoder, as opposed to the decoder from rnd-actions as done in the previous tasks. We do not evaluate pose estimation on ModelNet due to the ambiguity arising from symmetric objects. The models are evaluated using the absolute angular error (AE) in (1) elevation and (2) azimuth predictions, denoted by ‘AE elev.’ and ‘AE azim.’ in Table 1. During evaluation, the starting positions of the agent are selected uniformly over the grid of views. The reference view is sampled randomly from the viewgrid for each episode.
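The readout described above can be sketched as a nearest-neighbor lookup over the reconstructed viewgrid; the array shapes here are illustrative, not the paper's actual resolutions.

```python
import numpy as np

def estimate_pose(reconstruction, reference):
    """reconstruction: (M, N, H, W) viewgrid; reference: (H, W) view.
    Returns the (elevation index, azimuth index) of the closest
    reconstructed view under l2 distance."""
    M, N = reconstruction.shape[:2]
    dists = np.linalg.norm(
        reconstruction.reshape(M * N, -1) - reference.ravel(), axis=1)
    idx = int(dists.argmin())
    return divmod(idx, N)  # row-major index -> (elevation, azimuth)

rng = np.random.default_rng(2)
grid = rng.normal(size=(4, 8, 16, 16))
# A lightly perturbed copy of view (1, 3) should be matched back to it.
elev, azim = estimate_pose(grid, grid[1, 3] + 0.01 * rng.normal(size=(16, 16)))
```

No extra training is needed for this task: the decoder output itself serves as the matching template.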

For baselines, we use one-view, rnd-actions, large-action, peek-saliency (defined in the previous section) and supervised. supervised is a policy that is trained specifically on the training objective for each task, i.e., with task-specific rewards.

We compare the transfer of lookaround and lookaround+spl to these baselines in Table 1. The transfer performance of our policies is better than that of rnd-actions on all tasks. This shows that intelligent sequential camera control has scope for improving these perception tasks’ efficiency. Overall, our look-around policy transfers well across tasks, competing with or even outperforming the supervised task-specific policies. Furthermore, our look-around policies consistently perform the tasks better than the baseline policies for glimpse selection based on saliency or large actions.

For active recognition on ModelNet, most of the methods perform similarly. On that dataset, recognition accuracy with a single view is already fairly high, leaving limited headroom for improvement with additional views, intelligently selected or otherwise. On pose estimation, our learned policies outperform the baselines, as expected, since the reconstructions generated by our agents are more accurate. On light source localization, our policies show competitive results and come close to the performance of supervised; they also significantly outperform the remaining baselines, demonstrating successful transfer. For surface area estimation, we observe that all methods, including the supervised policy, manage only marginal gains over one-view. We believe this indicates the difficulty of the task, as well as the need for more 3D-specific architectures, such as those that produce voxel grids, point clouds, or surface meshes as output [35, 36, 37].

These results demonstrate the effectiveness of learning active look-around policies via observation completion on unlabelled datasets—without task-specific rewards. As we see in Table 1, such policies can successfully transfer to a wide range of perception tasks and often perform on par with supervised task-specific policies.

                      SUN360                               ModelNet
                Active recogn.  Pose estimation      Active recogn.  Light source loc.  Surface area
Method          Accuracy        AE azim.  AE elev.   Accuracy        Accuracy           RMSE
one-view        51.94           75.74     30.32      83.60           58.74              21.22
rnd-actions     62.90           66.18     19.53      88.46           72.97              19.04
large-action    63.73           67.57     19.94      89.05           75.14              18.38
peek-saliency   64.20           65.46     19.76      88.74           71.19              18.85
supervised      68.21           51.36      9.81      88.58           86.30              18.43
lookaround      68.89           50.00      9.94      89.00           83.29              18.82
lookaround+spl  69.32           47.13      9.36      89.38           83.08              18.14

Table 1: Transfer results: lookaround and lookaround+spl are transferred to the rnd-actions task-heads from each task. The same unsupervised look-around policy successfully accelerates a variety of tasks—even competing well with the fully supervised task-specific policy (supervised).


We propose the task of active observation completion to facilitate learning look-around behaviors in a task-independent way. Our proposed approach outperforms several baselines and effectively anticipates the high-level properties of the environment, having observed only a small fraction of the scene or 3D object. We further show that adding the proposed RL sidekicks leads to faster training and convergence to better policies (Figures 3 and S3). Once look-around behaviors are learned, we show that they can be effectively transferred to a wide range of semantic and geometric tasks where they at times even outperform supervised policies trained in a traditional task-specific manner (Table 1).

While we are motivated to devise sidekick policy learning for active visual exploration, it is more generally applicable whenever an RL agent can access greater observability during training than during deployment. For example, agents may operate on first-person observations at test time, yet have access to multiple sensors during training in simulation environments [38, 39, 40]. Similarly, an active object recognition system [8, 25, 11, 10, 29] can only see its previously selected views of the object; yet if trained with CAD models, it could observe all possible views while learning. Future work can explore sidekicks in such scenarios.

Despite the promising results, our approach has several shortcomings, and our work points to interesting directions for future work. While the agent is moving from one view to another, it does not use the information available during this motion. This is reasonable because allowable actions are confined to a neighborhood of the current observation, and hence are relatively close in 3D world space. Still, an interesting extension would be to use the sequence of views obtained while the action is being executed.

Second, our current action space is discretized to promote training efficiency, and we assume that each action has unit cost and optimize the agent to perform well for a fixed cost budget. The unit cost is approximately correct given the locality of the action space. Nonetheless, it could be interesting to adapt to free-range actions with action-specific costs by allowing the agent to sample any action (continuous or discrete) and penalizing it based on the cost of that action. Such costs could be embodiment-specific. For example, humanoid robots may find it easier to move forward than to turn and walk, whereas wheeled robots can perform both motions equally well. Such a formulation would also naturally account for the sequence of views seen during action execution. Furthermore, as an alternative to training the agent to make non-myopic camera motions that best reduce reconstruction error within a fixed budget of glimpses, one could instead formulate the objective in terms of a fixed threshold on reconstruction error, and allow the agent to move until that threshold is reached. The former (our formulation) is valuable for scenarios with hard resource constraints; the latter is valuable for scenarios with hard accuracy constraints.

A third limitation is that, in practice, the diversity of actions selected by our learned policies is sometimes limited. The agent often prefers a reduced action space of two or three actions, depending on the starting point and the environment, despite a loss term explicitly encouraging high entropy over selected actions. We believe this is related to optimization difficulties commonly associated with policy-gradient-based reinforcement learning, and improvements on this front would also improve the performance of our approach.

Our approach is also affected by a well-known limitation of rectangular representations of spherical environments [53], where information at the poles is oversampled compared to the central elevations, resulting in redundant information across different azimuths at the poles. This is further exacerbated in realistic scenes, where the poles often depict the sky, floor, and ceiling, which tend to have limited diversity. Due to this issue, we observed that heuristic policies that sample constant actions while avoiding the poles compete strongly with learned approaches and even outperform supervised policies in some cases. We found that incorporating priors that encourage the agent to move away from the poles results in consistent performance gains for our method as well. One future direction to avoid this issue would be to design environments with varying azimuths across elevations.

Another drawback is that our current testbeds handle only camera rotations, not translations. In future work, we will extend our approach to 3D environments that also permit camera translations [41, 42]. In such scenarios, intelligent look-around behavior becomes even more essential, since no matter what visual sensors it has, an agent must move its camera to observe another room. We also plan to consider other tasks for transfer such as target-driven navigation [43] and model-based RL [44, 45], where a preliminary exploratory stage is crucial for performing well on downstream tasks.

Finally, it will be interesting future work to explore how multiple sensing modalities could work together to learn look-around behavior. For example, an agent that hears a sudden noise from one direction might learn to look there to gain new information about dynamic objects in the scene. Or, an agent that sees an unfamiliar texture might reach out to touch the object surface to better anticipate its shape.

Materials and Methods

In this final section, we overview the implementation of our approach. Complete implementation details are provided in the Supplementary Materials.

Recurrent observation completion network

Figure 7: Architecture of our active observation completion system. While the input-output pair shown here is for the case of scenes, we use the same architecture for the case of 3D objects. In the output viewgrid, solid black portions denote observed views, question marks denote unobserved views, and transparent black portions denote the system’s uncertain contextual guesses.

We now discuss the recurrent neural network used for active observation completion. The architecture naturally splits into five modules with distinct functions: sense, fuse, aggregate, decode, and act. Architecture details for all modules are given in Figure 7.

Encoding to an internal model of the target

First, we define the core modules with which the agent encodes its internal model of the current environment. At each time-step, the agent is presented with a 2D view captured from a new viewpoint. We stress that absolute viewpoint coordinates are not fully known, and objects/scenes are not presented in any canonical orientation; all viewgrids inferred by our approach treat the first view's azimuth as the origin. We assume only that the absolute elevation can be sensed using gravity, and that the agent is aware of the relative motion from the previous view. This proprioceptive metadata (elevation, relative motion) accompanies each view.

The sense module processes these inputs in separate neural network stacks to produce two vector outputs (Figure 7, top left). fuse combines information from both input streams and embeds it into a single vector (Figure 7, top center). This combined sensory information from the current observation is then fed into aggregate, a long short-term memory (LSTM) module [46]. aggregate maintains an encoded internal model of the object/scene under observation to "remember" all relevant information from past observations. At each time-step, it updates this code by combining it with the current observation (Figure 7, top right).

sense, fuse, and aggregate together encode observations into an internal state that is used both to produce the output viewgrid and to select the next action, as we detail next.
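As a rough illustration of this encoding pipeline, the following numpy sketch wires toy single-layer sense and fuse stacks into a hand-rolled LSTM cell. All dimensions, weight initializations, and the single-layer linear stacks are placeholder choices, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- sense: separate stacks for the view features and proprioceptive metadata ---
D_VIEW, D_PRO, D_HID = 32, 5, 16          # illustrative dimensions
Wv = rng.normal(size=(D_VIEW, D_HID)) * 0.1
Wp = rng.normal(size=(D_PRO, D_HID)) * 0.1

def sense(view_feat, proprio):
    return np.tanh(view_feat @ Wv), np.tanh(proprio @ Wp)

# --- fuse: embed both streams into a single vector ---
Wf = rng.normal(size=(2 * D_HID, D_HID)) * 0.1

def fuse(s_view, s_pro):
    return np.tanh(np.concatenate([s_view, s_pro]) @ Wf)

# --- aggregate: an LSTM cell maintaining the internal model across glimpses ---
Wx = rng.normal(size=(D_HID, 4 * D_HID)) * 0.1
Wh = rng.normal(size=(D_HID, 4 * D_HID)) * 0.1

def aggregate(f, h, c):
    gates = f @ Wx + h @ Wh
    i, g, o, u = np.split(gates, 4)
    c_new = sigmoid(i) * np.tanh(u) + sigmoid(g) * c   # gated memory update
    h_new = sigmoid(o) * np.tanh(c_new)                # aggregated code
    return h_new, c_new

# Roll the encoder over a short episode of glimpses.
h = c = np.zeros(D_HID)
for _ in range(4):
    view_feat, proprio = rng.normal(size=D_VIEW), rng.normal(size=D_PRO)
    h, c = aggregate(fuse(*sense(view_feat, proprio)), h, c)
```

The final `h` plays the role of the aggregated internal code that decode and act consume.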

Decoding to the inferred viewgrid

decode translates the aggregated code into the predicted viewgrid. To do this, it first reshapes the code into a sequence of small 2D feature maps (Figure 7, bottom right), then upsamples to the target dimensions using a series of learned up-convolutions. The final up-convolution produces one map for each view in the viewgrid; for color images, it produces three maps per view, one for each color channel. The result is reshaped into the target viewgrid (Figure 7, bottom center). Seen views are pasted directly from memory.

Acting to select the next viewpoint to observe

Finally, act processes the aggregate code to issue a motor command (Figure 7, middle right). For objects, the motor commands rotate the object (i.e., the agent manipulates the object or peers around it); for scenes, the motor commands move the camera (i.e., the agent turns in the 3D environment). Upon execution, the observation's pose is updated for the next time-step. The initial pose is randomly sampled, corresponding to the agent initially encountering the new environment or object from an arbitrary pose.

Internally, act first produces a distribution over all possible actions and then samples from this distribution. We restrict act to select small discrete actions at each time-step to approximately simulate continuous motion. Once the new viewpoint is set, a new view is captured and the whole process repeats until the budget of time-steps is exhausted. These modules are learned end-to-end in a policy learning framework, as described in the section below on the policy learning formulation.
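The act module's sampling step can be sketched as a linear policy head followed by a softmax; the head, dimensions, and action count here are illustrative stand-ins, not the trained network:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()           # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def act(aggregate_code, W, rng):
    """Map the aggregated code to a distribution over the discrete action
    set and sample a camera motion from it (illustrative linear head)."""
    probs = softmax(aggregate_code @ W)
    action = rng.choice(len(probs), p=probs)
    return action, probs

rng = np.random.default_rng(0)
code = rng.normal(size=8)               # hypothetical aggregated code
W = rng.normal(size=(8, 5))             # 5 hypothetical small camera motions
action, probs = act(code, W, rng)
```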

Sidekick policy learning

We now describe the sidekicks used to learn faster and converge to better policies under partial observability. To perform the task effectively, the agent has to use the limited information available from its egocentric view to (1) aggregate information, (2) select intelligent actions to improve its training, and (3) decode the entire viewgrid. This poses significant hurdles for policy learning under partial observability, that is, making decisions while lacking full state knowledge. For example, our agent will not know the entire 360° environment before it must decide where to look next. To address these issues, we propose sidekicks that exploit the full observability available exclusively during training to aid policy learning of the ultimate agent. The key idea is to solve a simpler problem with relevance to the actual look-around task using full observability, and then transfer the knowledge to the main agent. We define two types of sidekicks: reward-based and demonstration-based.

Reward-based sidekick

The reward-based sidekick aims to identify a set of highly informative views in the environment by exploiting full observability during training. It considers a simplified completion problem in which the goal is to evaluate the information content of individual views, i.e., to identify information hotspots in the environment that strongly suggest other parts of the environment. For example, it might learn that facing the blank ceiling of a kitchen is less informative than looking at the contents of the refrigerator or stove.

To evaluate the informativeness of a candidate view, the sidekick measures how well the entire environment can be reconstructed given only that view. We train a completion model that reconstructs the full viewgrid from any single view (i.e., the one-view model). The score assigned to a candidate view is inversely proportional to the reconstruction error of the entire environment given only that view. The sidekick conveys the results to the agent during policy learning in the form of an augmented reward at each time-step. Please see the section on sidekick policy learning in the Supplementary for more details.
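A minimal sketch of this scoring, assuming per-view reconstruction errors have already been computed by the one-view model; the inverse-error scoring with min-max normalization is one plausible instantiation of "inversely proportional", not necessarily the paper's exact form:

```python
import numpy as np

def sidekick_scores(recon_errors, eps=1e-8):
    """Convert per-view reconstruction errors of the full environment
    (one error per candidate view) into informativeness scores in [0, 1].
    A view whose single-view reconstruction error is low scores high."""
    errors = np.asarray(recon_errors, dtype=float)
    scores = 1.0 / (errors + eps)                 # low error -> high score
    # Normalize so the scores form a comparable reward signal.
    scores = (scores - scores.min()) / (scores.max() - scores.min() + eps)
    return scores
```

These scores would then be post-processed (e.g., non-max suppressed, as in Figure S1) before shaping rewards.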

Demonstration-based sidekick

Our second sidekick generates trajectories of informative views through a sidekick policy. Within a trajectory, the informativeness of the current view is conditioned on the past views selected, as opposed to sampling individually informative views. To condition the informativeness on past views, we use a cumulative coverage score (see Eqns. 9 and 10 in Supplementary) that measures the amount of information gathered about different parts of the environment up to the current time. The goodness of a view is measured by the increase in cumulative coverage obtained upon selecting that view, i.e., how well it complements the previously selected views. Please see the section on sidekick policy learning in the supplementary material for full details.

The demonstration sidekick uses this coverage score to sample informative trajectories. Given a starting view, the demonstration sidekick selects a trajectory of views that jointly maximizes coverage of the environment. At each time-step, it evaluates the gain in cumulative coverage obtained by sampling each view in its neighborhood and then greedily samples the best view (see Eqn. 11 in Supplementary).
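The greedy selection can be sketched as follows, assuming a precomputed pairwise coverage table and an explicit neighborhood map; both are hypothetical stand-ins for the coverage function of Eqns. 9-11 in the Supplementary:

```python
import numpy as np

def greedy_trajectory(coverage, start, neighbors, T=4):
    """Greedily select a trajectory of T views maximizing cumulative coverage.

    coverage:  (V, V) array; coverage[i, j] = how well view i covers view j.
    start:     index of the starting view.
    neighbors: dict mapping a view index to the views reachable in one action
               (illustrative neighborhood structure).
    """
    traj = [start]
    covered = coverage[start].copy()        # cumulative coverage so far
    for _ in range(T - 1):
        best, best_gain = None, -np.inf
        for v in neighbors[traj[-1]]:
            # Marginal gain in cumulative coverage from adding view v.
            gain = np.maximum(covered, coverage[v]).sum() - covered.sum()
            if gain > best_gain:
                best, best_gain = v, gain
        traj.append(best)
        covered = np.maximum(covered, coverage[best])
    return traj
```

Taking the elementwise maximum models the idea that coverage of a viewgrid cell is determined by the best view selected so far.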

We use sidekick-generated trajectories as supervision to the agent for a short preparatory period. The goal is to initialize the agent with useful insights learned by the sidekick to accelerate training of better policies. We achieve this through a hybrid training procedure that combines imitation and reinforcement, as described in the demonstration-based sidekick section in the supplementary material.

Policy learning formulation

Having defined the recurrent network model and the sidekick policy preparation, we now describe the policy learning framework used to train our agent, as well as the mechanisms used to incorporate sidekick rewards and demonstrations. All modules are jointly optimized end-to-end to improve the final reconstructed viewgrid, which contains predicted views for all viewpoints. The agent learns a policy that returns a distribution over camera motions given the aggregated internal representation at each time-step, and it seeks the policy that minimizes reconstruction error for the environment under a fixed budget of camera motions (views). Let $w$ denote the weights of the Sense, Fuse, Aggregate, Decode, and Act modules, with $w_{act}$ and $w_{dec}$ the weights of Act and Decode, respectively. The overall weight update is:

$$w \leftarrow w - \frac{1}{N}\sum_{i=1}^{N}\left[\lambda_{rec}\,\nabla_{w \setminus w_{act}} L_{rec}^{(i)} + \lambda_{act}\,\nabla_{w \setminus w_{dec}} L_{act}^{(i)}\right],$$

where $N$ is the number of training samples, $i$ indexes over the training samples, $\lambda_{rec}$ and $\lambda_{act}$ are constants, and the reconstruction and policy updates modify all parameters except $w_{act}$ and $w_{dec}$, respectively.

The pixel-wise MSE reconstruction loss and the corresponding weight update at time $t$ are based on:

$$L_{rec}(t) = \sum_{\theta} d\big(\hat{x}_t(\theta + \Delta_0),\, x(\theta)\big),$$

where $\hat{x}_t(\theta)$ denotes the reconstructed view at viewpoint $\theta$ and time $t$, $x(\theta)$ the corresponding ground-truth view, $d$ the pixel-wise reconstruction MSE, and $\Delta_0$ the azimuthal offset that accounts for the unknown starting azimuth [23].
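One common way to handle the unknown starting azimuth is to evaluate the loss at the best azimuthal alignment of the predicted viewgrid; the sketch below adopts that convention (the exact offset handling in the paper may differ):

```python
import numpy as np

def recon_loss(pred_grid, true_grid):
    """Pixel-wise MSE between predicted and ground-truth viewgrids,
    minimized over azimuthal shifts since the starting azimuth is unknown.

    Both grids have shape (E, A, H, W): elevations x azimuths x pixels.
    """
    num_azimuths = true_grid.shape[1]
    losses = []
    for delta in range(num_azimuths):
        # Rotate the predicted grid along the azimuth axis by delta cells.
        shifted = np.roll(pred_grid, shift=delta, axis=1)
        losses.append(((shifted - true_grid) ** 2).mean())
    return min(losses)
```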

The agent's reward at time $t$ combines the intrinsic reward from the sidekick, $r^{sk}_t$, with the negated final reconstruction loss, $-L_{rec}(T)$:

$$r_t = r^{sk}_t - L_{rec}(T).$$

The sidekick reward serves to densify the rewards by exploiting full observability, thereby reducing uncertainty during policy learning. Please see the Supplementary for the exact form of $r^{sk}_t$.

The update from the policy consists of an actor-critic update, with a baseline to reduce variance, and supervision from the demonstration sidekick. We adopt the value function from an actor-critic method [48] as the baseline to update the Act module. The demonstration sidekick's supervision is defined below in Eqn. 5. The act term additionally includes a loss to update the learned value network and an entropy regularizer to promote diversity in action selection (please see the additional loss functions in the Supplementary).

Whereas the reward sidekick augments rewards, the demonstration sidekick instead influences policy learning by directly supervising the early rounds of action selection. This is achieved through a cross-entropy loss between the sidekick's policy $\pi_{sk}$ and the agent's policy $\pi$:

$$L_{demo}(t) = -\sum_{a \in \mathcal{A}} \pi_{sk}(a \mid s_t)\, \log \pi(a \mid s_t),$$

where $\mathcal{A}$ is the set of camera motions and $s_t$ the agent's internal state. Please see the sidekick policy learning section in the Supplementary for the exact form of $\pi_{sk}$.
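The cross-entropy supervision at a single time-step can be sketched directly; `demo_loss` and its arguments are illustrative names:

```python
import numpy as np

def demo_loss(sidekick_probs, agent_probs, eps=1e-12):
    """Cross entropy between the sidekick's action distribution and the
    agent's: low when the agent imitates the sidekick's choices."""
    p = np.asarray(sidekick_probs, dtype=float)
    q = np.asarray(agent_probs, dtype=float)
    return -(p * np.log(q + eps)).sum()
```

Minimizing this term pulls the agent's action distribution toward the sidekick's demonstrated choices during the preparatory period.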

We pretrain the Sense, Fuse, and Decode modules with the reconstruction loss. The full network is then trained end-to-end (with Sense and Fuse frozen). For training with sidekicks, the agent is augmented either with additional rewards from the reward sidekick (Eqn. 3) or with an additional supervised loss from the demonstration sidekick (Eqn. 5).

Unsupervised policy transfer to unseen tasks

We now describe the mechanism used to transfer policies learned in an unsupervised fashion via active observation completion to new perception tasks requiring sequential observations. This section details the process overviewed above in the look-around policy transfer section. The main idea is to inject our generic look-around policy into new unseen tasks in unseen environments. In particular, we consider transferring our policy—trained with neither manual supervision nor task-specific reward—into various semantic and geometric recognition tasks for which the agent was not specifically trained. Recall that we consider four different tasks: recognition, surface area estimation, light source localization, and camera pose estimation.

At training time, we train an end-to-end task-specific model (model A) with a random policy (rnd-actions), and an active observation completion model (model B). Note that our completion model is trained without supervision to look around environments that have zero overlap with model A’s test set. Furthermore, even the categories seen during training may differ from those during testing. For example, the agent might see various furniture categories during training (bookcase, bed, etc.), but never a chair, yet it must generalize well to look around a chair.

At test time, both the task-specific model A and the active observation model B receive and process the same inputs at each time-step. The task-specific model has no learned policy of its own, since it was trained with randomly sampled actions. At each time-step, model B selects actions to complete its internal model of the new environment according to its look-around policy. Each action is then communicated to model A in place of the random actions with which it was trained; model A thus gathers its information based on the actions provided by model B and makes a prediction for the target task. If the policy learned in model B is truly generic, it will intelligently explore to solve the new (unseen) tasks despite never receiving task-specific reward for any of them during training.
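The test-time hand-off between the two models can be sketched with stub classes; `LookAroundPolicy`, `TaskHead`, and the index-shift environment are hypothetical placeholders for models B and A, not the paper's networks:

```python
import numpy as np

class LookAroundPolicy:
    """Stub for model B: the unsupervised look-around policy."""
    def __init__(self, num_actions, seed=0):
        self.rng = np.random.default_rng(seed)
        self.num_actions = num_actions
    def observe_and_act(self, view):
        # The real system updates model B's internal state and samples from
        # its learned policy; here we sample uniformly as a placeholder.
        return int(self.rng.integers(self.num_actions))

class TaskHead:
    """Stub for model A: trained with random actions, no policy of its own."""
    def __init__(self):
        self.views = []
    def observe(self, view):
        self.views.append(view)
    def predict(self):
        return float(np.mean(self.views))   # placeholder task prediction

def transfer_episode(env_views, policy, task_head, T=4):
    """Model B picks where to look; model A consumes the same views."""
    idx = 0
    for _ in range(T):
        view = env_views[idx]
        task_head.observe(view)              # model A sees the glimpse
        action = policy.observe_and_act(view)  # model B chooses the motion
        idx = (idx + action) % len(env_views)  # apply the chosen motion
    return task_head.predict()
```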

List of Supplementary Materials

The supplementary PDF file includes:

Text to augment the implementation details in Materials and Methods
Figure S1. Sidekick Framework
Figure S2. Light source localization example
Figure S3. Convergence of sidekick policy learning
Figure S4. Training on different target budgets
Figure S5. Episodes of active observation completion
Figure S6. GAN refinement

Other Supplementary Material for this manuscript include:

Movie S1 (.mp4 format). Sample walkthroughs in reconstructed environments
Movie S2 (.mp4 format). Active observation completion on SUN360
Movie S3 (.mp4 format). Active observation completion on ModelNet

References and Notes

  • [1] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 2015.
  • [2] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, 2014.
  • [3] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
  • [4] Kasey C Soska and Scott P Johnson. Development of three-dimensional object completion in infancy. In Child development, 2008.
  • [5] Kasey C Soska, Karen E Adolph, and Scott P Johnson. Systems in development: motor skill acquisition facilitates three-dimensional object completion. In Developmental psychology, 2010.
  • [6] Philip J Kellman and Elizabeth S Spelke. Perception of partly occluded objects in infancy. In Cognitive psychology, 1983.
  • [7] Antonio Torralba, Aude Oliva, Monica S Castelhano, and John M Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. In Psychological review, 2006.
  • [8] Dinesh Jayaraman and Kristen Grauman. Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion. In European Conference on Computer Vision, 2016.
  • [9] Mohsen Malmir, Karan Sikka, Deborah Forster, Javier R Movellan, and Garison Cottrell. Deep q-learning for active recognition of germs: Baseline performance on a standardized dataset for active learning. In British Machine Vision Conference, 2015.
  • [10] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Computer Vision and Pattern Recognition, IEEE Conference on, 2015.
  • [11] Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Košecká, and Alexander C Berg. A dataset for developing and benchmarking active vision. In Robotics and Automation, IEEE International Conference on, 2017.
  • [12] Serena Yeung, Olga Russakovsky, Greg Mori, and Li Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In Computer Vision and Pattern Recognition, IEEE Conference on, 2016.
  • [13] S. Mathe, A. Pirinen, and C. Sminchisescu. Reinforcement learning for visual object detection. In Computer Vision and Pattern Recognition, IEEE Conference on, 2016.
  • [14] S. Karayev, T. Baumgartner, M. Fritz, and T. Darrell. Timely object recognition. In Advances in Neural Information Processing Systems, 2012.
  • [15] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning, 2017.
  • [16] Tao Chen, Saurabh Gupta, and Abhinav Gupta. Learning exploration policies for navigation. In International Conference on Learning Representations, 2019.
  • [17] Benjamin Hepp, Debadeepta Dey, Sudipta N. Sinha, Ashish Kapoor, Neel Joshi, and Otmar Hilliges. Learn-to-score: Efficient 3d scene exploration by predicting view utility. In The European Conference on Computer Vision, September 2018.
  • [18] Shuran Song, Andy Zeng, Angel X Chang, Manolis Savva, Silvio Savarese, and Thomas Funkhouser. Im2pano3d: Extrapolating 360° structure and semantics beyond the field of view. In Computer Vision and Pattern Recognition, IEEE Conference on, pages 3847–3856, 2018.
  • [19] Dinghuang Ji, Junghyun Kwon, Max McFarland, and Silvio Savarese. Deep view morphing. In Computer Vision and Pattern Recognition, IEEE Conference on, volume 2, 2017.
  • [20] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in neural information processing systems, pages 2539–2547, 2015.
  • [21] Dinesh Jayaraman, Ruohan Gao, and Kristen Grauman. Shapecodes: Self-supervised feature learning by lifting views to viewgrids. European Conference on Computer Vision, 2018.
  • [22] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204–1210, 2018.
  • [23] Dinesh Jayaraman and Kristen Grauman. Learning to look around: Intelligently exploring unseen environments for unknown tasks. In Computer Vision and Pattern Recognition, IEEE Conference on, 2018.
  • [24] Santhosh K. Ramakrishnan and Kristen Grauman. Sidekick Policy Learning for Active Visual Exploration. In European Conference on Computer Vision, 2018.
  • [25] Edward Johns, Stefan Leutenegger, and Andrew J Davison. Pairwise decomposition of image sequences for active multi-view recognition. In Computer Vision and Pattern Recognition, IEEE Conference on, 2016.
  • [26] Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. Visual Semantic Planning using Deep Successor Representations. In Computer Vision, IEEE International Conference on, 2017.
  • [27] Saurabh Gupta, David Fouhey, Sergey Levine, and Jitendra Malik. Unifying map and landmark based representations for visual navigation. arXiv preprint arXiv:1712.08125, 2017.
  • [28] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In Robotics and Automation, IEEE International Conference on, 2017.
  • [29] D. Jayaraman and K. Grauman. End-to-end policy learning for active visual categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
  • [30] Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L Lewis, and Xiaoshi Wang. Deep learning for real-time atari game play using offline monte-carlo tree search planning. In Advances in Neural Information Processing Systems, 2014.
  • [31] Vladimir Vapnik and Rauf Izmailov. Learning with intelligent teacher. In Symposium on Conformal and Probabilistic Prediction with Applications, 2016.
  • [32] Jianxiong Xiao, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Recognizing scene viewpoint using panoramic place representation. In Computer Vision and Pattern Recognition, IEEE Conference on, 2012.
  • [33] Jonathan Harel, Christof Koch, and Pietro Perona. Graph-based visual saliency. In Advances in Neural Information Processing Systems, 2006.
  • [34] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Computer Vision and Pattern Recognition, IEEE Conference on, pages 5967–5976. IEEE, 2017.
  • [35] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
  • [36] Haoqiang Fan, Hao Su, and Leonidas J. Guibas. A point set generation network for 3d object reconstruction from a single image. In Computer Vision and Pattern Recognition, IEEE Conference on, July 2017.
  • [37] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. arXiv preprint arXiv:1804.01654, 2018.
  • [38] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on Robot Learning, 2017.
  • [39] Lerrel Pinto, Marcin Andrychowicz, Peter Welinder, Wojciech Zaremba, and Pieter Abbeel. Asymmetric actor critic for image-based robot learning. Robotics: Science and Systems, 2018.
  • [40] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answering. In Computer Vision and Pattern Recognition, IEEE Conference on, 2018.
  • [41] Yi Wu, Yuxin Wu, Georgia Gkioxari, and Yuandong Tian. Building generalizable agents with a realistic and rich 3d environment. arXiv preprint arXiv:1801.02209, 2018.
  • [42] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Computer Vision and Pattern Recognition, IEEE Conference on, 2018.
  • [43] Nikolay Savinov, Alexey Dosovitskiy, and Vladlen Koltun. Semi-parametric topological memory for navigation. International Conference on Learning Representations, 2018.
  • [44] David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
  • [45] AJ Piergiovanni, Alan Wu, and Michael S Ryoo. Learning real-world robot policies by dreaming. arXiv preprint arXiv:1805.07813, 2018.
  • [46] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • [47] Radford M Neal. Learning stochastic feedforward networks. Department of Computer Science, University of Toronto, 64(9), 1990.
  • [48] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 1998.
  • [49] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
  • [50] Alessandro Giusti, Jérôme Guzzi, Dan C Cireşan, Fang-Lin He, Juan P Rodríguez, Flavio Fontana, Matthias Faessler, Christian Forster, Jürgen Schmidhuber, Gianni Di Caro, et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 2016.
  • [51] Yan Duan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. In Advances in Neural Information Processing Systems, 2017.
  • [52] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [53] Benjamin Coors, Alexandru Paul Condurache, and Andreas Geiger. Spherenet: Learning spherical representations for detection and classification in omnidirectional images. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [54] For simplicity of presentation, we represent an “environment” as where the agent explores a novel scene, looking outward in new viewing directions. However, experiments will also use as an object where the agent moves around an object, looking inward at it from new viewing angles. Figure 1 illustrates the two scenarios.
  • [55] The angles were selected to break symmetry and reduce redundancy of views.
  • [56] For the sake of brevity, we report the best performances among the two sidekick variants we proposed in [24].
  • [57] We refine the decoded viewgrids (for both our method and the baseline) with a pix2pix [34]-style conditional Generative Adversarial Network (GAN), detailed in the Supplementary Materials.

Supplementary Materials

Sidekick Policy Learning

We now describe the exact forms of the sidekick reward and the sidekick policy learned by the reward-based and demonstration-based sidekicks, respectively.

Figure S1: Sidekick Framework: Top left shows the environment’s viewgrid, indexed by viewing elevation and azimuth. Top: Reward sidekick — scores individual views based on how well they alone permit inference of the viewgrid (Eq 6). The grid of scores (center) is post-processed with non-max suppression to prioritize non-redundant views (right), then used to shape the agent’s rewards. Bottom: Demonstration sidekick — The left “grid-of-grids” displays example coverage score maps (Eq 7) for all view pairs: the outer grid ranges over each selected view $\theta^{(i)}$, and each inner grid ranges over each view $\theta^{(j)}$ for the given $\theta^{(i)}$ (bottom left). A pixel in the inner grid is bright if coverage of $\theta^{(j)}$ is high given $\theta^{(i)}$, and dark otherwise. Each $\theta$ denotes an (elevation, azimuth) pair. While observed views and their neighbors are naturally recoverable (brighter), the sidekick uses broader environment context to also anticipate distant and/or different-looking parts of the environment, as seen by the non-uniform spread of scores in the left grid. Given the coverage function and a starting position, this sidekick selects actions to greedily optimize the coverage objective (Eq 8). The bottom right strip shows the cumulative coverage maps as each of the $T = 4$ glimpses is selected.

Reward-based sidekick

As mentioned in Materials and Methods, the score assigned to a candidate view is inversely proportional to the reconstruction error of the entire environment given only that view. Here, we elaborate on how this computation is performed. Let $\hat{V}(x)$ denote the decoded reconstruction of the viewgrid $V(X)$ given only the view $x$ as input. This is obtained from the one-view model which was originally presented as a baseline in Results. The sidekick scores the information in observation $x$ as:

$$\text{Score}(x) \;\propto\; \frac{1}{d\big(\hat{V}(x),\, V(X)\big)} \qquad \text{(Eq 6)}$$
where $d(\cdot,\cdot)$ denotes the reconstruction error and $V(X)$ is the viewgrid of the fully observed environment. We use a simple pixelwise MSE for $d$ to quantify information. Higher-level losses, e.g., over detected objects, could be employed when available; pixel loss is the most general in that it avoids committing to any particular label space or task. The scores are normalized to lie in $[0, 1]$ across the different views of $X$. The sidekick scores each candidate view. Then, in order to sharpen the effects of the scoring function and avoid favoring redundant observations, the sidekick selects the top $K$ most informative views with greedy non-maximal suppression: it iteratively selects the view with the highest score and suppresses all views in the neighborhood of that view, until $K$ views are selected.

This computation yields a map of favored views for each training environment (see Figure S1, top row). The map defines the sidekick reward provided to the agent for visiting each view in the environment, i.e., the agent receives the score of the view $x_t$ it visits at time $t$. Note that while the sidekick indexes views in absolute angles, the agent does not; all its observations are relative to its initial (random) glimpse direction. This works because the sidekick becomes part of the environment, meaning it attaches rewards to the true views of the environment. In short, the reward-based sidekick shapes rewards based on its exploration with full observability.
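The scoring-and-suppression step can be sketched as follows. This is a minimal NumPy illustration with a simple rectangular neighborhood and hypothetical grid sizes; it is not the released implementation, whose neighborhood shape and view counts are dataset-specific.

```python
import numpy as np

def select_informative_views(scores, k, neighborhood):
    """Greedily pick k views from an (elevations x azimuths) score grid,
    suppressing each pick's neighborhood so later picks are non-redundant.

    scores       : 2D array of per-view scores normalized to [0, 1].
    k            : number of views to select.
    neighborhood : (elevation radius, azimuth radius) to suppress.
    Azimuth wraps around since the viewgrid is panoramic; elevation does not.
    """
    scores = scores.astype(float).copy()
    n_elev, n_azim = scores.shape
    de, da = neighborhood
    selected = []
    for _ in range(k):
        e, a = np.unravel_index(np.argmax(scores), scores.shape)
        selected.append((int(e), int(a)))
        # Suppress the chosen view and its neighbors so the next pick
        # lands outside this region.
        for i in range(max(0, e - de), min(n_elev, e + de + 1)):
            for dj in range(-da, da + 1):
                scores[i, (a + dj) % n_azim] = -np.inf
    return selected
```

For example, on a 4 x 8 grid with a (1, 1) neighborhood, two adjacent high-scoring views cannot both be selected; the second pick jumps to the best view outside the suppressed region.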

Demonstration-based sidekick

As mentioned in Materials and Methods, the demonstration sidekick selects a trajectory of views that are deemed to be most informative about the environment $X$. In contrast to the reward-based sidekick, the informativeness of a view is conditioned on the previously selected views and is quantified using coverage. Here, we describe the formulation of coverage used, the greedy view sampling, and the subsequent supervision provided to the main agent.

Coverage reflects how much information a view contains about all other views in $X$. The coverage score for view $x(X, \theta^{(j)})$ upon selecting view $x(X, \theta^{(i)})$ is:

$$\text{Cov}_{x_i}(x_j) \;=\; 1 - d\big(\hat{x}_j,\, x_j\big), \qquad x_i \equiv x(X, \theta^{(i)}) \qquad \text{(Eq 7)}$$
where $\hat{x}_j$ denotes the inferred view at $\theta^{(j)}$ within the reconstructed viewgrid, as estimated using the same completion network used by the reward-based sidekick, and $d$ is again the MSE loss function computed using the $\ell_2$ distance. Coverage scores are normalized to lie in $[0, 1]$ for each view:

$$\text{Cov}_{x_i}(x_j) \;\leftarrow\; \frac{\text{Cov}_{x_i}(x_j) - \min_{j'} \text{Cov}_{x_i}(x_{j'})}{\max_{j'} \text{Cov}_{x_i}(x_{j'}) - \min_{j'} \text{Cov}_{x_i}(x_{j'})}$$
The goal of the demonstration sidekick is to maximize the coverage objective (Eqn. 8), where $\tau$ denotes the sequence of selected views and the accumulated coverage of each view saturates at 1. In other words, it seeks a sequence of reachable views such that all environment views are explained as well as possible. See Figure S1, bottom panel.

The policy of the sidekick ($\pi_s$) is to greedily select actions based on the coverage objective. The objective encourages the sidekick to select views such that the overall information obtained about each view in $X$ is maximized. At each time step, the sidekick considers all available actions and selects the one that yields the maximum increase in coverage:

$$a_t \;=\; \operatorname*{argmax}_{a \in \mathcal{A}} \; \Big[ C\big(\tau_{t-1} \cup \{x(a)\}\big) - C\big(\tau_{t-1}\big) \Big]$$

where $C(\cdot)$ denotes the saturating coverage objective of Eqn. 8 and $\tau_{t-1}$ is the trajectory selected so far.
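The demonstration sidekick's greedy selection can be sketched as follows. This is a self-contained illustration in which `coverage(i, j)` is a hypothetical stand-in for the normalized coverage score produced by the completion network, and `reachable` models which views can be visited next; neither name comes from the released code.

```python
def greedy_demonstration(all_views, reachable, coverage, start, budget):
    """Greedily build a trajectory maximizing saturating coverage.

    coverage(i, j): how well selecting view i explains view j, in [0, 1].
    reachable(v)  : views reachable from view v in one step.
    The accumulated coverage of each view saturates at 1, so revisiting
    well-explained views yields no marginal gain.
    """
    trajectory = [start]
    # Accumulated (saturated) coverage of every view in the environment.
    cov = {j: min(1.0, coverage(start, j)) for j in all_views}
    for _ in range(budget - 1):
        best_gain, best_view = -1.0, None
        for cand in reachable(trajectory[-1]):
            # Marginal increase in total saturated coverage.
            gain = sum(min(1.0, cov[j] + coverage(cand, j)) - cov[j]
                       for j in all_views)
            if gain > best_gain:
                best_gain, best_view = gain, cand
        trajectory.append(best_view)
        for j in all_views:
            cov[j] = min(1.0, cov[j] + coverage(best_view, j))
    return trajectory
```

Because coverage saturates, already-explained views contribute no gain, which pushes the trajectory toward unexplained parts of the environment.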
In order to aid the main agent’s training, we provide the generated trajectories as supervision. We achieve this through a hybrid training procedure that combines imitation and reinforcement. In particular, for the first $t_s$ time steps of each episode, we let the sidekick drive the action selection and train the policy with the corresponding supervised objective. For the remaining steps up to $T$, we let the agent’s policy drive the action selection and use actor-critic [48] to update the agent’s policy (more on this in the next section). We start with $t_s = T$ and gradually reduce it to 0 during the preparatory sidekick phase (a reduction of 1 after every 50 epochs of training). This step relates to behavior cloning [49, 50, 51], which formulates policy learning as supervised action classification given states. However, unlike typical behavior cloning, the sidekick is not an expert: it solves a simpler version of the task, then backs away as the agent takes over to train with partial observability.
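The handover schedule for how many initial steps the sidekick drives might look like the following sketch (our own variable names; the exact schedule in the released code may differ):

```python
def sidekick_steps(epoch, episode_length, epochs_per_decay=50):
    """Number of initial time steps driven by the sidekick at this epoch.

    Starts at the full episode length and decays by one step every
    `epochs_per_decay` epochs, until the agent drives every step itself.
    """
    return max(0, episode_length - epoch // epochs_per_decay)
```

During an episode, steps before `sidekick_steps(epoch, T)` would use a supervised loss toward the sidekick's chosen action, and the remaining steps would use the actor-critic update.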

Implementation details

Key notations:

  • $p_t$ - proprioception input. It consists of the relative change in elevation and azimuth from time $t-1$ to $t$, together with the absolute elevation at time $t$.

  • $x_t$ - input view. Its dimensionality is $C \times H \times W$, where $C$ is the number of channels, $H$ is the image height, and $W$ is the image width. For SUN360, $C = 3$ (color images, three channels), and for ModelNet, $C = 1$ (grayscale images, one channel).

  • $N$ - number of azimuths in the viewgrid (the value differs between SUN360 and ModelNet).

  • $E$ - number of elevations in the viewgrid (the value differs between SUN360 and ModelNet).

  • $a_t$ - action taken at time $t$.

We build on the publicly available implementation from [24] in PyTorch. We use a one-layer recurrent neural network (RNN) with the hidden state size fixed to 256. We use the Adam optimizer with a weight decay of 1e-6 and otherwise the default settings from PyTorch (see http://pytorch.org/docs/master/optim.html). The loss weights in Eqn. 1 were set based on grid search. All models are trained with three different random seeds and the results are averaged over them. In the case of the demonstration-based sidekick, we decay the number of sidekick-driven steps by 1 after every 50 epochs. For the reward-based sidekick, we decay the rewards by a constant factor after a fixed number of epochs (both selected based on grid search). All models are trained for 1000 epochs. For the reward-based sidekick, the non-maximal suppression neighborhood and the number of selected views differ between SUN360 and ModelNet; they were selected manually upon brief visual inspection of a few reconstructed viewgrids for each dataset to ensure sufficient spread of rewards over the viewgrid. All code and datasets will be made available.

Light source localization example

As we explain in the main paper, light source localization is posed as a four-way classification problem, where each class is identified by the elevation of the light source. The azimuth of the light source is varied uniformly at random within each class. Figure S2 illustrates this process with an example.

Figure S2: Light source localization example: The different classes are shown in the first column, identified by the elevation of the light source position. The azimuth is varied randomly within each class. The camera positions from which views have been rendered are shown in the top row. Each rendering of the table, indexed by the corresponding class and camera position, is shown. For class 1, the light source is placed below the object. This is clearly seen in the renderings: images captured from below or toward the sides of the object are lit, while the top view is completely dark. At the other extreme, for class 4, the light source is placed above the table. This is likewise visible in the renderings: the top view is maximally lit and the bottom view is completely dark.

Faster convergence of Sidekick Policy Learning

As discussed in the main paper, sidekicks lead to faster convergence and better policies by exploiting full observability at training time and guiding the main agent’s training. In Figure S3, we show the faster convergence of validation error over 1000 epochs of training.

Figure S3: Convergence of sidekick policy learning: sidekicks lead to faster convergence and better performance. Results are averaged over three different runs.

Active observation completion for longer target budgets

Figure S4 shows that training for longer glimpse budgets does indeed lead to better performance over time. We train three look-around models for budgets $T = 4, 6, 8$. As we can see, models trained for longer episodes consistently lead to better performance over time, which is a natural outcome since each agent trains for the budget it is given.

Figure S4: Training on different target budgets : Agents trained on longer episodes consistently improve performance over the entire duration of the episode. Agents trained for shorter time-horizons (lookaround (T=4, 6)) naturally tend to saturate in performance earlier, since they target a shorter budget. Agents trained on longer time-horizons lookaround (T=8) converge more slowly initially, but eventually outperform the other agents as they approach their target budget.

Additional examples of observation completion episodes

We show two more sample episodes of observation completion in Figure S5.

Figure S5: Episodes of active observation completion (continued from Figure 4 in the main paper.)

GAN refinement for viewgrid visualization

The goal of our work is to learn useful exploration policies, not to generate photorealistic reconstructions. Nonetheless, it is valuable to provide good visualizations of the agent’s internal state to interpret what is being learned. Therefore, we use recent advances in Generative Adversarial Networks [52] to refine the decoded reconstructions at the final time step, both for our method and the baseline. To compute cleaner-looking reconstructions, we train the agents for a longer glimpse budget. After the final time step, the agent’s reconstruction is refined by a pix2pix [34] network trained to map from the agent’s reconstructions to the ground-truth viewgrids (see Figure S6 for examples).

Additional loss functions for policy learning

As mentioned in the policy learning formulation section in Materials and Methods, the act term additionally includes a loss to update the learned value network (with weights $w$):

$$\mathcal{L}_{\text{value}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \Big( R_t^{(i)} - V_w\big(s_t^{(i)}\big) \Big)^2$$
where $n$ is the number of data samples, $R_t^{(i)}$ is the observed return, and $V_w(s_t^{(i)})$ is the value estimated by the value network at time $t$ for the $i$-th data sample. We additionally include a standard entropy term to promote diversity in action selection and avoid converging too quickly to a suboptimal policy; it penalizes the negative entropy of the policy’s action distribution, and its gradient updates the policy weights.
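The two auxiliary terms can be sketched as follows, using NumPy for clarity rather than the paper's PyTorch implementation; the function names are ours:

```python
import numpy as np

def value_loss(returns, values):
    """Mean squared error between sampled returns and value estimates."""
    returns, values = np.asarray(returns), np.asarray(values)
    return float(np.mean((returns - values) ** 2))

def entropy_loss(logits):
    """Negative policy entropy, averaged over the batch.

    Minimizing this term maximizes entropy, discouraging premature
    convergence to a deterministic, possibly suboptimal policy.
    """
    logits = np.asarray(logits, dtype=float)
    # Numerically stable log-softmax over the action dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    p = np.exp(log_p)
    return float((p * log_p).sum(axis=-1).mean())
```

Because `entropy_loss` returns the negative entropy, adding it to the total loss with a small positive weight rewards more uniform action distributions.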

Figure S6: GAN refinement: Given reconstructed viewgrids (1st column) as domain A and ground truth viewgrids (3rd column) as domain B, we train a pix2pix [34] network that maps from domain A to domain B. The GAN predictions are shown in column 2. As we can see, the GAN is able to use the high level semantic structure learned by the agent and generate high quality textures using prior knowledge.