
EGO-TOPO: Environment Affordances from Egocentric Video

First-person video naturally brings the use of a physical environment to the forefront, since it shows the camera wearer interacting fluidly in a space based on their intentions. However, current methods largely separate the observed actions from the persistent space itself. We introduce a model for environment affordances that is learned directly from egocentric video. The main idea is to gain a human-centric model of a physical space (such as a kitchen) that captures (1) the primary spatial zones of interaction and (2) the likely activities they support. Our approach decomposes a space into a topological map derived from first-person activity, organizing an ego-video into a series of visits to the different zones. Further, we show how to link zones across multiple related environments (e.g., from videos of multiple kitchens) to obtain a consolidated representation of environment functionality. On EPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene affordances and anticipating future actions in long-form video.





Code accompanying EGO-TOPO: Environment Affordances from Egocentric Video (CVPR 2020)

1 Introduction

“The affordances of the environment are what it offers the animal, what it provides or furnishes… It implies the complementarity of the animal and the environment.”—James J. Gibson, 1979

In traditional third-person images and video, we see a moment in time captured intentionally by a photographer who paused to actively record the scene. As a result, scene understanding is largely about answering the who/where/what questions of recognition: what objects are present? is it an indoor/outdoor scene? where is the person and what are they doing? [55, 52, 72, 42, 73, 34, 69, 18].

In contrast, in video captured from a first-person “egocentric” point of view, we see the environment through the eyes of a person passively wearing a camera. The surroundings are tightly linked to the camera-wearer’s ongoing interactions with the environment. As a result, scene understanding in egocentric video also entails how questions: how can one use this space, now and in the future? what areas are most conducive to a given activity?

Despite this link between activities and environments, existing first-person video understanding models typically ignore that the underlying environment is a persistent physical space. They instead treat the video as fixed-sized chunks of frames to be fed to neural networks [46, 5, 14, 65, 48, 41]. Meanwhile, methods that do model the environment via dense geometric reconstructions [63, 20, 57] suffer from SLAM failures—common in quickly moving head-mounted video—and do not discriminate between those 3D structures that are relevant to human actions and those that are not (e.g., a cutting board on the counter versus a random patch of floor). We contend that neither the “pure video” nor the “pure 3D” perspective adequately captures the scene as an action-affording space.

Our goal is to build a model for an environment that captures how people use it. We introduce an approach called Ego-Topo that converts egocentric video into a topological map consisting of activity “zones” and their rough spatial proximity. Taking cues from Gibson’s vision above, each zone is a region of the environment that affords a coherent set of interactions, as opposed to a uniformly shaped region in 3D space. See Fig. 1.

Specifically, from egocentric video of people actively using a space, we link frames across time based on (1) the physical spaces they share and (2) the functions afforded by the zone, regardless of the actual physical location. For example, for the former criterion, a dishwasher loaded at the start of the video is linked to the same dishwasher when unloaded, and to the dishwasher on another day. For the latter, a trash can in one kitchen could link to the garbage disposal in another: though visually distinct, both locations allow for the same action—discarding food. See Fig. 3.

In this way, we re-organize egocentric video into “visits” to known zones, rather than a series of unconnected clips. We show how doing so allows us to reason about first-person behavior (e.g., what are the most likely actions a person will do in the future?) and the environment itself (e.g., what are the possible object interactions that are likely in a particular zone, even if not observed there yet?).

Our Ego-Topo approach offers advantages over the existing models discussed above. Unlike the “pure video” approach, it provides a concise, spatially structured representation of the past. Unlike the “pure 3D” approach, our map is defined organically by people’s use of the space.

We demonstrate our model on two key tasks: inferring likely object interactions in a novel view and anticipating the actions needed to complete a long-term activity in first-person video. These tasks illustrate how a vision system that can successfully reason about scenes’ functionality would contribute to applications in augmented reality (AR) and robotics. For example, an AR system that knows where actions are possible in the environment could interactively guide a person through a tutorial; a mobile robot able to learn from video how people use a zone would be primed to act without extensive exploration.

On two challenging egocentric datasets, EPIC and EGTEA+, we show the value of modeling the environment explicitly for egocentric video understanding tasks, leading to more robust scene affordance models, and improving over state-of-the-art long range action anticipation models.

2 Related Work

Egocentric video

Whereas the camera is a bystander in traditional third-person vision, in first-person or egocentric vision, the camera is worn by a person interacting with the surroundings firsthand. This special viewpoint offers an array of interesting challenges, such as detecting gaze [40, 29], monitoring human-object interactions [4, 6, 51], creating daily life activity summaries [44, 39, 70, 43], or inferring the camera wearer’s identity or body pose [28, 33]. The field is growing quickly in recent years, thanks in part to new ego-video benchmarks [5, 41, 54, 62].

Recent work to recognize or anticipate actions in egocentric video adopts state-of-the-art video models from third-person video, like two-stream networks [41, 46], 3DConv models [5, 53, 48], or recurrent networks [15, 16, 61, 65]. In contrast, our model grounds first-person activity in a persistent topological encoding of the environment. Methods that leverage SLAM together with egocentric video [20, 57, 63] for activity forecasting also allow spatial grounding, though in a metric manner and with the challenges discussed above, which we illustrate in our experiments.

Structured video representations

Recent work explores ways to enrich video representations with more structure. Graph-based methods encode relationships between detected objects: nodes are objects or actors, and edges specify their spatio-temporal layout or semantic relationships (e.g., is-holding) [67, 3, 45, 71]. Architectures for composite activity learn to encode atomic action “primitives” that are aggregated over the full time extent of the video [17, 30, 31], memory-based models record a recurrent network’s state [53], and 3D convnets augmented with long-term feature banks provide temporal context [68]. Unlike any of the above, our approach encodes video in a human-centric manner according to how people use a space. In our graphs, nodes are spatial zones and connectivity depends on a person’s visitation over time.

Mapping and people’s locations

Traditional maps use simultaneous localization and mapping (SLAM) to obtain dense metric measurements, viewing a space in strictly geometric terms. Instead, recent work in embodied visual navigation explores learning-based maps that leverage both visual patterns as well as geometry, with the advantage of extrapolating to novel environments (e.g., [23, 22, 59, 26, 9]). Our approach shares this motivation. However, unlike any of the above, our approach analyzes egocentric video, as opposed to controlling a robotic agent. Furthermore, whereas existing maps are derived from a robot’s exploration, our maps are derived from human behavior.

Work in ubiquitous computing tracks people to see where they spend time in an environment [36, 2], and “personal locations” manually specified by the camera wearer (e.g., my office) can be recognized using supervised learning [12]. In contrast, our approach automatically discovers zones of activity from ego-video, and it links action-related zones across multiple environments.


Affordances

Affordances are often focused on objects, where the goal is to anticipate how an object could be used—for example learning to model object manipulation [1, 4] or how people would grasp an object [37, 51, 10, 6]. People’s body pose can even improve object recognition [7, 19]. The affordances of scenes are less studied. Prior work explores how a third-person view of a scene suggests likely 3D body poses that would occur there [60, 66, 21] and vice versa [11]. More closely related to our work, Action Maps [56] estimate missing activity labels for regular grid cells in an environment, using matrix completion with object and scene similarities as side information. In contrast, our work considers affordances not strongly tied to a single object’s appearance, and we introduce a graph-based video encoding derived from our topological maps that benefits action anticipation.

3 Ego-Topo Approach

We aim to organize egocentric video into a map of activity “zones”—regions that afford a coherent set of interactions—and ground the video as a series of visits to these zones. This representation offers a middle ground between the “pure video” and “pure 3D” approaches discussed above, which either ignore the underlying environment by treating video as fixed-sized chunks of frames, or sacrifice important semantics of human behavior by densely reconstructing the whole environment. Instead, our model reasons jointly about the environment and the agent: which parts of the environment are most relevant for human action, what interactions does each zone afford, and how actions at these zones accomplish a goal.

Our approach is best suited to long term activities in egocentric video where zones are repeatedly visited and used in multiple ways over time. This definition applies broadly to common household and workplace environments (e.g., office, kitchen, retail store, grocery). In this work, we study kitchen environments using two public ego-video datasets (EPIC [5] and EGTEA+ [41]), since cooking activities entail frequent human-object interactions and repeated use of multiple zones. Our approach is not intended for third-person video, short video clips, or video where the environment is constantly changing (e.g., driving down a street).

Our approach works as follows. First, we train a zone localization network to discover commonly visited spaces from egocentric video (Sec. 3.1). Then, given a novel video, we use the network to assign video clips to zones and create a topological map (graph) for the environment. We further link zones based on their function across video instances to create consolidated maps (Sec. 3.2). Finally, we train models that leverage the resulting graphs to uncover environment affordances (Sec. 3.3) and anticipate future actions in long videos (Sec. 3.4).

3.1 Discovering Activity-Centric Zones

We leverage egocentric video of human activity to discover important “zones” for action. At a glance, one might attempt to discover spatial zones based on visual clustering or geometric partitions. However, clustering visual features (e.g., from a pretrained CNN) is insufficient since manipulated objects often feature prominently in ego-video, making the features sensitive to the set of objects present. For example, a sink with a cutting-board being washed vs. the same sink at a different time filled with plates would cluster into different zones. On the other hand, SLAM localization is often unreliable due to the quick motions characteristic of egocentric video. (For example, on the EPIC-Kitchens dataset, only a small fraction of frames can be accurately registered with a state-of-the-art SLAM algorithm [50].) Further, SLAM reconstructs all parts of the environment indiscriminately, without regard for their ties to human action or lack thereof, e.g., giving the same capacity to a kitchen sink area as it gives to a random wall.

Figure 2: Localization network.

Our similarity criterion goes beyond simple visual similarity (A), allowing our network to recognize the stove-top area (despite dissimilar features of prominent objects) with a consistent homography (B), or the seemingly unrelated views at the cupboard that are temporally adjacent (C), while distinguishing between dissimilar views sampled far in time (D).

To address these issues, we propose a zone discovery procedure that links views based on both their visual content and their visitation by the camera wearer. The basis for this procedure is a localization network that estimates the similarity of a pair of video frames, designed as follows.

We sample pairs of frames from videos that are segmented into a series of action clips. Two training frames are similar if (1) they are near in time (separated by fewer than 15 frames) or from the same action clip, or (2) there are at least 10 inlier keypoints consistent with their estimated homography. The former allows us to capture the spatial coherence revealed by the person’s behavior and their tendency to dwell near action-informative zones, while the latter allows us to capture repeated backgrounds despite significant foreground object changes. Dissimilar frames are temporally distant views with low visual feature similarity, or incidental views in which no actions occur. See Fig. 2. We use SuperPoint [8] keypoint descriptors to estimate homographies, and Euclidean distance between pretrained ResNet-152 [25] features for visual similarity.
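The pair-labeling rules above can be sketched as a small decision function. This is an illustrative sketch only: the thresholds match the text, but the helper inputs (inlier counts, feature distances) would come from SuperPoint matching and CNN features in practice, and the `distance_threshold` value is an assumption.

```python
def label_pair(frame_gap, same_clip, homography_inliers,
               feature_distance, distance_threshold=1.0):
    """Return 'similar', 'dissimilar', or None (pair is not used).

    frame_gap          -- |t1 - t2| in frames
    same_clip          -- whether both frames fall in one action clip
    homography_inliers -- keypoints consistent with the estimated homography
    feature_distance   -- Euclidean distance between pretrained CNN features
                          (threshold value here is an assumption)
    """
    # Positive pairs: temporally near, same action clip, or consistent geometry.
    if frame_gap < 15 or same_clip or homography_inliers >= 10:
        return "similar"
    # Negative pairs: temporally distant views with low visual similarity.
    if feature_distance > distance_threshold:
        return "dissimilar"
    # Ambiguous pairs are skipped rather than mislabeled.
    return None
```

Skipping ambiguous pairs keeps the training signal clean: only confidently positive or negative pairs reach the Siamese network.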

The sampled pairs are used to train the localization network L: a Siamese network with a ResNet-18 [25] backbone, followed by a 5-layer multi-layer perceptron (MLP), trained with a cross-entropy loss to predict whether a pair of views is similar or dissimilar. Once trained, the network can predict the probability that two frames in an egocentric video belong to the same zone.

Our localization network draws inspiration from the retrieval network employed in [59] to build maps for embodied agent navigation, and more generally prior work leveraging temporal coherence to self-supervise image similarity [24, 49, 32]. However, whereas the network in [59] is learned from view sequences generated by a randomly navigating agent, ours learns from ego-video taken by a human acting purposefully in an environment rich with object manipulation. In short, nearness in [59] is strictly about physical reachability, whereas nearness in our model is about human interaction in the environment.

1: Input: a sequence of frames f_1, …, f_T of a video
2: Input: trained localization network L (Sec. 3.1)
3: Input: node similarity threshold σ and margin δ
4: Create a graph G with a single node holding the visit {f_1}
5: for t = 2, …, T do
6:     s* ← max_n s(f_t, n) — Equation 2
7:     if s* > σ then
8:         Merge f_t with the highest-scoring node n*
9:         If f_t is consecutive with the last frame in n*: extend the last visit
10:        Else: start a new visit with f_t
11:    else if s* < σ − δ then
12:        Create a new node, add a visit with f_t, and add it to G
13:    end if
14:    Add an edge from the last visited node to the current node
15: end for
16: Output: Ego-Topo topological affordance graph per video
Algorithm 1: Topological affordance graph creation.

3.2 Creating the Topological Affordance Graph

With a trained localization network, we process the stream of frames in a new untrimmed, unlabeled egocentric video to build a topological map of its environment. For a video with frames f_1, …, f_T, we create a graph G = (N, E) with nodes N and edges E. Each node n ∈ N of the graph is a zone and records a collection of “visits”—clips from the egocentric video at that location. For example, a cutting-board counter visited at times t_1 and t_2, for 7 and 38 frames each, will be represented by a node with visits {(t_1, t_1 + 7), (t_2, t_2 + 38)}. See Fig. 1.

We initialize the graph with a single node corresponding to a visit containing just the first frame. For each subsequent frame f_t, we compute an average frame-level similarity score between the frame and each node n using the localization network L from Sec. 3.1:

    s(f_t, n) = (1 / |n|) Σ_{v ∈ n} L(f_t, f_v),

where f_v is the center frame selected from each visit v in node n, and |n| is the number of visits to n. If the network is confident that the frame is similar to one of the nodes (max_n s(f_t, n) > σ), it is merged with the highest-scoring node. Alternately, if the network is confident that this is a new location (max_n s(f_t, n) < σ − δ), a new node is created for that location, and an edge is created from the previously visited node. The frame is ignored if the network is uncertain about it. Algorithm 1 summarizes the construction algorithm. Further implementation details and the values of σ and δ can be found in Supp.
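The construction loop can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: frames are represented by their indices, and `similarity(frame, node)` is a stand-in for the score computed with the trained localization network.

```python
def build_graph(frames, similarity, sigma=0.8, delta=0.3):
    """frames: ordered frame indices; similarity(f, node) returns a score in [0, 1].

    Returns (nodes, edges): nodes[i] is a list of visits (each a list of
    frames); edges connect consecutively visited zones.
    """
    nodes = [[[frames[0]]]]   # one node, holding one visit with the first frame
    edges = set()
    last = 0                  # index of the most recently visited node
    for f in frames[1:]:
        scores = [similarity(f, n) for n in nodes]
        best = max(range(len(nodes)), key=scores.__getitem__)
        if scores[best] > sigma:
            # Confidently a known zone: extend the ongoing visit if the frame
            # is consecutive, otherwise start a new visit at that node.
            if best == last and nodes[best][-1][-1] == f - 1:
                nodes[best][-1].append(f)
            else:
                nodes[best].append([f])
            if best != last:
                edges.add((last, best))
            last = best
        elif scores[best] < sigma - delta:
            # Confidently a new zone: create a node and link from the last one.
            nodes.append([[f]])
            edges.add((last, len(nodes) - 1))
            last = len(nodes) - 1
        # Otherwise the network is uncertain and the frame is ignored.
    return nodes, edges
```

The threshold σ and margin δ play the same roles as in Algorithm 1; their values here are placeholders (the paper's values are in Supp.).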

Figure 3: Cross-map linking. Our linking strategy aligns multiple kitchens (P01, P13 etc.) by their common spaces (e.g., drawers, sinks in rows 1-2) and visually distinct, but functionally similar spaces (e.g., dish racks, crockery cabinets in row 3).

When all frames are processed, we are left with a graph of the environment per video where nodes correspond to zones where actions take place (and a list of visits to them) and the edges capture weak spatial connectivity between zones based on how people traverse them.

Importantly, beyond per-video maps, our approach also creates cross-video and cross-environment maps that link spaces by their function. We show how to link zones across 1) multiple episodes in the same environment and 2) multiple environments with shared functionality. To do this, for each node n we use a pretrained action/object classifier to compute p_n, the distribution of actions and active objects (an active object is an object involved in an interaction) that occur in all visits to that node. We then compute a node-level functional similarity score between nodes n and m, based on the divergence of their distributions:

    sim(n, m) = −½ (KL(p_n ‖ p_m) + KL(p_m ‖ p_n)),

where KL is the KL-divergence. We score pairs of nodes across all kitchens, and perform hierarchical agglomerative clustering to link nodes with high functional similarity. Details about the clustering algorithm are in Supp.
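The similarity score can be computed directly from the per-node distributions. A minimal sketch, assuming each distribution is a list of probabilities over a shared action/object vocabulary (the epsilon smoothing is an assumption to avoid log(0)):

```python
import math

def kl(p, q, eps=1e-8):
    """KL(p || q) for two discrete distributions over the same vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def functional_similarity(p_n, p_m):
    """Symmetrized, negated KL-divergence: higher means more similar usage."""
    return -0.5 * (kl(p_n, p_m) + kl(p_m, p_n))
```

Two nodes used the same way (e.g., a gas stove and a hotplate) yield nearby distributions and thus a score close to zero, while functionally unrelated zones score far below it.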

Figure 4: Our methods for environment affordance learning (L) and long horizon action anticipation (R). Left panel: Our Ego-Topo graph allows multiple affordance labels to be associated with visits to zones, compared to single action labels in annotated video clips. Note that these visits may come from different videos of the same/different kitchen—which provides a more robust view of affordances (cf. Sec. 3.3). Right panel: We use our topological graph to aggregate features for each zone and consolidate information across zones via graph convolutional operations, to create a concise video representation for long term video anticipation (cf. Sec. 3.4).

Linking nodes in this way offers several benefits. First, not all parts of the kitchen are visited in every episode (video). We link zones across different episodes in the same kitchen to create a combined map of that kitchen that accounts for the persistent physical space underlying multiple video encounters. Second, we link zones across kitchens to create a consolidated kitchen map, which reveals how different kitchens relate to each other. For example, a gas stove in one kitchen could link to a hotplate in another, despite being visually dissimilar (see Fig. 3). Being able to draw such parallels is valuable when planning to act in a new unseen environment, as we will demonstrate below.

3.3 Inferring Environment Affordances

Next, we leverage the proposed topological graph to predict a zone’s affordances—all likely interactions possible at that zone. Learning scene affordances is especially important when an agent must use a previously unseen environment to perform a task. Humans seamlessly do this, e.g., cooking a meal in a friend’s house; we are interested in AR systems and robots that learn to do so by watching humans.

We know that egocentric video of people performing daily activities reveals how different parts of the space are used. Indeed, the actions observed per zone partially reveal its affordances. However, since each clip of an ego-video shows a zone being used only for a single interaction, it falls short of capturing all likely interactions at that location.

To overcome this limitation, our key insight is that linking zones within/across environments allows us to extrapolate labels for unseen interactions at seen zones, resulting in a more complete picture of affordances. In other words, having seen an interaction a at a zone z allows us to augment training for the affordance of a at zone z′, if zones z and z′ are functionally linked. See Fig. 4 (Left).

To this end, we treat the affordance learning problem as a multi-label classification task that maps image features x to an A-dimensional binary indicator vector y ∈ {0, 1}^A, where A is the number of possible interactions. We generate training data for this task using the topological affordance graphs defined in Sec. 3.2.

Specifically, we calculate a node-level affordance label y_n for each node n:

    y_n = ∪_{v ∈ n} A_v,

where A_v is the set of all interactions that occur during visit v. Then, from each visit v to a node n, we sample a frame, generate its frame features x_v, and use y_n as the affordance multi-label target. We use a 2-layer MLP for the affordance classifier, followed by a linear classifier and a sigmoid function. The network is trained using a binary cross-entropy loss.
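Building the multi-label target is a simple union over visits. A small sketch (the interaction names and vocabulary are illustrative, not from the datasets):

```python
def node_affordance_target(visits, vocab):
    """visits: one set of interaction labels per visit to the node.
    Returns the binary indicator vector y_n over the interaction vocabulary:
    every frame sampled from any visit to this node inherits this target."""
    seen = set().union(*visits) if visits else set()
    return [1 if a in seen else 0 for a in vocab]
```

This is what distinguishes the method from clip-level supervision: a frame showing the sink being used for washing still carries the label for filling a kettle if that happened at the same zone in another visit.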

At test time, given an image in an environment, this classifier directly predicts its affordance probabilities. See Fig. 4 (Left). Critically, linking frames into zones and linking zones between environments allows us to share labels across instances in a manner that benefits affordance learning, better than models that link data purely based on geometric or visual nearness (cf. Sec. 4.1). Our Ego-Topo graph derives its connectivity naturally by observing human use of related environments.

3.4 Anticipating Future Actions in Long Video

Next, we leverage our topological affordance graphs for long horizon anticipation. In the anticipation task, we see a fraction of a long video (e.g., the first 25%), and from that we must predict what actions will be done in the future. Compared to affordance learning, which benefits from how zones are functionally related to enhance static image understanding, long range action anticipation is a video understanding task that leverages how objects are distributed among zones and how these zones are laid out to anticipate human behavior.

Recent action anticipation work [13, 75, 15, 5, 53, 16, 61] predicts the immediate next action (e.g., in the next second) rather than all future actions, for which an encoding of recent video information is sufficient. For long range anticipation, models need to first understand how much progress has been made on the composite activity so far, and then anticipate what actions need to be done in the future to complete it. For this, a structured representation of all past activity and affordances is essential.

Existing long range video understanding methods [30, 31, 68] build complex models over past clip features to aggregate information from the past, but do not model the environment explicitly, which we hypothesize is important for anticipating actions in long video. Our graphs provide a concise representation of observed activity, grounding frames in the spatial environment. We leverage this grounding to learn trends in interaction sequences.

Given an untrimmed video with N interaction clips, each involving an action with some object, we see the first K clips (our experiments sweep over values of K to test seeing more or less video) and predict the future action labels as an A-dimensional binary vector y, where A is the number of action classes and y_a = 1 if action a occurs in any of the remaining clips K+1, …, N.
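The anticipation target can be built directly from the clip annotations. A minimal sketch (action classes are represented as integer indices here for illustration):

```python
def future_action_target(clip_actions, k, num_actions):
    """clip_actions: the action-class index of each clip, in temporal order.
    k: number of observed clips.  Returns the A-dimensional binary vector
    marking every action that occurs in the unobserved remainder."""
    y = [0] * num_actions
    for a in clip_actions[k:]:   # clips K+1, ..., N
        y[a] = 1
    return y
```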

We generate the corresponding topological graph built up to the first K clips, and extract a feature x_n for each node n using a 2-layer MLP over averaged clip features sampled from visits to that node.

Actions at one node influence future activities in other nodes. To account for this, we enhance node features by integrating neighbor node information from the topological graph using a graph convolutional network (GCN):

    x′_n = ReLU( W x_n + Σ_{m ∈ N(n)} W′ x_m ),

where N(n) are the neighbors of node n, and W, W′ are learnable parameters of the GCN.

The updated GCN representation for each individual node is enriched with global scene context from neighboring nodes, allowing patterns in actions across locations to be learned. For example, vegetables that are taken out of the fridge in the past are likely to be washed in the sink later. The GCN node features are then averaged to derive a representation of the video. This is then fed to a linear classifier followed by a sigmoid to predict future action probabilities, trained using a binary cross-entropy loss.
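The neighbor aggregation and pooling steps can be sketched with NumPy. This is a one-layer illustration under stated assumptions (random weights, a hand-built adjacency matrix); the actual model and its dimensions are in Supp.

```python
import numpy as np

def gcn_layer(x, adj, w_self, w_neigh):
    """One GCN-style update: each node mixes its own features (w_self)
    with the sum of its neighbors' features (w_neigh), then applies ReLU.
    x: (n_nodes, d) node features; adj: (n_nodes, n_nodes) 0/1 adjacency."""
    h = x @ w_self + adj @ x @ w_neigh
    return np.maximum(h, 0.0)

def video_representation(x, adj, w_self, w_neigh):
    """Updated node features are averaged into a single video-level vector,
    which would feed the linear classifier + sigmoid for anticipation."""
    return gcn_layer(x, adj, w_self, w_neigh).mean(axis=0)
```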

At test time, given an untrimmed, unlabeled video showing the onset of a long composite activity, our model can predict the actions that will likely occur in the future to complete it. Fig. 4 (Right) illustrates the task. Further details are in Supp. As we will see in results, grounding ego-video clips in the real environment—rather than treat them as an arbitrary set of frames—provides a stronger video representation for anticipation.

4 Experiments

We evaluate the proposed topological graphs for scene affordance learning and action anticipation in long videos.

Datasets. We use two egocentric video datasets:


  • EGTEA Gaze+ [41] contains videos of 32 subjects following 7 recipes in a single kitchen. Each video captures a complete dish being prepared (e.g., potato salad, pizza), with clips annotated for interactions (e.g., open drawer, cut tomato), spanning 53 objects and 19 actions.

  • EPIC-Kitchens [5] contains videos of daily kitchen activities, and is not limited to a single recipe. It is annotated for interactions spanning 352 objects and 125 actions. Compared to EGTEA+, EPIC is larger, unscripted, and collected across multiple kitchens.

The kitchen environment is an ideal setting for our experiments, and has been the subject of several recent egocentric datasets [5, 41, 38, 64, 58, 74]. Repeated interaction with different parts of the kitchen during complex, multi-step cooking activities is a rich domain for learning affordance and anticipation models.

4.1 Ego-Topo for Environment Affordances

In this section, we evaluate how linking actions in zones and across environments can benefit affordances.

Baselines. We compare the following methods:


  • ClipAction uses clip-level action labels to learn to recognize the afforded actions it has seen at a given location during training.

  • ActionMaps [56] estimates affordances of locations via matrix completion with side-information. It assumes that nearby locations with similar appearance/objects have similar affordances. See Supp. for details.

  • SLAM trains an action affordance classifier with the same architecture as ours, and treats all frames associated with the same grid cell on the ground plane as positives for actions observed at any time in that grid cell. Locations are obtained from monocular SLAM [50], and the cell size is based on the typical scale of an interaction area following prior work [20]. It shares our insight to link actions in the same location, but is limited to a uniformly defined location grid and cannot link different environments.

  • KMeans clusters action clips using their visual features alone. We select as many clusters as there are nodes in our consolidated graph to ensure fair comparison.

  • Ours denotes the three variants from Sec. 3.2, which use maps built from a single video (Ours-S), multiple videos of the same kitchen (Ours-M), and a functionally linked, consolidated map across kitchens (Ours-C).

Note that all methods use the clip-level annotated data, in addition to data from linking actions/spaces. They see the same video frames during training, only they are organized and presented with labels according to the method.

We crowd-source annotations for afforded interactions. Each instance is a view from the environment, paired with all likely interactions at that location regardless of whether the view shows them (e.g., turn-on stove, take/put pan, etc. at a stove). We collect 1,020 instances on EGTEA+ and 1,155 instances on EPIC (see Supp. for details). All methods are evaluated on this test set. We report mean average precision (mAP) over all afforded interactions, and separately for the rare and frequent ones (fewer than 10 and more than 100 training instances, respectively).
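For reference, the per-class average precision behind the mAP numbers can be computed as follows. This is the standard ranking-based AP, sketched here for clarity rather than taken from the paper's evaluation code:

```python
def average_precision(scores, labels):
    """AP for one interaction class: predicted scores and 0/1 ground-truth
    labels per test instance. Precision is accumulated at each positive hit
    as instances are ranked by decreasing score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_ap(per_class):
    """per_class: list of (scores, labels) pairs, one per interaction class."""
    return sum(average_precision(s, l) for s, l in per_class) / len(per_class)
```

Reporting mAP separately over the rare and frequent subsets then just means averaging `average_precision` over the corresponding class lists.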

                      EPIC                  EGTEA+
mAP              All   Freq  Rare      All   Freq  Rare
ClipAction       26.8  49.7  16.1      46.3  58.4  33.1
ActionMaps [56]  21.0  40.8  13.4      43.6  52.9  31.3
SLAM             26.6  48.6  17.6      41.8  49.5  31.8
KMeans           26.7  50.1  17.4      49.3  61.2  35.9
Ours-S           28.6  52.2  19.0      48.9  61.0  35.3
Ours-M           28.7  53.3  18.9      51.6  61.2  37.8
Ours-C           29.4  54.5  19.7      –     –     –
Table 1: Environment affordance prediction. Our method outperforms all other methods. Note that videos in EGTEA+ are from the same kitchen, and do not allow cross-kitchen linking. Values are averaged over 5 runs.

Table 1 summarizes the results. By capturing the persistent environment in our discovered zones, and linking them across environments, our method outperforms all other methods on the affordance prediction task. All models perform better on EGTEA+, which has fewer interaction classes, contains only one kitchen, and has at least 30 training examples per afforded action (compared to EPIC where 10% of the actions have a single annotated clip).

SLAM and ActionMaps [56] rely on monocular SLAM, which introduces certain limitations. See Fig. 5 (Left). A single grid cell in the SLAM map reliably registers only small windows of smooth motion, often capturing only single action clips at each location. In addition, inherent scale ambiguities and uniformly shaped cells can result in incoherent activities placed in the same cell. Note that this limitation stands even if SLAM were perfect. Together, these factors hurt performance on both datasets, more severely affecting EGTEA+ due to the scarcity of SLAM data (only 6% accurately registered). Noisy localizations also affect the kernel computed by ActionMaps, which accounts for physical nearness as well as similarities in object/scene features. In contrast, a zone in our topological affordance graph corresponds to a coherent set of clips at different times, offering a more reliable and diverse set of actions to link, as seen in Fig. 5 (Right).

Clustering using purely visual features in KMeans helps consolidate information in EGTEA+ where all videos are in the same kitchen, but hurts performance where visual features are insufficient to capture coherent zones.

Figure 5: SLAM grid vs graph nodes. The boxes below show frames from video that are linked to grid cells in the SLAM map (Left) and nodes in our topological map (Right). See text.

Linking actions to discovered zones in our topological graph results in consistent improvements on both datasets. Moreover, aligning spaces based on function in the consolidated graph (Ours-C) provides the largest improvement, especially for rare classes that may only be seen tied to a single location.

Fig. 3 and Fig. 5 show the diverse actions captured in each node of our graph. Multiple actions at different times and from different kitchens are linked to the same zone, thus overcoming the sparsity in demonstrations and translating to a strong training signal for our scene affordance model. Fig. 6 shows example affordance predictions.

Figure 6: Predicted affordance scores for graph nodes. Our affordance model applied to node visits reveals zone affordances.

4.2 Ego-Topo for Long Term Action Anticipation

Next we evaluate how the structure of our topological graph yields better video features for long term anticipation.

Baselines. We compare against simple strategies to aggregate clip features, as well as state-of-the-art methods for long range temporal aggregation [17, 30, 31].


  • TrainDist simply outputs the distribution of actions performed in all training videos, to test if a few dominant actions are repeatedly done, regardless of the video.

  • I3D uniformly samples 64 clip features and averages them to generate a video feature.

  • RNN and ActionVLAD [17] model temporal dynamics in video using LSTM [27] layers and non-uniform pooling strategies, respectively.

  • Timeception [30] and VideoGraph [31] build complex temporal models using either multi-scale temporal convolutions or attention mechanisms over learned latent concepts from clip features over large time scales.
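As a concrete reference point for the simplest aggregation baseline, the following is a minimal NumPy sketch of I3D-style pooling, assuming clip features have already been extracted. The function name and shapes are illustrative, not the paper's code.

```python
import numpy as np

def i3d_video_feature(clip_features: np.ndarray, num_samples: int = 64) -> np.ndarray:
    """Uniformly sample `num_samples` clip features across the video and
    average them into a single video-level feature (I3D-baseline style)."""
    n = clip_features.shape[0]
    # Evenly spaced indices over the video; indices repeat if n < num_samples.
    idx = np.linspace(0, n - 1, num_samples).round().astype(int)
    return clip_features[idx].mean(axis=0)
```

Compared with the temporal models above, this pooling discards all ordering information, which is why it serves as the simplest point of comparison.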

                          EPIC                  EGTEA+
mAP                   All   Freq  Rare     All   Freq  Rare
TrainDist             16.5  39.1   5.7     59.1  68.2  35.2
I3D                   32.7  53.3  23.0     72.1  79.3  53.3
RNN                   32.6  52.3  23.3     70.4  76.6  54.3
ActionVLAD [17]       29.8  53.5  18.6     73.3  79.0  58.6
VideoGraph [31]       22.5  49.4  14.0     67.7  77.1  47.2
Timeception [30]      35.6  55.9  26.1     74.1  79.7  59.7
Ours w/o GCN          34.6  55.3  24.9     72.5  79.5  54.2
Ours                  38.0  56.9  29.2     73.5  80.7  54.7
Table 2: Long term anticipation results. Our method outperforms all others on EPIC, and is best for many-shot classes on the simpler EGTEA+. Values are averaged over 5 runs.

While all the compared methods model temporal information, none explicitly model the persistent environment in video as we propose.

For evaluation, we use the first K% of each untrimmed video as input, and predict all actions in the remaining video. We sweep values of K to represent different anticipation horizons. We report mAP over all action classes, as well as in low-shot (rare) and many-shot (freq) settings.
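This protocol can be sketched compactly, assuming multi-label ground truth and model scores per video. The low-shot cutoff below is an assumption for illustration; the paper does not state its exact split, and the function names are hypothetical.

```python
import numpy as np

def average_precision(labels: np.ndarray, scores: np.ndarray) -> float:
    """AP for one class: mean precision at each positive, scores descending."""
    order = np.argsort(-scores)
    labels = labels[order]
    hits = np.cumsum(labels)
    prec = hits / (np.arange(len(labels)) + 1)
    return float(prec[labels == 1].mean()) if labels.sum() else 0.0

def map_report(Y: np.ndarray, S: np.ndarray, train_counts: np.ndarray,
               rare_max: int = 10) -> dict:
    """mAP over all classes, plus rare/freq subsets split by training count.
    `rare_max` is an assumed cutoff for low-shot classes."""
    aps = np.array([average_precision(Y[:, c], S[:, c]) for c in range(Y.shape[1])])
    rare = train_counts <= rare_max
    return {"all": aps.mean(), "rare": aps[rare].mean(), "freq": aps[~rare].mean()}
```

The same report structure covers both the "all" column and the rare/freq columns of the results tables.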

Table 2 shows the results averaged over all values of K, and Fig. 7 plots results vs. K. Our model outperforms all other methods on EPIC, improving over the next strongest baseline by 2.4% mAP on all 125 action classes. On EGTEA+, our model matches the performance of models with complicated temporal aggregation schemes, and achieves the highest results for many-shot classes.

EGTEA+ has a less diverse action vocabulary with a fixed set of recipes. TrainDist, which simply outputs a fixed distribution of actions for every video, performs relatively well (59% mAP) compared to its counterpart on EPIC (only 16% mAP), highlighting that there is a core set of repeatedly performed actions in the dataset.

Among the methods that employ complex temporal aggregation schemes, Timeception improves over I3D on both datasets, though our method outperforms it on the larger EPIC dataset. Simple aggregation of node level information (Ours w/o GCN) still consistently outperforms most baselines. However, including the graph convolution operations is essential to outperform more complex models, which shows the benefit of encoding the physical layout and interactions between zones in our topological map.
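The graph convolution referred to here can be seen in a minimal form. Below is a sketch of a single normalized graph-convolution layer in the style of Kipf and Welling [35]; the function and shapes are illustrative, not our exact architecture.

```python
import numpy as np

def gcn_layer(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: symmetrically normalized adjacency with
    self-loops, followed by a linear map and ReLU. A is the zone adjacency,
    X the node features, W a learned weight matrix."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```

Each layer mixes a zone's features with those of physically adjacent zones, which is how the layout of the environment enters the video representation.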

Figure 7: Anticipation performance over varying prediction horizons. The first K% of the video is observed, then the actions in the remaining portion must be anticipated. Our model outperforms all methods for all anticipation horizons on EPIC, and has higher relative improvements when predicting further into the future.

Fig. 7 breaks down performance by the anticipation horizon K. On EPIC, our model is uniformly better across all prediction horizons, and it excels at predicting actions further into the future. This highlights the benefit of our environment-aware video representation. On EGTEA+, our model outperforms all other models except ActionVLAD in short-range settings, but performs slightly worse at K=50%. On the other hand, ActionVLAD falls short of all other methods on the more challenging EPIC data.

Figure 8: t-SNE visualization on EPIC. (a) Clip-level features from I3D; Node features for Ours (b) without and (c) with GCN. Colors correspond to different kitchens.

Fig. 8 shows a t-SNE [47] visualization of the learned feature spaces. Though trained only for anticipation, features from our method discriminate between different kitchens in EPIC without explicit kitchen labels, indicating that they encode useful environment-aware information.

5 Conclusion

We proposed a method to produce a topological affordance graph from egocentric video of human activity, highlighting commonly used zones that afford coherent actions across multiple kitchen environments. Our experiments on scene affordance learning and long range anticipation demonstrate its viability as an enhanced representation of the environment gained from egocentric video. Future work can leverage the environment affordances to guide users in unfamiliar spaces with AR or allow robots to explore a new space through the lens of how it is likely used.

Acknowledgments: We would like to thank Jiaqi Guan for their help setting up SLAM on EPIC. We would also like to thank Noureldien Hussein for their help with the Timeception [30] and Videograph [31] models.


  • [1] J. Alayrac, J. Sivic, I. Laptev, and S. Lacoste-Julien (2017) Joint discovery of object states and manipulation actions. ICCV. Cited by: §2.
  • [2] D. Ashbrook and T. Starner (2002) Learning significant locations and predicting user movement with gps. In ISWC, Cited by: §2.
  • [3] F. Baradel, N. Neverova, C. Wolf, J. Mille, and G. Mori (2018) Object level visual reasoning in videos. In ECCV, Cited by: §2.
  • [4] M. Cai, K. Kitani, and Y. Sato (2016) Understanding hand-object manipulation with grasp types and object attributes. In RSS, Cited by: §2, §2.
  • [5] D. Damen, H. Doughty, G. Maria Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, et al. (2018) Scaling egocentric vision: the epic-kitchens dataset. In ECCV, Cited by: §1, §2, §2, §3.4, §3, 2nd item, §4.
  • [6] D. Damen, T. Leelasawassuk, O. Haines, A. Calway, and W. W. Mayol-Cuevas (2014) You-do, i-learn: discovering task relevant objects and their modes of interaction from multi-user egocentric video.. In BMVC, Cited by: §2, §2.
  • [7] V. Delaitre, D. F. Fouhey, I. Laptev, J. Sivic, A. Gupta, and A. A. Efros (2012) Scene semantics from long-term observation of people. In ECCV, Cited by: §2.
  • [8] D. DeTone, T. Malisiewicz, and A. Rabinovich (2018) Superpoint: self-supervised interest point detection and description. In CVPR Workshop, Cited by: §3.1, §S4.
  • [9] K. Fang, A. Toshev, L. Fei-Fei, and S. Savarese (2019) Scene memory transformer for embodied agents in long-horizon tasks. In CVPR, Cited by: §2.
  • [10] K. Fang, T. Wu, D. Yang, S. Savarese, and J. J. Lim (2018) Demo2Vec: reasoning object affordances from online videos. In CVPR, Cited by: §2.
  • [11] D. F. Fouhey, V. Delaitre, A. Gupta, A. A. Efros, I. Laptev, and J. Sivic (2014) People watching: human actions as a cue for single view geometry. IJCV. Cited by: §2.
  • [12] A. Furnari, S. Battiato, and G. M. Farinella (2018) Personal-location-based temporal segmentation of egocentric videos for lifelogging applications. JVCIR. Cited by: §2.
  • [13] A. Furnari, S. Battiato, K. Grauman, and G. M. Farinella (2017) Next-active-object prediction from egocentric videos. JVCI. Cited by: §3.4.
  • [14] A. Furnari and G. M. Farinella (2019) What would you expect? anticipating egocentric actions with rolling-unrolling lstms and modality attention. In ICCV, Cited by: §1.
  • [15] A. Furnari and G. M. Farinella (2019) What would you expect? anticipating egocentric actions with rolling-unrolling lstms and modality attention. ICCV. Cited by: §2, §3.4.
  • [16] J. Gao, Z. Yang, and R. Nevatia (2017) Red: reinforced encoder-decoder networks for action anticipation. BMVC. Cited by: §2, §3.4.
  • [17] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell (2017) Actionvlad: learning spatio-temporal aggregation for action classification. In CVPR, Cited by: §2, 3rd item, §4.2, Table 2.
  • [18] G. Gkioxari, R. Girshick, P. Dollár, and K. He (2018) Detecting and recognizing human-object interactions. In CVPR, Cited by: §1.
  • [19] H. Grabner, J. Gall, and L. Van Gool (2011) What makes a chair a chair?. In CVPR, Cited by: §2.
  • [20] J. Guan, Y. Yuan, K. M. Kitani, and N. Rhinehart (2019) Generative hybrid representations for activity forecasting with no-regret learning. arXiv preprint arXiv:1904.06250. Cited by: §1, §2, 3rd item, §S7.
  • [21] A. Gupta, S. Satkin, A. A. Efros, and M. Hebert (2011) From 3d scene geometry to human workspace. In CVPR, Cited by: §2.
  • [22] S. Gupta, J. Davidson, S. Levine, R. Sukthankar, and J. Malik (2017) Cognitive mapping and planning for visual navigation. In CVPR, Cited by: §2.
  • [23] S. Gupta, D. Fouhey, S. Levine, and J. Malik (2017) Unifying map and landmark based representations for visual navigation. arXiv preprint arXiv:1712.08125. Cited by: §2.
  • [24] R. Hadsell, S. Chopra, and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In CVPR, Cited by: §3.1.
  • [25] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §3.1, §3.1.
  • [26] J. F. Henriques and A. Vedaldi (2018) Mapnet: an allocentric spatial memory for mapping environments. In CVPR, Cited by: §2.
  • [27] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation. Cited by: 3rd item.
  • [28] Y. Hoshen and S. Peleg (2016) An egocentric look at video photographer identity. In CVPR, Cited by: §2.
  • [29] Y. Huang, M. Cai, Z. Li, and Y. Sato (2018) Predicting gaze in egocentric video by learning task-dependent attention transition. In ECCV, Cited by: §2.
  • [30] N. Hussein, E. Gavves, and A. W. Smeulders (2019) Timeception for complex action recognition. In CVPR, Cited by: §2, §3.4, 4th item, §4.2, Table 2, §5.
  • [31] N. Hussein, E. Gavves, and A. W. Smeulders (2019) VideoGraph: recognizing minutes-long human activities in videos. ICCV Workshop. Cited by: §2, §3.4, 4th item, §4.2, Table 2, §5.
  • [32] D. Jayaraman and K. Grauman (2016) Slow and steady feature analysis: higher order temporal coherence in video. In CVPR, Cited by: §3.1.
  • [33] H. Jiang and K. Grauman (2017) Seeing invisible poses: estimating 3d body pose from egocentric video. In CVPR, Cited by: §2.
  • [34] J. Johnson, R. Krishna, M. Stark, L. Li, D. Shamma, M. Bernstein, and L. Fei-Fei (2015) Image retrieval using scene graphs. In CVPR, Cited by: §1.
  • [35] T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. ICLR. Cited by: §3.4.
  • [36] K. Koile, K. Tollmar, D. Demirdjian, H. Shrobe, and T. Darrell (2003) Activity zones for context-aware computing. In UbiComp, Cited by: §2.
  • [37] H. S. Koppula and A. Saxena (2014) Physically grounded spatio-temporal object affordances. In ECCV, Cited by: §2.
  • [38] H. Kuehne, A. Arslan, and T. Serre (2014) The language of actions: recovering the syntax and semantics of goal-directed human activities. In CVPR, Cited by: §4.
  • [39] Y. J. Lee and K. Grauman (2015) Predicting important objects for egocentric video summarization. IJCV. Cited by: §2.
  • [40] Y. Li, A. Fathi, and J. M. Rehg (2013) Learning to predict gaze in egocentric video. In ICCV, Cited by: §2.
  • [41] Y. Li, M. Liu, and J. M. Rehg (2018) In the eye of beholder: joint learning of gaze and actions in first person video. In ECCV, Cited by: §1, §2, §2, §3, 1st item, §4.
  • [42] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In ECCV, Cited by: §1.
  • [43] C. Lu, R. Liao, and J. Jia (2015) Personal object discovery in first-person videos. TIP. Cited by: §2.
  • [44] Z. Lu and K. Grauman (2013) Story-driven summarization for egocentric video. In CVPR, Cited by: §2.
  • [45] C. Ma, A. Kadav, I. Melvin, Z. Kira, G. AlRegib, and H. Peter Graf (2018) Attend and interact: higher-order object interactions for video understanding. In CVPR, Cited by: §2.
  • [46] M. Ma, H. Fan, and K. M. Kitani (2016) Going deeper into first-person activity recognition. In CVPR, Cited by: §1, §2.
  • [47] L. v. d. Maaten and G. Hinton (2008) Visualizing data using t-SNE. JMLR. Cited by: §4.2.
  • [48] A. Miech, I. Laptev, J. Sivic, H. Wang, L. Torresani, and D. Tran (2019) Leveraging the present to anticipate the future in videos. In CVPR Workshop, Cited by: §1, §2.
  • [49] H. Mobahi, R. Collobert, and J. Weston (2009) Deep learning from temporal coherence in video. In ICML, Cited by: §3.1.
  • [50] R. Mur-Artal and J. D. Tardós (2015) ORB-SLAM: a versatile and accurate monocular SLAM system. Transactions on Robotics. Cited by: 3rd item, §S7, footnote 1.
  • [51] T. Nagarajan, C. Feichtenhofer, and K. Grauman (2019) Grounded human-object interaction hotspots from video. ICCV. Cited by: §2, §2.
  • [52] M. Pandey and S. Lazebnik (2011) Scene recognition and weakly supervised object localization with deformable part-based models. In ICCV, Cited by: §1.
  • [53] F. Pirri, L. Mauro, E. Alati, V. Ntouskos, M. Izadpanahkakhk, and E. Omrani (2019) Anticipation and next action forecasting in video: an end-to-end model with memory. arXiv preprint arXiv:1901.03728. Cited by: §2, §2, §3.4.
  • [54] H. Pirsiavash and D. Ramanan (2012) Detecting activities of daily living in first-person camera views. In CVPR, Cited by: §2.
  • [55] A. Quattoni and A. Torralba (2009) Recognizing indoor scenes. In CVPR, Cited by: §1.
  • [56] N. Rhinehart and K. M. Kitani (2016) Learning action maps of large environments via first-person vision. In CVPR, Cited by: §2, 2nd item, §4.1, Table 1, §S6.
  • [57] N. Rhinehart and K. M. Kitani (2017) First-person activity forecasting with online inverse reinforcement learning. In ICCV, Cited by: §1, §2.
  • [58] M. Rohrbach, S. Amin, M. Andriluka, and B. Schiele (2012) A database for fine grained activity detection of cooking activities. In CVPR, Cited by: §4.
  • [59] N. Savinov, A. Dosovitskiy, and V. Koltun (2018) Semi-parametric topological memory for navigation. ICLR. Cited by: §2, §3.1.
  • [60] M. Savva, A. X. Chang, P. Hanrahan, M. Fisher, and M. Nießner (2014) SceneGrok: inferring action maps in 3d environments. TOG. Cited by: §2.
  • [61] Y. Shi, B. Fernando, and R. Hartley (2018) Action anticipation with rbf kernelized feature mapping rnn. In ECCV, Cited by: §2, §3.4.
  • [62] G. A. Sigurdsson, A. Gupta, C. Schmid, A. Farhadi, and K. Alahari (2018) Charades-ego: a large-scale dataset of paired third and first person videos. arXiv preprint arXiv:1804.09626. Cited by: §2.
  • [63] H. Soo Park, J. Hwang, Y. Niu, and J. Shi (2016) Egocentric future localization. In CVPR, Cited by: §1, §2.
  • [64] S. Stein and S. J. McKenna (2013) Combining embedded accelerometers with computer vision for recognizing food preparation activities. In UbiComp, Cited by: §4.
  • [65] S. Sudhakaran, S. Escalera, and O. Lanz (2019) Lsta: long short-term attention for egocentric action recognition. In CVPR, Cited by: §1, §2.
  • [66] X. Wang, R. Girdhar, and A. Gupta (2017) Binge watching: scaling affordance learning from sitcoms. In CVPR, Cited by: §2.
  • [67] X. Wang and A. Gupta (2018) Videos as space-time region graphs. In ECCV, Cited by: §2.
  • [68] C. Wu, C. Feichtenhofer, H. Fan, K. He, P. Krahenbuhl, and R. Girshick (2019) Long-term feature banks for detailed video understanding. In CVPR, Cited by: §2, §3.4.
  • [69] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei (2017) Scene graph generation by iterative message passing. In CVPR, Cited by: §1.
  • [70] R. Yonetani, K. M. Kitani, and Y. Sato (2016) Visual motif discovery via first-person vision. In ECCV, Cited by: §2.
  • [71] Y. Zhang, P. Tokmakov, M. Hebert, and C. Schmid (2019) A structured model for action detection. In CVPR, Cited by: §2.
  • [72] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2017) Places: a 10 million image database for scene recognition. TPAMI. Cited by: §1.
  • [73] B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba (2019) Semantic understanding of scenes through the ade20k dataset. IJCV. Cited by: §1.
  • [74] L. Zhou, C. Xu, and J. J. Corso (2018) Towards automatic learning of procedures from web instructional videos. In AAAI, Cited by: §4.
  • [75] Y. Zhou and T. L. Berg (2015) Temporal perception and prediction in ego-centric video. In ICCV, Cited by: §3.4.

Supplementary Material

This section contains supplementary material to support the main paper text. The contents include:


  • S1) A video demonstrating our Ego-Topo graph construction process following Algorithm 1, and our scene affordance results from Sec. 4.1 in the main paper.

  • S2) Setup and details for crowdsourced affordance annotation on EPIC and EGTEA+.

  • S3) Class-level breakdown of affordance prediction results from Table 1.

  • S4) Additional implementation details for the graph construction in Sec. 3.1.

  • S5) Implementation details for our models presented in Sec. 3.3 and Sec. 3.4.

  • S6) Implementation details for ActionMaps baseline from Sec. 4.1 (Baselines).

  • S7) Implementation details for SLAM from Sec. 4.1 (Baselines).

  • S8) Additional affordance prediction results to supplement Fig. 6.

S1 Ego-Topo demonstration video

We show examples of our graph construction process over time from egocentric videos following Algorithm 1 in the main paper. The end result is a topological map of the environment where nodes represent primary spatial zones of interaction, and edges represent commonly traversed paths between them. Further, the video demonstrates our affordance prediction results from Sec. 4.1 over the constructed topological graph. The video and interface to explore the topological graphs can be found on the project page.

Fig. S2 shows static examples of fully constructed topological maps from a single egocentric video from the test sets of EPIC and EGTEA+. Graphs built from long videos with repeated visits to nodes (P01_18, P22_07) result in a more complete picture of the environment. Short videos where only a few zones are visited (P31_14) can be linked to other graphs of the same kitchen (Sec. 3.2). The last panel shows a result on EGTEA+.

S2 Crowdsourced affordance annotations

As mentioned in Sec. 4.1, we collect annotations for afforded interactions on EPIC and EGTEA+ video frames to evaluate our affordance learning methods. We present annotators with a single frame (the center frame) from a video clip and ask them to select all likely interactions that occur at the location shown in the clip. Note that these annotations are used exclusively to evaluate affordance models; the models themselves are trained using single-clip interaction labels (see Sec. 3.3).

On EPIC, we select 120 interactions (verb-noun pairs) over the 15 most frequent verbs and for common objects that afford multiple interactions. For EGTEA+, we select all 75 interactions provided by the dataset. A list of all these interactions is in Table S1. Each image is labeled by 5 distinct annotators, and only labels that 3 or more annotators agree on are retained. This results in 1,020 images for EGTEA+ and 1,155 images for EPIC. Our annotation interface is shown in Fig. S1 (top panel), and examples of resulting annotations are shown in Fig. S1 (bottom panel).
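The retention rule above can be written compactly, assuming each frame's annotations arrive as one label set per worker. The helper name is illustrative, not from our annotation pipeline.

```python
from collections import Counter

def consensus_labels(worker_labels, min_agree: int = 3):
    """Keep only the interaction labels that at least `min_agree` of the
    annotators selected for a given frame. `worker_labels` is a list of
    per-worker label sets."""
    counts = Counter(label for worker in worker_labels for label in worker)
    return sorted(label for label, c in counts.items() if c >= min_agree)
```

With 5 workers per image and a threshold of 3, this implements the "3 or more annotators agree" filter used to build the evaluation set.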

Figure S1: Crowdsourcing affordance annotations. (Top panel) Affordance annotation interface. Users are asked to identify all likely interactions at the given location. 6 out of 15 afforded actions are shown here. (Bottom panel) Example affordance annotations by Mechanical Turk annotators. Only annotations where 3+ workers agree are retained.
EPIC put/take: pan, spoon, lid, board:chopping, bag, oil, salt, towel:kitchen, scissors, butter; open/close: tap, cupboard, fridge, lid, bin, salt, kettle, milk, dishwasher, ketchup; wash: plate, spoon, pot, sponge, hob, microwave, oven, scissors, mushroom; cut: tomato, pepper, chicken, package, cucumber, chilli, ginger, sandwich, cake; mix: pan, onion, spatula, salt, egg, salad, coffee, stock; pour: pan:dust, onion, water, kettle, milk, rice, egg, coffee, liquid:washing, beer; throw: onion, bag, bottle, tomato, box, coffee, towel:kitchen, paper, napkin; dry: pan, plate, knife, lid, glass, fork, container, hob, maker:coffee; turn-on/off: kettle, oven, machine:washing, light, maker:coffee, processor:food, switch, candle; turn: pan, meat, kettle, hob, filter, sausage; shake: pan, hand, pot, glass, bag, filter, jar, towel; peel: lid, potato, carrot, peach, avocado, melon; squeeze: sponge, tomato, liquid:washing, lemon, lime, cream; press: bottle, garlic, dough, switch, button; fill: pan, glass, cup, bin, bottle, kettle, squash
EGTEA+ inspect/read: recipe; open: fridge, cabinet, condiment_container, drawer, fridge_drawer, bread_container, dishwasher, cheese_container, oil_container; cut: tomato, cucumber, carrot, onion, bell_pepper, lettuce, olive; turn-on: faucet; put: eating_utensil, tomato, condiment_container, cucumber, onion, plate, bowl, trash, bell_pepper, cooking_utensil, paper_towel, bread, pan, lettuce, pot, seasoning_container, cup, bread_container, cutting_board, sponge, cheese_container, oil_container, tomato_container, cheese, pasta_container, grocery_bag, egg; operate: stove, microwave; move-around: eating_utensil, bowl, bacon, pan, patty, pot; wash: eating_utensil, bowl, pan, pot, hand, cutting_board, strainer; spread: condiment; divide/pull-apart: onion, paper_towel, lettuce; clean/wipe: counter; mix: mixture, pasta, egg; pour: condiment, oil, seasoning, water; compress: sandwich; crack: egg; squeeze: washing_liquid
Table S1: List of afforded interactions annotated for EPIC and EGTEA+.

S3 Average precision per class for affordances

As noted in our experiments in Sec. 4.1, our method performs better on low-shot classes. Fig. S3 shows a class-wise breakdown of improvements achieved by our model over the ClipAction model on the scene affordance task. Interactions involving objects that are typically tied to a single physical location, highlighted in red (e.g., fridges, stoves, taps), are easy to predict and do not improve much. Our method works especially well for interaction classes that occur at multiple locations (e.g., put/take spoons/butter, pour rice/egg), which are linked in our topological graph.

S4 Additional implementation details for Ego-Topo graph creation

We provide additional implementation details for our topological graph construction procedure from Sec. 3.1 and Sec. 3.2 in the main paper.

Homography estimation details (Sec. 3.1). We generate SuperPoint keypoints [8] using the pretrained model provided by the authors. For each pair of frames, we calculate the homography using 4 random points, and use RANSAC to maximize the number of inliers. We use inlier count as a measure of similarity.
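The frame-pair similarity described above can be sketched as follows, with a plain-NumPy DLT-plus-RANSAC loop standing in for a library homography estimator. The 4-point sampling and inlier-count score follow the text; function names, iteration count, and the pixel threshold are illustrative assumptions.

```python
import numpy as np

def fit_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Direct linear transform (DLT) homography from 4+ correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)        # null vector of the DLT system
    return H / H[2, 2]

def inlier_similarity(src, dst, n_iters: int = 200,
                      thresh: float = 3.0, seed: int = 0) -> int:
    """RANSAC over random 4-point homographies; the maximum inlier count
    serves as the similarity score between a pair of frames."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best = 0
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            pts = proj[:, :2] / proj[:, 2:3]    # dehomogenize
            err = np.linalg.norm(pts - dst, axis=1)
        best = max(best, int(np.sum(err < thresh)))
    return best
```

A high inlier count indicates the two frames view the same planar-ish zone, which is the signal Algorithm 1 thresholds when deciding whether a frame belongs to an existing node.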

Similarity threshold and margin values in Algorithm 1. We fix the similarity threshold high enough to ensure that only highly confident views are included in the graph, and select a large margin to make sure that irrelevant views are readily ignored.

Node linking details (Sec. 3.2). We use hierarchical agglomerative clustering to link nodes across different environments based on functional similarity. We set the similarity threshold below which nodes will not be linked to 40% of the average pairwise similarity over all node pairs. We found that threshold values in this range (40-60%) produced a similar number of clusters, while values beyond it resulted in too few nodes being linked, or all nodes collapsing into a single node.
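A minimal stand-in for this linking step is a greedy average-linkage agglomeration over a node-similarity matrix. Our implementation uses standard hierarchical agglomerative clustering; the helper below is an illustrative sketch with hypothetical names.

```python
import numpy as np

def link_nodes(sim: np.ndarray, frac: float = 0.4):
    """Greedily merge the most functionally similar cluster pair (average
    linkage) until the best pairwise similarity falls below `frac` times
    the mean pairwise node similarity."""
    n = sim.shape[0]
    thresh = frac * sim[np.triu_indices(n, k=1)].mean()
    clusters = [[i] for i in range(n)]

    def avg_sim(a, b):
        return float(np.mean([sim[i, j] for i in a for j in b]))

    while len(clusters) > 1:
        pairs = [(avg_sim(a, b), ia, ib)
                 for ia, a in enumerate(clusters)
                 for ib, b in enumerate(clusters) if ia < ib]
        best, ia, ib = max(pairs)
        if best < thresh:
            break                               # nothing similar enough left
        clusters[ia] = clusters[ia] + clusters[ib]
        del clusters[ib]
    return clusters
```

Each surviving cluster corresponds to one consolidated zone in the cross-kitchen graph; nodes whose similarity to everything else stays below the threshold remain singletons.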

Other details. We subsample all videos to 6 fps. To calculate the scores in Equation 2, we average over a window of 9 frames around the current frame, and we uniformly sample a set of 20 frames for each visit for robust score estimates.

Figure S2: Ego-Topo graphs constructed directly from egocentric video. Each panel shows the output of Algorithm 1 for videos in EPIC (panels 1-3) and EGTEA+ (panel 4). Connectivity represents frequent paths taken by humans while using the environment. Edge thickness represents how frequently they are traversed.
Figure S3: Class-wise breakdown of average precision for affordance prediction on EPIC. Our method outperforms the ClipAction baseline on the majority of classes. Single clip labels are sufficient for interactions that are strongly tied to a single physical location (red), whereas our method works particularly well for classes with objects that can be interacted with at multiple locations.

S5 Training details for affordance and long term anticipation experiments

We next provide additional implementation and training details for our experiments in Sec. 4 of the main paper.

Affordance learning experiments in Sec. 4.1.

For all models, we use ImageNet-pretrained ResNet-152 features as frame feature inputs. As mentioned in Sec. 3.3, we use binary cross entropy (BCE) as our loss function. For original clips labeled with a single action label, we evaluate BCE only for the positive class, and mask out the loss contributions of all other classes. Adam with learning rate 1e-4, weight decay 1e-6, and batch size 256 is used to optimize the model parameters. All models are trained for 20 epochs, and the learning rate is annealed once to 1e-5 after 15 epochs.
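The masked positive-only loss can be sketched in NumPy. This is a sketch of the single-label case described above, not our training code, and the function name is hypothetical.

```python
import numpy as np

def masked_bce(scores: np.ndarray, label_idx, eps: float = 1e-7) -> float:
    """BCE evaluated only on the single annotated positive class of each
    clip; contributions from all other (unknown) classes are masked out."""
    p = 1.0 / (1.0 + np.exp(-scores))          # per-class sigmoid
    mask = np.zeros_like(scores)
    mask[np.arange(len(label_idx)), label_idx] = 1.0
    losses = -(mask * np.log(p + eps))         # only the positive term survives
    return float(losses.sum() / mask.sum())
```

Masking the negatives reflects the fact that a single action label per clip says nothing about which other interactions the same zone might afford.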

Long term action anticipation experiments in Sec. 4.2. We pretrain an I3D model with a ResNet-50 backbone on the original clip-level action recognition task for both EPIC-Kitchens and EGTEA+. We then extract features from the pretrained I3D model for each set of 64 frames as the clip-level features. These features are used for all models in our long-term anticipation experiments.

Among the baselines, we implement TrainDist, I3D, RNN, and ActionVLAD. For Timeception, we import the module released by the authors, and for VideoGraph, we directly use the authors' implementation with our features as input.

For EPIC, all models are trained for 100 epochs with the learning rate starting from 1e-3 and decreased by a factor of 0.1 after 80 epochs. We use Adam as the optimization method with weight decay 1e-5 and batch size 256. For the smaller EGTEA+ dataset, we follow the same settings, except we train for 50 epochs.

S6 ActionMaps implementation details

For the ActionMaps method, we follow Rhinehart and Kitani [56], making a few necessary modifications for our setting. We use cosine similarity between pretrained ResNet-152 features to measure semantic similarity between locations as side information, instead of object and scene classifier scores, to be consistent with the other evaluated methods. We use a latent dimension of 256 for the matrix factorization and tune the weighting term of the RWNMF optimization objective in [56]. We use location information in the similarity kernel only when it is available, falling back to feature similarity alone when it is not (due to SLAM failures). We use this baseline in our experiments in Sec. 4.1.

S7 SLAM implementation details

We generate monocular SLAM trajectories for egocentric videos using the code and protocol from [20]. Specifically, we use ORB-SLAM2 [50] to extract trajectories for the full video, and drop timesteps where tracking is unreliable or lost. We scale all trajectories by the maximum movement distance for each kitchen, so that (x, y) coordinates are bounded in [0, 1]. We create a uniform grid of squares, each with edge length 0.2, and use this grid to accumulate trajectories for the SLAM baseline and to construct the ActionMaps matrix in our experiments in Sec. 4.1. We use the same process for EPIC and EGTEA+, with camera parameters from the dataset authors.
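The scaling and binning step can be sketched as follows. Normalizing per trajectory here is a simplification of the per-kitchen scaling described above, and the function name is illustrative.

```python
import numpy as np

def grid_cells(traj, cell: float = 0.2):
    """Shift a SLAM (x, y) trajectory to the origin, scale it into [0, 1]
    by its maximum extent, then bin each position into a uniform grid of
    `cell`-length squares, returning one (col, row) cell per timestep."""
    traj = np.asarray(traj, dtype=float)
    traj = traj - traj.min(axis=0)
    scale = max(traj.max(), 1e-8)              # maximum movement distance
    traj = traj / scale
    n_cells = int(round(1.0 / cell))
    cells = np.minimum((traj // cell).astype(int), n_cells - 1)
    return [tuple(c) for c in cells]
```

Each distinct cell then plays the role of a "zone" for the SLAM baseline: clips are assigned to whichever cell the camera occupied when they were recorded.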

S8 Additional node affordance results

Fig. S4 provides more examples of affordance predictions by our model on zones (nodes) in our topological map, to supplement Fig. 6 in the main paper. For clarity, we show 8 interactions on EPIC (top panel) and EGTEA+ (bottom panel), out of a total of 120 and 75 interactions respectively.

Figure S4: Additional zone affordance prediction results. Results on EPIC (top panel) and EGTEA+ (bottom panel).